Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
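
To make the “auto-complete on steroids” idea concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers library and the small GPT-2 model as a stand-in for far larger systems; it illustrates the mechanism Marcus describes, not how ChatGPT itself is built or served.

```python
# A toy illustration of next-token prediction ("auto-complete on steroids").
# Assumes: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The students opened their"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # a score for every vocabulary token at every position

next_token_scores = logits[0, -1]     # scores for whichever token would come next
top5 = torch.topk(next_token_scores, k=5)

print(f"Most plausible continuations of {prompt!r}:")
for token_id, score in zip(top5.indices, top5.values):
    print(f"  {tokenizer.decode([int(token_id)])!r}  (score: {score.item():.1f})")
```

Nothing in that loop “understands” the prompt; the model simply ranks which token is statistically most plausible next, which is exactly Marcus’s point.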


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 

How AI will revolutionize the practice of law — from brookings.edu by John Villasenor

Artificial intelligence (AI) is poised to fundamentally reshape the practice of law. 

Excerpt:

BROADENING ACCESS TO LEGAL SERVICES
AI also has the potential to dramatically broaden access to legal services, which are prohibitively expensive for many individuals and small businesses. As the Center for American Progress has written, “[p]romoting equal, meaningful access to legal representation in the U.S. justice system is critical to ending poverty, combating discrimination, and creating opportunity.”

AI will make it much less costly to initiate and pursue litigation. For instance, it is now possible with one click to automatically generate a 1000-word lawsuit against robocallers. More generally, drafting a well-written complaint will require more than a single click, but in some scenarios, not much more. These changes will make it much easier for law firms to expand services to lower-income clients.

 

NVIDIA’s GTC keynote was held on March 21, 2023.

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 

This AR Art App Helps You Paint Giant Murals — from vrscout.com by Kyle Melnick


Here’s another interesting item along the lines of emerging technologies:

AR-Powered Flashcards Offer A Fresh Spin On Learning — from vrscout.com by Kyle Melnick

Undergraduates Justin Nappi and Sudiksha Mallick developed SmartCards -- a new type of AR-powered flashcard

Excerpt:

Each SmartCard features a special marker that, when scanned with a tablet, unlocks informative virtual content students can interact with using basic hand gestures and buttons. According to its developers, Justin Nappi and Sudiksha Mallick, SmartCards can be especially useful for neurodivergent students, including those with attention-deficit/hyperactivity disorder (ADHD), autism, or dyslexia.

 

How Duolingo’s AI Learns What You Need to Learn — from spectrum.ieee.org by Klinton Bicknell, Claire Brust, and Burr Settles
The AI that powers the language-learning app today could disrupt education tomorrow

Excerpt:

It’s lunchtime when your phone pings you with a green owl who cheerily reminds you to “Keep Duo Happy!” It’s a nudge from Duolingo, the popular language-learning app, whose algorithms know you’re most likely to do your 5 minutes of Spanish practice at this time of day. The app chooses its notification words based on what has worked for you in the past and the specifics of your recent achievements, adding a dash of attention-catching novelty. When you open the app, the lesson that’s queued up is calibrated for your skill level, and it includes a review of some words and concepts you flubbed during your last session.

The AI systems we continue to refine are necessary to scale the learning experience beyond the more than 50 million active learners who currently complete about 1 billion exercises per day on the platform.

Although Duolingo is known as a language-learning app, the company’s ambitions go further. We recently launched apps covering childhood literacy and third-grade mathematics, and these expansions are just the beginning. We hope that anyone who wants help with academic learning will one day be able to turn to the friendly green owl in their pocket who hoots at them, “Ready for your daily lesson?”
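
The article describes, among other things, how Duolingo models when a learner is about to forget a word. One published approach from the Duolingo team, called half-life regression, treats recall probability as halving each time one “half-life” passes since the last practice. Below is a toy sketch of that idea; the feature weights are invented for illustration and are not Duolingo’s trained parameters.

```python
def recall_probability(days_since_practice: float, half_life_days: float) -> float:
    # Core half-life idea: estimated recall drops to 50% once one half-life has elapsed.
    return 2.0 ** (-days_since_practice / half_life_days)

def estimate_half_life(times_seen: int, times_correct: int, times_wrong: int) -> float:
    # Toy linear model in log space; these weights are invented for illustration,
    # not Duolingo's trained parameters.
    theta = 1.0 + 0.10 * times_seen + 0.50 * times_correct - 0.40 * times_wrong
    return 2.0 ** theta

h = estimate_half_life(times_seen=6, times_correct=5, times_wrong=1)
print(f"Estimated half-life: {h:.1f} days")
for days in (1, 3, 7):
    print(f"  recall after {days} day(s): {recall_probability(days, h):.0%}")
```

A scheduler built on estimates like these can queue up the words a learner is most at risk of forgetting, which is what the “calibrated for your skill level” lesson in the excerpt is doing at far larger scale.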


Also relevant/see:

GPT-4 deepens the conversation on Duolingo

Duolingo turned to OpenAI’s GPT-4 to advance the product with two new features, offered in a new subscription tier called Duolingo Max: Role Play, an AI conversation partner, and Explain My Answer, which breaks down the rules when you make a mistake.

“We wanted AI-powered features that were deeply integrated into the app and leveraged the gamified aspect of Duolingo that our learners love,” says Bodge.
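
Duolingo has not published its implementation, but a Role Play-style feature maps naturally onto the GPT-4 chat API: a system message sets the scenario and difficulty, and each learner turn is appended to the conversation. A minimal sketch using the 2023-era openai Python package follows; the scenario, prompt, and key are placeholders, not Duolingo’s.

```python
# Assumes: pip install openai==0.27.*  (the 2023-era ChatCompletion interface)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

conversation = [
    {"role": "system",
     "content": ("You are a friendly barista in Madrid. Speak only beginner-level Spanish, "
                 "keep replies to one or two short sentences, and gently correct mistakes.")},
    {"role": "user", "content": "Hola, quiero un café con leche, por favor."},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=conversation)
print(response["choices"][0]["message"]["content"])

# Each new learner message (and the model's reply) gets appended to `conversation`
# so the model keeps the full context of the role play.
```

Explain My Answer could work along similar lines, with a system prompt asking the model to break down the grammar rule behind a specific mistake.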


Also relevant/see:

The following is a quote from Donald Clark’s posting on LinkedIn.com today:

The whole idea of AI as a useful teacher is here. Honestly it’s astounding. They have provided a Socratic approach to an algebra problem that is totally on point. Most people learn in the absence of a teacher or lecturer. They need constant scaffolding, someone to help them move forward, with feedback. This changes our whole relationship with what we need to know, and how we get to know it. Its reasoning ability is also off the scale.

We now have human teachers and human learners, but also AI teachers and AI that learns. It used to be a dyad; it is now a tetrad – that is the basis of the new pedAIgogy.

Personalised, tutor-led learning, in any subject, anywhere, at any time for anyone. That has suddenly become real.

Also relevant/see:

Introducing Duolingo Max, a learning experience powered by GPT-4 — from blog.duolingo.com

Excerpts:

We believe that AI and education make a great duo, and we’ve leveraged AI to help us deliver highly personalized language lessons, affordable and accessible English proficiency testing, and more. Our mission to make high-quality education available to everyone in the world is made possible by advanced AI technology.

Explain My Answer offers learners the chance to learn more about their response in a lesson (whether their answer was correct or incorrect!)

Roleplay allows learners to practice real-world conversation skills with world characters in the app.

 

ChatGPT as a teaching tool, not a cheating tool — from timeshighereducation.com by Jennifer Rose
How to use ChatGPT as a tool to spur students’ inner feedback and thus aid their learning and skills development

Excerpt:

Use ChatGPT to spur students’ inner feedback
One way that ChatGPT answers can be used in class is by asking students to compare what they have written with a ChatGPT answer. This draws on David Nicol’s work on making inner feedback explicit and using comparative judgement. His work demonstrates that, in writing down answers to comparative questions, students can produce high-quality feedback for themselves that is instant and actionable. Applying this to a ChatGPT answer, the following questions could be used:

  • Which is better, the ChatGPT response or yours? Why?
  • What two points can you learn from the ChatGPT response that will help you improve your work?
  • What can you add from your answer to improve the ChatGPT answer?
  • How could the assignment question set be improved to allow the student to demonstrate higher-order skills such as critical thinking?
  • How can you use what you have learned to stay ahead of AI and produce higher-quality work than ChatGPT?
 

See also the GPT-4 research announcement: openai.com/research/gpt-4


Also relevant/see:

See the recording from the GPT-4 Developer Demo


About GPT-4
GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities.

You can learn more through:

  • Overview page of GPT-4 and what early customers have built on top of the model.
  • Blog post with details on the model’s capabilities and limitations, including eval results.

From DSC:
I do hope that the people building all of this are taking enough time to ask, “What might humans do with these emerging technologies — both positively AND negatively?” And then put some guard rails around things.


Also relevant/see:

 

Exploring generative AI and the implications for universities — from universityworldnews.com

Excerpt:

This is part of a weekly University World News special report series on ‘AI and higher education’. The focus is on how universities are engaging with ChatGPT and other generative artificial intelligence tools. The articles from academics and our journalists around the world are exploring developments and university work in AI that have implications for higher education institutions and systems, students and staff, and teaching, learning and research.

AI and higher education -- a report from University World News

 

Fostering sustainable learning ecosystems — from linkedin.com by Patrick Blessinger

Excerpt (emphasis DSC):

Learning ecosystems
As today’s global knowledge society becomes increasingly interconnected and begins to morph into a global learning society, it is likely that formal, nonformal, and informal learning will become increasingly interconnected. For instance, there has been an explosion of new self-directed e-learning platforms such as Khan Academy, Open Courseware, and YouTube, among others, that help educate billions of people around the world.

A learning ecosystem includes all the elements that contribute to a learner’s overall learning experience. The components of a learning ecosystem are numerous, including people, technology platforms, knowledge bases, culture, governance, strategy, and other internal and external elements that have an impact on learning. Therefore, moving forward, it is crucial to integrate learning across formal, nonformal, and informal learning processes and activities in a more strategic way.

Learning ecosystems -- formal, informal, and nonformal sources of learning will become more tightly integrated in the future

 

Working To Incorporate Legal Technology Into Your Practice Isn’t Just A Great Business Move – It’s Required — from abovethelaw.com by Chris Williams

Excerpt:

According to Model Rule 1.1 of the ABA Model Rules of Professional Conduct: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

In 2012, the ABA House of Delegates voted to amend Comment 8 to Model Rule 1.1 to include explicit guidance on lawyers’ use of technology.

If Model Rule 1.1 isn’t enough of a motivator to dip your feet in legal tech, maybe paying off that mortgage is. As an extra bit of motivation, it may benefit you to pin the ABA House of Delegates’ call to action on your motivation board.

Also relevant/see:

While courts still use fax machines, law firms are using AI to tailor arguments for judges — from cbc.ca by Robyn Schleihauf

Excerpt (emphasis DSC):

What is different with AI is the scale by which this knowledge is aggregated. While a lawyer who has been before a judge three or four times may have formed some opinions about them, these opinions are based on anecdotal evidence. AI can read the judge’s entire history of decision-making and spit out an argument based on what it finds. 

The common law has always used precedents, but what is being used here is different — it’s figuring out how a judge likes an argument to be framed, what language they like using, and feeding it back to them.

And because the legal system builds on itself — with judges using prior cases to determine how a decision should be made in the case before them — these AI-assisted arguments from lawyers could have the effect of further entrenching a judge’s biases in the case law, as the judge’s words are repeated verbatim in more and more decisions. This is particularly true if judges are unaware of their own biases.
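
The article does not describe any vendor’s internals, but the shift it points to (aggregating a judge’s entire body of decisions rather than relying on anecdote) can be illustrated with a few lines of standard-library Python that surface a judge’s most frequently used phrasings. The decision snippets below are invented placeholders.

```python
from collections import Counter
from itertools import islice

# Invented snippets standing in for a judge's body of past written decisions.
past_decisions = [
    "The court prefers a plain-language reading of the statute in question.",
    "A plain-language reading of the contract resolves the ambiguity here.",
    "Proportionality of the remedy is central to this court's reasoning.",
]

def trigrams(text: str):
    """Yield overlapping three-word phrases from a decision."""
    words = text.lower().replace(",", "").replace(".", "").split()
    return zip(*(islice(words, i, None) for i in range(3)))

counts = Counter(phrase for decision in past_decisions for phrase in trigrams(decision))
for phrase, count in counts.most_common(3):
    print(" ".join(phrase), "->", count)
```

A real system would pair this kind of retrieval with a language model to draft the framing, which is precisely the feedback loop the excerpt warns could further entrench a judge’s biases.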

Cutting through the noise: The impact of GPT/large language models (and what it means for legal tech vendors) — from legaltechnology.com by Caroline Hill

Excerpts:

Given that we have spent time over the past few years telling people not to overestimate the capability of AI, is this the real deal?

“Yeah, I think it’s the real thing because if you look at why legal technologies have not had the adoption rate historically, language has always been the problem,” Katz said. “Language has been hard for machines historically to work with, and language is core to law. Every road leads to a document, essentially.”

Katz says: “There are two types of things here. There are general models, GPT-1 through GPT-4, and then there are domain models, such as a legal-specific large language model.

“What we’re going to see is a large-ish, albeit not the largest, model that’s heavily domain-tailored beating a general model, in the same way that a really smart person can’t beat a super specialist. That’s where the value creation and the next generation of legal technology is going to live.”

Fresh Voices in Legal Tech with Kristen Sonday — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell with Kristen Sonday

In a brand new interview series, Dennis and Tom welcome Kristen Sonday to hear her perspectives on the latest developments in the legal tech world.

 

The Librarian: Can we prompt ChatGPT to generate reliable references? — from drphilippahardman.substack.com by Dr. Philippa Hardman

Lessons Learned

  • Always assume that ChatGPT is wrong until you prove otherwise.
  • Validate everything (and require your students to validate everything too).
  • Google Scholar is a great tool for validating ChatGPT outputs rapidly.
  • The prompt works better when you provide a subject area, e.g. visual anthropology, and then a sub-topic, e.g. film making.
  • Ignore ChatGPT’s links – validate by searching for titles & authors, not URLs.
  • Use intentional repetition, e.g. of Google Scholar, to focus ChatGPT’s attention.
  • Be aware: ChatGPT’s training data ends in 2021. You need to fill in the blanks since then.
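
Following the lessons above (validate by title and author rather than trusting ChatGPT’s links), here is a small standard-library helper that turns each suggested reference into a Google Scholar search URL for manual checking; the reference strings are invented placeholders, not real citations.

```python
import urllib.parse

# Invented placeholders for the kind of citations ChatGPT might suggest.
suggested_references = [
    "Author, A. (2015). An Introduction to Visual Anthropology. Example University Press.",
    "Author, B. (2018). Ethnographic Film Making in Practice. Example Journal, 12(3).",
]

for ref in suggested_references:
    query = urllib.parse.quote_plus(ref)
    # Check the title and authors by hand on Google Scholar rather than trusting any URL.
    print(f"{ref}\n  -> https://scholar.google.com/scholar?q={query}\n")
```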
 

ChatGPT is Everywhere — from chronicle.com by Beth McMurtrie
Love it or hate it, academics can’t ignore the already pervasive technology.

Excerpt:

Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

“Artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.”

Sarah Eaton, associate professor of education at the University of Calgary



Artificial intelligence and academic integrity, post-plagiarism — from universityworldnews.com by Sarah Elaine Eaton; with thanks to Robert Gibson out on LinkedIn for the resource

Excerpt:

The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.

Used in this way, the technology can help students learn. The text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.

When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, and not the technology, who are to blame when students choose to have someone – or something – do their homework for them.

There is a difference between using technology to help students learn or to help them cheat. The same technology can be used for both purposes.

From DSC:
These couple of sentences…

In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.

…reminded me of what’s been happening within the filmmaking world for years (e.g., in Star Wars, Jurassic Park, and many others). It’s often hard to tell what’s real and what’s been generated by a computer.
 

What Can A.I. Art Teach Us About the Real Thing? — from newyorker.com by Adam Gopnik; with thanks to Mrs. Julie Bender for this resource
The range and ease of pictorial invention offered by A.I. image generation are startling.

Excerpts:

The DALL-E 2 system, by setting images free from neat, argumentative intentions, reducing them to responses to “prompts,” reminds us that pictures exist in a different world of meaning from prose.

And the power of images lies less in their arguments than in their ambiguities. That’s why the images that DALL-E 2 makes are far more interesting than the texts that A.I. chatbots make. To be persuasive, a text demands a point; in contrast, looking at pictures, we can be fascinated by atmospheres and uncertainties.

One of the things that thinking machines have traditionally done is sharpen our thoughts about our own thinking.

And, so, “A Havanese at six pm on an East Coast beach in the style of a Winslow Homer watercolor”:

Art work by DALL-E 2 / Courtesy OpenAI

It is, as simple appreciation used to say, almost like being there, almost like her being there. Our means in art are mixed, but our motives are nearly always memorial. We want to keep time from passing and our loves alive. The mechanical collision of kinds first startles our eyes and then softens our hearts. It’s the secret system of art.

 

FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance

Excerpt:

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.

From DSC:
This doesn’t surprise me. But it’s yet another example of opaqueness involving technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.

 

‘ChatGPT Already Outperforms a lot of Junior Lawyers’: An Interview With Richard Susskind — from law.com by Laura Beveridge
For the last 20 years, the U.K. author and academic has been predicting that technology will revolutionise the legal industry. With the buzz around generative AI, will his hypothesis now be proven true?

Excerpts:

For this generation of lawyers, their mission and legacy ought to be to build the systems that replace our old ways of working, he said. Moreover, Susskind identified new work for lawyers, such as legal process analyst or legal data scientist, emerging from technological advancement.

“These are the people who will be building the systems that will be solving people’s legal problems in the future.

“The question I ask is: imagine when the underpinning large language model is GPT 8.5.”

Blue J Legal co-founder Benjamin Alarie on how AI is powering a new generation of legal tech — from canadianlawyermag.com by Tim Wilbur

Excerpts:

We founded Blue J with the idea that we should be able to bring absolute clarity to the law everywhere and on demand. The name that we give to this idea is the legal singularity. I have a book with assistant professor Abdi Aidid called The Legal Singularity coming out soon on this idea.

The book paints the picture of where we think the law will go in the next several decades. Our intuition was not widely shared when we started the book and Blue J.

Since last November, though, many lawyers and journalists have been able to play with ChatGPT and other large language models. They suddenly understand what we have been excited about for the last eight years.

Neat Trick/Tip to Add To Your Bag! — from iltanet.org by Brian Balistreri

Excerpt:

If you need an instant transcription of an audio file, Word Online now allows you to upload the file, and it will transcribe it, mark speaker changes, and provide time marks. You can use video files too; just make sure they are small, or Office will kick you out.

Generative AI Is Coming For the Lawyers — from wired.com by Chris Stokel-Walker
Large law firms are using a tool made by OpenAI to research and write legal documents. What could go wrong?

Excerpts:

The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.

“I think it is the beginning of a paradigm shift,” says Wakeling. “I think this technology is very suitable for the legal industry.”

The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.

“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”
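
As a crude illustration of the “highly standardized templates” Edwards describes, document generation can be as simple as filling named fields in a precedent clause. The clause text and field names below are invented, and real systems layer far more logic and human review on top.

```python
from string import Template

# An invented, highly simplified clause standing in for a firm's precedent bank.
clause = Template(
    "This Licence Agreement is made on $date between $licensor ('the Licensor') and "
    "$licensee ('the Licensee'). The Licensor grants the Licensee a non-exclusive "
    "licence to use $licensed_work for a term of $term_years years."
)

document = clause.substitute(
    date="22 March 2023",
    licensor="Example Publishing Ltd",
    licensee="Example University",
    licensed_work="the Licensed Materials",
    term_years=3,
)
print(document)
```

Because the inputs are this constrained, generative models have far less room to wander than they do with free-form text, which is the point Edwards is making.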

But the problems with current generations of generative AI have already started to show.

 