AI-powered results will be both highly confident and often wrong; this dangerous combination of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.
The defensibility of these AI capabilities as stand-alone companies will rely on data moats, privacy preferences for consumers and enterprises, developer ecosystems, and GTM advantages. (still brewing, but let’s discuss)
…
As I suggested in Edition 1, ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?
The return of the Socratic method, at scale and on-demand…
The art and science of prompt engineering…
The bar for teaching will rise, as traditional research for paper-writing and memorization become antiquated ways of building knowledge.
From DSC:
Check out the items below. As with most technologies, there are likely going to be plusses & minuses regarding the use of AI in digital video, communications, arts, and music.
DC: What?!? Wow. I should have seen this coming. I can see positives & negatives here. Virtual meetings could become a bit more creative/fun. But apps could be a bit scarier in some instances, such as with #telelegal.
I really hate to be that guy, but AI is going to be transformative as a teacher.
I asked AI the following:
“Plan three lessons to explain how volcanoes are formed. Each lesson needs an introductory activity, information input, a student task and a plenary.”
I pasted the Volcano Lesson 1 content in, and it generated eight slides in under a minute. It's very basic, but for bare bones deckbuilding, it's got such potential. https://t.co/F9CEp9jd75 2/3
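The tool the tweet used to turn that lesson text into slides isn't named. As a rough, hedged sketch of what "bare bones deckbuilding" from model output can look like, here is a short Python example using the python-pptx library; the lesson outline below is a hypothetical stand-in for text a chatbot might return.

```python
# A minimal sketch: turn a (hypothetical) model-generated lesson outline
# into a bare-bones slide deck with the python-pptx library.
# pip install python-pptx
from pptx import Presentation

# Stand-in for text a chatbot might return for "Volcano Lesson 1".
outline = [
    ("How Volcanoes Are Formed", ["Introductory activity: locate volcanoes on a world map"]),
    ("Information Input", ["Tectonic plates and magma", "Types of volcanoes"]),
    ("Student Task", ["Label a cross-section diagram of a volcano"]),
    ("Plenary", ["Exit ticket: explain volcano formation in two sentences"]),
]

prs = Presentation()
for title, bullets in outline:
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # "title and content" layout
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]
    for bullet in bullets[1:]:
        body.add_paragraph().text = bullet

prs.save("volcano_lesson_1.pptx")
```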
From DSC:
Check this confluence of emerging technologies out!
Natural language interfaces have truly arrived. Here’s ChatARKit: an open source demo using #chatgpt to create experiences in #arkit. How does it work? Read on. (1/) pic.twitter.com/R2pYKS5RBq
How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä The internet is increasingly awash with text written by AI software. We need new tools to detect it.
Excerpt:
This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine? …
“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.
…
“A typo in the text is actually a really good indicator that it was human-written,” she adds.
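As a toy illustration of the kind of statistical cue Ippolito describes, the sketch below simply compares how often "the" appears in a passage against a rough human baseline. The baseline and threshold are assumptions made up for the example; a real detector would combine many such signals.

```python
# Toy sketch of one statistical cue for machine-generated text:
# an unusually high relative frequency of the word "the".
# The baseline and threshold below are illustrative, not empirically tuned.
import re

def the_frequency(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count("the") / len(words)

HUMAN_BASELINE = 0.05   # rough ballpark rate of "the" in English prose (assumption)
THRESHOLD = 1.5         # flag text that exceeds 1.5x the baseline (assumption)

def looks_machine_generated(text: str) -> bool:
    return the_frequency(text) > HUMAN_BASELINE * THRESHOLD

sample = "The volcano erupted. The lava flowed down the side of the mountain."
print(the_frequency(sample), looks_machine_generated(sample))
```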
As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022:
Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?
And what does it mean for professors if the answer to those questions is “yes”?
…
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.
“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
1/Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.
And on the legal side of things:
In the legal education context, I’ve been playing around with generating fact patterns and short documents to use in exercises.
This month’s news has been overshadowed by the implosion of SBF’s FTX and the possible implosion of Elon Musk’s Twitter. All the noise doesn’t mean that important things aren’t happening. Many companies, organizations, and individuals are wrestling with the copyright implications of generative AI. Google is playing a long game: they believe that the goal isn’t to imitate art works, but to build better user interfaces for humans to collaborate with AI so they can create something new. Facebook’s AI for playing Diplomacy is an exciting new development. Diplomacy requires players to negotiate with other players, assess their mental state, and decide whether or not to honor their commitments. None of these are easy tasks for an AI. And IBM now has a 433-qubit quantum chip, an important step towards making a useful quantum processor.
Resources for Computer Science Education Week (December 5-11, 2022) — with thanks to Mark Adams for these resources
Per Mark, here are a few resources that are intended to show students how computers can become part of their outside interests as well as in their future careers.
From the 45+ #books that I’ve read in the last 2 years, here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment
Speaking of recommended books (but from a more technical perspective this time), also see:
10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team) Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders
OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.
The Minecraft AI learned to perform complicated sequences of keyboard and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.
The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained to perform tasks by watching humans do them.
…
The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically.
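OpenAI's actual VPT models are far larger, but the underlying imitation-learning step can be sketched in a few lines: train a network to predict the action taken in each video frame, where the action labels are assumed to come from the separate labeling network the article describes. The frame sizes, action vocabulary, and data below are invented for illustration.

```python
# Minimal behavior-cloning sketch (PyTorch): predict the action taken in each
# game frame. In VPT, labels come from a separately trained network that
# annotates raw gameplay video; here we just use random stand-in data.
import torch
import torch.nn as nn

NUM_ACTIONS = 16          # invented size of a discretized keyboard/mouse action space
frames = torch.randn(256, 3, 64, 64)             # stand-in for video frames
actions = torch.randint(0, NUM_ACTIONS, (256,))  # stand-in for auto-generated labels

policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
    nn.Linear(128, NUM_ACTIONS),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                # a few toy training steps
    logits = policy(frames)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At "play time" the policy maps the current frame to an action.
print(policy(frames[:1]).argmax(dim=1))
```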
“Most language learning software can help with the beginning part of learning basic vocabulary and grammar, but gaining any degree of fluency requires speaking out loud in an interactive environment,” Zwick told TechCrunch in an email interview. “To date, the only way people can get that sort of practice is through human tutors, which can also be expensive, difficult and intimidating.”
Speak’s solution is a collection of interactive speaking experiences that allow learners to practice conversing in English. Through the platform, users can hold open-ended conversations with an “AI tutor” on a range of topics while receiving feedback on their pronunciation, grammar and vocabulary.
It’s one of the top education apps in Korea on the iOS App Store, with over 15 million lessons started annually, 100,000 active subscribers and “double-digit million” annual recurring revenue.
If you last checked in on AI image makers a month ago & thought “that is a fun toy, but is far from useful…” Well, in just the last week or so two of the major AI systems updated.
You can now generate a solid image in one try. For example, “otter on a plane using wifi” 1st try: pic.twitter.com/DhiYeVMEEV
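The tweet doesn't name the systems it is referring to, but as one concrete example of one-shot prompt-to-image generation, here is a minimal sketch using the open-source Stable Diffusion model through Hugging Face's diffusers library; it assumes a CUDA GPU and the checkpoint named in the comment.

```python
# Minimal text-to-image sketch with the open-source diffusers library.
# Assumes a CUDA GPU and the Stable Diffusion v1.4 checkpoint on the Hugging Face Hub.
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # example checkpoint; others work too
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("otter on a plane using wifi").images[0]
image.save("otter_on_a_plane.png")
```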
So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts. Look, Photoshop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true about the printing press. Hundreds of years of innovation have taught us that the hard part of the creation of art is the human coming up with the ideas, not the tools that create the art.
A quick comment from DSC: Possibly, at least in some cases. But I’ve seen enough home-grown, poorly-designed graphics and logos to make me wonder if that will be the case.
How to Teach With Deep Fake Technology — from techlearning.com by Erik Ofgang Despite the scary headlines, deep fake technology can be a powerful teaching tool
Excerpt:
The very concept of teaching with deep fake technology may be unsettling to some. After all, deep fake technology, which utilizes AI and machine learning and can alter videos and animate photographs in a manner that appears realistic, has frequently been covered in a negative light. The technology can be used to violate privacy and create fake videos of real people.
However, while these potential abuses of the technology are real and concerning, that doesn’t mean we should turn a blind eye to the technology’s potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
From DSC: I’m still not sure about this one…but I’ll try to be open to the possibilities here.
Recently, we spoke with three more participants of the AI Explorations program to learn about its ongoing impact in K-12 classrooms. Here, they share how the program is helping their districts implement AI curriculum with an eye toward equity in the classroom.
A hitherto stealth legal AI startup emerged from the shadows today with news via TechCrunch that it has raised $5 million in funding led by the startup fund of OpenAI, the company that developed advanced neural network AI systems such as GPT-3 and DALL-E 2.
The startup, called Harvey, will build on the GPT-3 technology to enable lawyers to create legal documents or perform legal research by providing simple instructions using natural language.
The company was founded by Winston Weinberg, formerly an associate at law firm O’Melveny & Myers, and Gabriel Pereyra, formerly a research scientist at DeepMind and most recently a machine learning engineer at Meta AI.
A class-action lawsuit filed in a federal court in California this month takes aim at GitHub Copilot, a powerful tool that automatically writes working code when a programmer starts typing. The coder behind the suit argues that GitHub is infringing copyright because it does not provide attribution when Copilot reproduces open-source code covered by a license requiring it.
…
Programmers have, of course, always studied, learned from, and copied each other’s code. But not everyone is sure it is fair for AI to do the same, especially if AI can then churn out tons of valuable code itself, without respecting the source material’s license requirements. “As a technologist, I’m a huge fan of AI,” Butterick says. “I’m looking forward to all the possibilities of these tools. But they have to be fair to everybody.”
Whatever the outcome of the Copilot case, Villa says it could shape the destiny of other areas of generative AI. If the outcome of the Copilot case hinges on how similar AI-generated code is to its training material, there could be implications for systems that reproduce images or music that matches the style of material in their training data.
Also related to AI and art/creativity from Wired.com, see:
Picture Limitless Creativity at Your Fingertips — by Kevin Kelly Artificial intelligence can now make better art than most humans. Soon, these engines of wow will transform how we design just about everything.
Who Will Own the Art of the Future? — by Jessica Rizzo OpenAI has announced that it’s granting Dall-E users the right to commercialize their art. For now.
A group of professors at Massachusetts Institute of Technology dropped a provocative white paper in September that proposed a new kind of college that would address some of the growing public skepticism of higher education. This week, they took the next step toward bringing their vision from idea to reality.
That next step was holding a virtual forum that brought together a who’s who of college innovation leaders, including presidents of experimental colleges, professors known for novel teaching practices and critical observers of the higher education space.
The MIT professors who authored the white paper tried to make clear that even though they’re from an elite university, they do not have all the answers. Their white paper takes pains to describe itself as a draft framework and to invite input from players across the education ecosystem so they can revise and improve the plan.
The goal of this document is simply to propose some principles and ideas that we hope will lay the groundwork for the future, for an education that will be both more affordable and more effective. … Promotions and titles will be much more closely tied to educational performance—quality, commitment, outcomes, and innovation—than to research outcomes.
These are the most important AI trends, according to top AI experts — from nexxworks.com Somewhat in the shadow of the (often) overhyped metaverse and Web3 paradigms, AI seems to be developing at great speed. That’s why we asked a group of top AI experts in our network to describe what they think are the most important trends, evolutions and areas of interest of the moment in that domain.
Excerpt:
All of them have different backgrounds and areas of expertise, but some patterns still emerged in their stories, several of them mentioning ethics, the impact on the climate (both positively and negatively), the danger of overhyping, the need for transparency and explainability, interdisciplinary collaborations, robots and the many challenges that still need to be overcome.
AI in science examples:
Protein structures can be predicted using genetic data
Recognizing how climate change affects cities and regions
Analyzing astronomical data
Interpreting social history with archival data
Using satellite images to aid in conservation
Understanding complex organic chemistry
Also relevant/see:
How ‘Responsible AI’ Is Ethically Shaping Our Future — from learningsolutionsmag.com by Markus Bernhardt
Excerpt:
The PwC 2022 AI Business Survey finds that “AI success is becoming the rule, not the exception,” and, according to PwC US, published in the 2021 AI Predictions & 2021 Responsible AI Insights Report, “Responsible AI is the leading priority among industry leaders for AI applications in 2021, with emphasis on improving privacy, explainability, bias detection, and governance.”
As the founder of a technology investment firm, I’ve seen firsthand just how much AI has advanced in such a short period of time. The underlying building blocks of the technology are getting astonishingly better at an exponential rate, far outpacing our expectations. Techniques like deep learning allow us to run complex AI models to solve the most difficult problems. But while those who work in technology-centric careers are aware of AI’s explosive capabilities, the public at large is still largely unaware of the depth of AI’s potential.
Enterprise functions such as marketing, sales, finance and HR are all areas that can utilize new AI-enabled applications; these applications include providing customers with 24/7 financial guidance, predicting and assessing loan risks and collecting and analyzing client data.
Let’s explore some real-life artificial intelligence applications.
Using Artificial Intelligence for Navigation
Marketers Use Artificial Intelligence to Increase Their Efficiency
The use of Artificial Intelligence in robotics
Gaming and Artificial Intelligence
Incorporating Artificial Intelligence into Lifestyles
Artificial intelligence (AI): 7 roles to prioritize now — from enterprisersproject.com by Marc Lewis; with thanks to Mr. Stephen Downes for this resource Which artificial intelligence (AI) jobs are hottest now? Consider these seven AI/ML roles to prioritize in your organization
While these seven AI roles are critical, finding talent to fill them is difficult. AI, machine learning, and data analytics are new fields, and few people have relevant experience.
This leads us back to the fact: We are dealing with a Great Reallocation of the labor force to an AI/Machine learning, data-driven world.
3 ways AI is scaling helpful technologies worldwide — from blog.google by Jeff Dean Decades of research have led to today’s rapid progress in AI. Today, we’re announcing three new ways people are poised to benefit.
Excerpts:
Supporting 1,000 languages with AI
Empowering creators and artists with AI
Addressing climate change and health challenges with AI
Maintaining a separate category for AI is getting difficult. We’re seeing important articles about AI infiltrating security, programming, and almost everything else; even biology. That sounds like a minor point, but it’s important: AI is eating the world. What does it mean when an AI system can reconstruct what somebody wants to say from their brainwave? What does it mean when cultured brain cells can be configured to play Pong? They don’t play well, but it’s not long since that was a major achievement for AI.
The creators of Stable Diffusion have announced Harmonai, a community for building AI tools for generating music. They have released an application called Dance Diffusion.
Get Ready to Relearn How to Use the Internet — from bloomberg.com by Tyler Cowen; with thanks to Sam DeBrule for this resource Everyone knows that an AI revolution is coming, but no one seems to realize how profoundly it will change their day-to-day life.
Excerpts:
This year has brought a lot of innovation in artificial intelligence, which I have tried to keep up with, but too many people still do not appreciate the import of what is to come. I commonly hear comments such as, “Those are cool images, graphic designers will work with that,” or, “GPT-3 is cool, it will be easier to cheat on term papers.” And then they end by saying: “But it won’t change my life.”
This view is likely to be proven wrong — and soon, as AI is about to revolutionize our entire information architecture. You will have to learn how to use the internet all over again.
…
Change is coming. Consider Twitter, which I use each morning to gather information about the world. Less than two years from now, maybe I will speak into my computer, outline my topics of interest, and somebody’s version of AI will spit back to me a kind of Twitter remix, in a readable format and tailored to my needs.
The AI also will be not only responsive but active. Maybe it will tell me, “Today you really do need to read about Russia and changes in the UK government.” Or I might say, “More serendipity today, please,” and that wish would be granted.
Of course all this is just one man’s opinion. If you disagree, in a few years you will be able to ask the new AI engines what they think.
In this blog, we introduce an important natural language understanding (NLU) capability called Natural Language Assessment (NLA), and discuss how it can be helpful in the context of education. While typical NLU tasks focus on the user’s intent, NLA allows for the assessment of an answer from multiple perspectives. In situations where a user wants to know how good their answer is, NLA can offer an analysis of how close the answer is to what is expected. In situations where there may not be a “correct” answer, NLA can offer subtle insights that include topicality, relevance, verbosity, and beyond. We formulate the scope of NLA, present a practical model for carrying out topicality NLA, and showcase how NLA has been used to help job seekers practice answering interview questions with Google’s new interview prep tool, Interview Warmup.
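Google hasn't published the Interview Warmup model itself; as a rough stand-in for the topicality idea, the sketch below scores how on-topic a candidate answer is by comparing it to a reference answer with TF-IDF cosine similarity in scikit-learn. A production NLA system would rely on much richer language models and additional signals such as relevance and verbosity.

```python
# Rough illustration of "topicality" scoring: compare a candidate answer to a
# reference answer with TF-IDF cosine similarity. This is NOT Google's NLA model,
# just a simple stand-in for the idea of grading how on-topic an answer is.
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = ("I describe the disagreement, how I listened to both sides, "
             "the compromise we reached, and what the team learned.")
answers = [
    "Two teammates disagreed about the release plan, so I set up a meeting, "
    "heard both out, and we agreed on a phased rollout.",
    "I really enjoy hiking and photography on the weekends.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([reference] + answers)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for answer, score in zip(answers, scores):
    print(f"topicality={score:.2f}  {answer[:60]}...")
```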
A startup that provides AI-powered translation is working with the National Weather Service to improve language translations of extreme weather alerts across the U.S.
When I’ve been doing this with GPT-3, a 175 billion parameter language model, it has been uncanny how much it reminds me of blogging. When I’m writing this, from March through August 2022, large language models are not yet as good at responding to my prompts as the readers of my blog. But their capacity is improving fast and the prices are dropping.
Soon everyone can have an alien intelligence in their inbox.
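For readers who haven't tried this kind of prompting themselves, here is a minimal sketch using the 2022-era openai Python library's Completion endpoint; the model name and prompt are only examples, and an API key is assumed to be set in the environment.

```python
# Minimal prompting sketch with the 2022-era openai Python library (pre-1.0 SDK).
# The model name and prompt are examples; an OPENAI_API_KEY is assumed.
# pip install "openai<1.0"
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",          # example GPT-3 model from that period
    prompt="Write a short blog-style reflection on how large language models "
           "change the way we search for and summarize information.",
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```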