Intentional Teaching — from intentionalteaching.buzzsprout.com by Derek Bruff
Rethinking Teaching in an Age of AI with James M. Lang and Michelle D. Miller
Excerpt:
In her 2022 book Remembering and Forgetting in the Age of Technology, Michelle D. Miller writes about the “moral panics” that often happen in response to new technologies. In his 2013 book Cheating Lessons: Learning from Academic Dishonesty, James M. Lang argues that the best way to reduce cheating is through better course design. What do these authors have to say about teaching in an age of generative AI tools like ChatGPT? Lots!
Governance of superintelligence — from openai.com
Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.
AI is developing rapidly enough and the dangers it may pose are clear enough that OpenAI’s leadership believes that the world needs an international regulatory body akin to that governing nuclear power — and fast. But not too fast. In a post to the company’s blog, OpenAI founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever explain that the pace of innovation in artificial intelligence is so fast that we can’t expect existing authorities to adequately rein in the technology. While there’s a certain quality of patting themselves on the back here, it’s clear to any impartial observer that the tech, most visibly in OpenAI’s explosively popular ChatGPT conversational agent, represents a unique threat as well as an invaluable asset.
OpenAI-backed robot startup beats Elon Musk’s Tesla, deploys AI-enabled robots in real world — from firstpost.com by Mehul Reuben Das; via The Rundown
A robotics startup backed by OpenAI, the makers of ChatGPT, has beaten Elon Musk’s Tesla in the humanoid robots race and has successfully deployed humanoid robots as security guards. Next, it will be deploying the robots in hospices and assisted living facilities.
From DSC: Hmmm…given the crisis of loneliness in the United States, I’m not sure that this type of thing is a good thing. But I’m sure there are those who would argue the other side of this.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins. https://t.co/t4syFUj0fL pic.twitter.com/Mw9FMpKq91
Introducing the ChatGPT app for iOS — from openai.com
The ChatGPT app syncs your conversations, supports voice input, and brings our latest model improvements to your fingertips.
Excerpt:
Since the release of ChatGPT, we’ve heard from users that they love using ChatGPT on the go. Today, we’re launching the ChatGPT app for iOS.
The ChatGPT app is free to use and syncs your history across devices. It also integrates Whisper, our open-source speech-recognition system, enabling voice input. ChatGPT Plus subscribers get exclusive access to GPT-4’s capabilities, early access to features and faster response times, all on iOS.
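For readers curious about the speech-recognition piece, the Whisper model OpenAI mentions is also available as an open-source Python package that can be run locally. Here is a minimal sketch, assuming the openai-whisper package and ffmpeg are installed; the file name and model size are illustrative:

```python
# Minimal local speech-to-text sketch with OpenAI's open-source Whisper package
# (pip install openai-whisper; requires ffmpeg on the system path).
import whisper

model = whisper.load_model("base")           # small, CPU-friendly checkpoint
result = model.transcribe("voice_note.m4a")  # returns text plus segment timings
print(result["text"])
```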
A few episodes back, we presented Tristan Harris and Aza Raskin’s talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.
The talk resonated – over 1.6 million people have viewed it on YouTube as of this episode’s release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI.
However, now that so many people have watched or listened to the talk, we’ve found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions.
The State of Voice Technology in 2023 — from deepgram.com; with thanks to The Rundown for this resource
Explore the latest insights on speech AI applications and automatic speech recognition (ASR) across a dozen industries, as seen by 400 business leaders surveyed for this report by Opus Research.
Your guide to AI: May 2023 — from nathanbenaich.substack.com by Nathan Benaich and Othmane Sebbouh
Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI research (particularly for this issue!), industry, geopolitics and startups during April 2023.
The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.
Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.
Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.
From DSC: I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.
Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.
Excerpt (emphasis DSC):
Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?
Also relevant/see:
That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.
Google is in trouble.
I got early ‘Alpha’ access to GPT-4 with browsing and ran some tests.
There’s a remarkable disconnect between how professors and administrators think students use generative AI on written work and how we actually use it. Many assume that if an essay is written with the help of ChatGPT, there will be some sort of evidence — it will have a distinctive “voice,” it won’t make very complex arguments, or it will be written in a way that AI-detection programs will pick up on. Those are dangerous misconceptions. In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own.
The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. You can hand ChatGPT a prompt and ask it for a finished product, but you’ll probably get an essay with a very general claim, middle-school-level sentence structure, and half as many words as you wanted. The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step.
From DSC: The idea of personalized storytelling is highly intriguing to me. If you write a story for someone with their name and character in it, they will likely be even more engaged with the story/content. Our daughter recently did this with a substitute teacher, whom she really wanted to thank before the teacher left (for another assignment at another school). I thought it was very creative of her.
We’re building resources to teach AI literacies for high school and college instructors and assembling them into a full curriculum that will be deployed in a course with the National Educational Equity Lab offered in Fall 2023.
AI video is getting insanely powerful.
Soon, you’ll be able to create a Hollywood-grade movie from your pocket.
Here are the most breathtaking AI-generated videos I’ve seen:
— The AI Solopreneur (@aisolopreneur) May 11, 2023
ChatGPT has changed the world.
It does lack in some areas, but my favorite use case is leveraging it to teach me things twice as fast.
Here are the 10 best prompts to learn anything faster:
Why I’m Excited About ChatGPT — from insidehighered.com by Jennie Young
Here are 10 ways ChatGPT will be a boon to first-year writing instruction, Jennie Young writes.
Excerpt:
But from my perspective as a first-year writing program director, I’m excited about how this emerging technology will help students from all kinds of educational backgrounds learn and focus on higher-order thinking skills faster. Here are 10 reasons I’m excited about ChatGPT.
stfu and take my money
This is the most impressive campaign I’ve seen from a mega brand so far using AI & StableDiffusion.
edX Debuts Two AI-Powered Learning Assistants Built on ChatGPT — from press.edx.org; with thanks to Matthew Tower for this resource
edX plugin launches in ChatGPT plugin store to give users access to content and course discovery
edX Xpert delivers AI-powered learning and customer support within the edX platform
Excerpt:
LANHAM, Md. – May 12, 2023 – edX, a leading global online learning platform from 2U, Inc. (Nasdaq: TWOU), today announced the debut of two AI-powered innovations: the new edX plugin for ChatGPT and edX Xpert, an AI-powered learning assistant on the edX platform. Both tools leverage the technology of AI research and deployment company OpenAI to deliver real-time academic support and course discovery to help learners achieve their goals.
Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.
In particular, we’re betting on four trends for AI and L&D.
Rapid content production
Personalized content
Detailed, continuous feedback
Learner-driven exploration
In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.
…
Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.
Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.
With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.
NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.
Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.
The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.
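NVIDIA’s SIGGRAPH papers are research results rather than released libraries, but readers who want a hands-on feel for text-to-image generation in general can experiment with the open-source Hugging Face diffusers library. A minimal sketch, with an illustrative model ID and prompt, assuming a CUDA GPU is available:

```python
# Generic text-to-image sketch using Hugging Face diffusers (not NVIDIA's models).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a hyperrealistic still life of glass chess pieces at sunset").images[0]
image.save("chess.png")  # writes the generated image to disk
```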
Also relevant to the item from Nvidia (above), see:
This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.
“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”
Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.
WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.
The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.
Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
***
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.
Excerpt:
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.
It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
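The article doesn’t show an implementation, but the core pattern (retrieve relevant facts, then let a model draft a suggested reply for the human agent to review) can be sketched in a few lines. This is a hypothetical illustration: the model name, prompt, and OpenAI call are stand-ins, not NVIDIA’s actual API.

```python
# Hypothetical agent-assist sketch: draft a suggested reply for a human agent.
# Uses the 2023-era openai-python 0.x client; assumes OPENAI_API_KEY is set.
import openai

def suggest_reply(conversation_history: str, knowledge_snippets: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You draft suggested replies for a human support agent. "
                        "Use only the provided knowledge snippets for facts."},
            {"role": "user",
             "content": f"Conversation so far:\n{conversation_history}\n\n"
                        f"Relevant knowledge:\n{knowledge_snippets}\n\n"
                        "Draft a short suggested reply the agent can review."},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

The human agent stays in the loop: the suggestion is only a draft, which is the point of “assist” rather than “replace.”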
From DSC: Is this type of thing going to provide a learning assistant/agent as well?
AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.
One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.
In the most recent update, Adobe is now using AI to Denoise, Enhance, and create Super Resolution versions at roughly twice the linear resolution of the original photo. Click here to read Adobe’s post, and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.
Microsoft has launched a GPT-4 enhanced Edge browser.
By integrating OpenAI’s GPT-4 technology with Microsoft Edge, you can now use ChatGPT as a copilot in your Bing browser. This delivers superior search results, generates content, and can even transform your copywriting skills (read on to find out how).
Benefits mentioned include: Better Search, Complete Answers, and Creative Spark.
The new interactive chat feature means you can get the complete answer you are looking for by refining your search by asking for more details, clarity, and ideas.
From DSC: I have to say that since the late ’90s, I haven’t been a big fan of web browsers from Microsoft. (I don’t like how Microsoft unfairly buried Netscape Navigator and the folks who had out-innovated them during that time.) As such, I don’t use Edge, so I can’t fully comment on the above article.
But I do have to say that this is the type of thing that may make me reevaluate my stance regarding Microsoft’s browsers. Integrating GPT-4 into their search/chat functionalities seems like it would be a very solid, strategic move — at least as of late April 2023.
Speaking of new items coming from Microsoft, also see:
[On 4/27/23], Microsoft Designer, Microsoft’s AI-powered design tool, launched in public preview with an expanded set of features.
Announced in October, Designer is a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. It leverages user-created content and DALL-E 2, OpenAI’s text-to-image AI, to ideate designs, with drop-downs and text boxes for further customization and personalization.
…
Designer will remain free during the preview period, Microsoft says — it’s available via the Designer website and in Microsoft’s Edge browser through the sidebar. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.
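Designer’s internals aren’t public, but the DALL·E 2 image generation it builds on is available directly through OpenAI’s API. A minimal sketch, with an illustrative prompt and size, assuming an OPENAI_API_KEY environment variable and the 2023-era openai-python 0.x client:

```python
# Minimal DALL-E image-generation sketch via OpenAI's Image API.
import openai

result = openai.Image.create(
    prompt="flat, modern birthday-party invitation with balloons and confetti",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # short-lived URL to the generated image
```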
From DSC: I don’t think all students hate AI. My guess is that a lot of them like AI and are very intrigued by it. The next generation is starting to see its potential — for good and/or for ill.
One of the comments (from the above item) said to check out the following video. I saw one (or both?) of these people on a recent 60 Minutes piece as well.
No better way for Judges to learn about both how AI could improve courts and the risks of AI (e.g., deepfakes) than to experiment with it.
Check out the AI avatars that @Judgeschlegel created: https://t.co/UqbJ2PHA09 #AI4Law #Law4AI
Most reported feeling the justice system was “unfair,” and many described a sense of “the odds being stacked against them.”
Advocates say the rising number of lawyer-free litigants is problematic. The legal system is meant to be adversarial — with strong lawyers on each side — but the high rate of self-representation creates lopsided justice, pitting an untrained individual against a professional.
AI will likely make lawyers’ jobs easier (or, at least, more interesting) for some tasks; however, the effects it may have on the legal profession could be the real legacy of the technology. Schafer pointed to its potential to improve access to justice for people who want legal representation but can’t get it for whatever reason.
This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.
No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.
Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?
The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start experimenting with them as fast as they can.
Speaking of the ASU+GSV Summit, see this posting from Michael Moe:
Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ countries, as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…
High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.
ChatGPT, Bing Chat, Google’s Bard—AI is infiltrating the lives of billions.
The 1% who understand it will run the world.
Here’s a list of key terms to jumpstart your learning:
Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.
…
The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.
From DSC: Agreed –> “Being “good at prompting” is a temporary state of affairs.” The User Interfaces that are/will be appearing will help greatly in this regard.
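As a purely illustrative sketch of that iterative, conversational workflow, here is what “work with the AI, rather than trying to issue a single command” might look like against the 2023-era OpenAI chat API. The model name and prompts are placeholders, and an OPENAI_API_KEY is assumed:

```python
# Iterative "work with the AI" sketch: keep the running conversation and
# ask for revisions instead of crafting one perfect prompt up front.
import openai  # openai-python 0.x style; assumes OPENAI_API_KEY is set

history = [{"role": "user",
            "content": "I want to write a novel. What do you need to know to help me?"}]

def ask(history):
    """Send the running conversation and append the model's reply to it."""
    reply = openai.ChatCompletion.create(model="gpt-4", messages=history)
    content = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": content})
    return content

print(ask(history))  # first pass: let the AI ask its clarifying questions
history.append({"role": "user",
                "content": "It's a mystery set in a small lakeside town. Outline five plot beats."})
print(ask(history))  # revise interactively instead of re-prompting from scratch
```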
From DSC: Bizarre…at least for me in late April of 2023:
FaceTiming live with AI… This app came across the @ElunaAI Discord and I was very impressed with its responsiveness, natural expression and language, etc…
Feels like the beginning of another massive wave in consumer AI products.
The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.
The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.
“School never was fun for me,” he says, hoping that as an educator he could change that with his students. “I wanted to make learning fun.” This ‘learning should be fun’ philosophy is at the heart of the approach he advises educators take when it comes to AI.
At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.
Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.
From DSC: Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
A New Era for Education — from linkedin.com by Amit Sevak, CEO of ETS and Timothy Knowles, President of the Carnegie Foundation for the Advancement of Teaching
Excerpt (emphasis DSC):
It’s not every day you get to announce a revolution in your sector. But today, we’re doing exactly that. Together, we are setting out to overturn 117 years of educational tradition. … The fundamental assumption [of the Carnegie Unit] is that time spent in a classroom equals learning. This formula has the virtue of simplicity. Unfortunately, a century of research tells us that it’s woefully inadequate.
From DSC: It’s more than interesting to think that the Carnegie Unit has outlived its usefulness and is breaking apart. In fact, the thought is very profound.
If that turns out to be the case, the ramifications will be enormous and we will have the opportunity to radically reinvent/rethink/redesign what our lifelong learning ecosystems will look like and provide.
So I appreciate what Amit and Timothy are saying here and I appreciate their relaying what the new paradigm might look like. It goes with the idea of using design thinking to rethink how we build/reinvent our learning ecosystems. They assert:
It’s time to change the paradigm. That’s why ETS and the Carnegie Foundation have come together to design a new future of assessment.
Whereas the Carnegie Unit measures seat time, the new paradigm will measure skills—with a focus on the ones we know are most important for success in career and in life.
Whereas the Carnegie Unit never leaves the classroom, the new paradigm will capture learning wherever it takes place—whether that is in after-school activities, during a work-experience placement, in an internship, on an apprenticeship, and so on.
Whereas the Carnegie Unit offers only one data point—pass or fail—the new paradigm will generate insights throughout the learning process, the better to guide students, families, educators, and policymakers.
I could see this type of information being funneled into peoples’ cloud-based learner profiles — which we as individuals will own and determine who else can access them. I diagrammed this back in January of 2017 using blockchain as the underlying technology. That may or may not turn out to be the case. But the concept will still hold I think — regardless of the underlying technology(ies).
For example, we are seeing a lot more articles regarding things like Comprehensive Learner Records (CLR) or Learning and Employment Records (LER; example here), and similar items.
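To make the idea of a skills-based, learner-owned record a bit more tangible, here is a purely hypothetical sketch of what one entry in such a record might contain. The field names are illustrative only and do not follow any particular CLR/LER standard:

```python
# Hypothetical sketch of a skills-based learner-record entry (illustrative fields).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SkillEvidence:
    skill: str          # e.g., "public speaking"
    context: str        # classroom, internship, apprenticeship, after-school, etc.
    evidence_url: str   # link to an artifact or assessment result
    verified_by: str    # issuing school, employer, or assessment body
    date_earned: date

@dataclass
class LearnerRecord:
    learner_id: str                                   # owned and shared at the learner's discretion
    evidence: list[SkillEvidence] = field(default_factory=list)

record = LearnerRecord("learner-123")
record.evidence.append(SkillEvidence(
    skill="public speaking",
    context="after-school debate club",
    evidence_url="https://example.org/artifacts/debate-final",
    verified_by="Example High School",
    date_earned=date(2023, 5, 1),
))
```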
Speaking of reinventing our learning ecosystems, also see:
The NVIDIA research team just dropped a new research paper on creating high-quality short videos from text prompts. This technique uses Video Latent Diffusion Models (Video LDMs), which work efficiently without using too much computing power.
It can create 113-frame videos at 1280×2048 resolution, rendered at 24 FPS, resulting in 4.7-second clips. The team first trained the model on images, then added a time dimension to make it work with videos.
This new research is impressive. At the current pace of development, we may soon be able to generate full-length movies from just a handful of text prompts within the next few years.