From DSC:
Interesting fodder for thought, for sure. Thanks Stefan.
Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398
Photo-realistic avatars show future of Metaverse communication — from inavateonthenet.net
Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.
Zuckerberg appeared on the Lex Fridman podcast, with scans of both Fridman and Zuckerberg used to create realistic avatars instead of a live video feed. A computer model of each avatar’s face and body is put into a codec, and a headset sends an encoded version of the avatar.
The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.
From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:
What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.
Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.
That got me to thinking…
What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes?
That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.
The end result would be that the main points were properly communicated/learned/received.
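One toy way to sketch that note-comparison idea — here in Python, with simple substring matching standing in for whatever semantic matching a real tool would use (the note texts and question template below are invented purely for illustration):

```python
def missing_points(presenter_points, learner_notes):
    """Return the presenter's key points that the learner's notes never mention.

    A real tool would use semantic similarity; plain substring matching
    keeps this sketch self-contained.
    """
    notes_lower = learner_notes.lower()
    return [p for p in presenter_points if p.lower() not in notes_lower]

def leading_questions(points):
    """Turn each missed point into a Socratic-style leading question."""
    return [f"How would you explain '{p}' in your own words?" for p in points]

presenter_points = ["spaced repetition", "retrieval practice", "interleaving"]
learner_notes = "The lecture covered retrieval practice and testing ourselves often."

gaps = missing_points(presenter_points, learner_notes)
for q in leading_questions(gaps):
    print(q)
```

Even this crude version surfaces the discrepancy the AI would act on: the learner's notes cover retrieval practice but miss the other two points, so those are the ones that get turned into questions.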
What value do you offer? — from linkedin.com by Dan Fitzpatrick — The AI Educator
Excerpt (emphasis DSC):
So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:
- What value do we offer to our students?
- What value will they need to offer to the world?
- How are we preparing them to offer that value?
The answers to these questions are crucial, and they will redefine the trajectory of our education system.
We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.
5 Little-Known ChatGPT Prompts to Learn Anything Faster — from medium.com by Eva Keiffenheim
Including templates you can copy.
Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.
As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.
That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.
Midjourney AI Art for Teachers (for any kind of teacher, not just Art Teachers) — from The AI Educator on YouTube by Dan Fitzpatrick
From DSC:
This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!
Also see Dan’s
AI Generated Immersive Learning Series
What is Academic Integrity in the Era of Generative Artificial intelligence? — from silverliningforlearning.org by Chris Dede
In the new-normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.
Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (ChatGPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications in education. The third part describes ways in which large language models (LLMs), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.
Dr. Chris Dede and the Necessity of Training Students and Faculty to Improve Their Human Judgment and Work Properly with AIs — from stefanbauschard.substack.com by Stefan Bauschard
We need to stop using test-driven curriculums that train students to listen and to compete against machines, a competition they cannot win. Instead, we need to help them augment their Judgment.
The Creative Ways Teachers Are Using ChatGPT in the Classroom — from time.com by Olivia B. Waxman
Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.
…
Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.
AI in Higher Education: Aiding Students’ Academic Journey — from td.org by J. Chris Brown
Topics/sections include:
Automatic Grading and Assessment
AI-Assisted Student Support Services
Intelligent Tutoring Systems
AI Can Help Both Students and Teachers
Shockwaves & Innovations: How Nations Worldwide Are Dealing with AI in Education — from the74million.org by Robin Lake
Lake: Other countries are quickly adopting artificial intelligence in schools. Lessons from Singapore, South Korea, India, China, Finland and Japan.
I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.
Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.
AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
Generative AI and education futures — from ucl.ac.uk
Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.
Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.
To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.
Teaching Assistants that Actually Assist Instructors with Teaching — from opencontent.org by David Wiley
“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”
Assignment Makeovers in the AI Age: Reading Response Edition — from derekbruff.org by Derek Bruff
For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.
What about option two? I think one version of this would be to do away with the reading response assignment altogether.
4 Steps to Help You Plan for ChatGPT in Your Classroom — from chronicle.com by Flower Darby
Why you should understand how to teach with AI tools — even if you have no plans to actually use them.
Some items re: AI in other areas:
15 Generative AI Tools A billion+ people will be collectively using very soon. I use most of them every day — from stefanbauschard.substack.com by Stefan Bauschard
ChatGPT, Bing, Office Suite, Google Docs, Claude, Perplexity.ai, Plug-Ins, MidJourney, Pi, Runway, Bard, Bing, Synthesia, D-ID
The Future of AI in Video: a look forward — from provideocoalition.com by Iain Anderson
Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster; resource from Tom Barrett
Along these lines of Hollywood and AI, see this Tweet:
Hollywood is BROKEN.
Striking alone won’t fix it.
A whole new business model is required? pic.twitter.com/qmRyw5FFUj
— Misha (@mishadavinci) July 16, 2023
Claude 2: ChatGPT rival launches chatbot that can summarise a novel — from theguardian.com by Dan Milmo; resource from Tom Barrett
Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing
Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.
Google’s medical AI chatbot is already being tested in hospitals — from theverge.com by Wes Davis; resource via GSV
Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground
The economic potential of generative AI — from mckinsey.com; via Superhuman
DC: It should prove to be interesting & fun to watch how #AI and #XR related technologies will be integrated into games & the #gamification of #learning. https://t.co/HO2CftqNrs via @VRScout
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
On giving AI eyes and ears — from oneusefulthing.org by Ethan Mollick
AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increases their capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Ethan Mollick
5 Steps to Transforming Images into Videos Using AI Tools — from heatherbcooper.substack.com by Heather Cooper
A simple guide to layering AI tools for quick video creation
‘Nobody wins in an academic-integrity arms race’ — from chronicle.com by Ian Wilhelm
How artificial intelligence is changing the way colleges think about cheating
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI — from chronicle.com by Beth McMurtrie and Beckie Supiano
Professors scrambled to react to ChatGPT this spring — and started planning for the fall
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
GPT-4 Can Use Tools Now—That’s a Big Deal — from every.to by Dan Shipper; resource via Sam DeBrule
What “function calling” is, how it works, and what it means
Excerpt:
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT capabilities—not because it can now do every task perfectly—but because it now knows how to ask for what it wants and get it.
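The loop the excerpt describes can be sketched roughly as follows. This is a hedged approximation: the schema mirrors the 2023-era function-calling format, but the model's reply is hard-coded as a dict so the sketch runs offline — in real use that message would come back from the chat-completions API, and `get_weather` is an invented example tool, not anything OpenAI ships:

```python
import json

# One "tool" the model may request. In a real integration this could be
# external code, a database query, or another API.
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny", "temp_c": 21}  # stub data

FUNCTIONS = {"get_weather": get_weather}

# The spec advertised to the model: name, description, JSON-Schema parameters.
FUNCTION_SPECS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# In real use this message comes back from the API when the model decides a
# tool is needed; it is hard-coded here so the sketch is self-contained.
model_message = {
    "role": "assistant",
    "content": None,
    "function_call": {"name": "get_weather", "arguments": '{"city": "Seattle"}'},
}

def dispatch(message):
    """If the model asked for a function, run it; otherwise return its text."""
    call = message.get("function_call")
    if call is None:
        return message["content"]          # ordinary text reply
    fn = FUNCTIONS[call["name"]]           # look up the requested tool
    args = json.loads(call["arguments"])   # arguments arrive as a JSON string
    return fn(**args)                      # result goes back to the model next turn

print(dispatch(model_message))
```

The key point from the article is visible in the sketch: the model never executes anything itself — it only *asks*, and your code decides how (and whether) to fulfill the request before feeding the result back.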
How ChatGPT can help disrupt assessment overload — from timeshighereducation.com by David Carless
Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
The full 12 uses are here: https://edgeoflearning.com/your-new-teaching-superpower-ai-tools/
The AI Tools in Education Database — from kiwi-path-612.notion.site by EdTech Insiders
Excerpt (emphasis DSC):
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
Time for Class 2023: Study finds students are earlier adopters of generative AI tools than faculty, and a majority (69%) of learners prefer hybrid, blended, or online course formats — from globenewswire.com by Tyton Partners
AI Could Prevent Hiring Bias — Unless It Makes It Worse — from nerdwallet.com by Anna Helhoski
Advocates say AI can eliminate human biases in hiring. Skeptics point out that AI tools are trained by … humans.
Excerpt:
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
Also related to AI in hiring, see:
4 in 10 Companies Will Be Using AI Interviews by 2024 — from resumebuilder.com
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
- 43% of companies already have or plan to adopt AI interviews by 2024
- Two-thirds of this group believe AI interviews will increase hiring efficiency
- 15% say that AI will be used to make decisions on candidates without any human input
- More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com
Sam Altman, CEO & Co-Founder, OpenAI discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
PowerSchool Announces Collaboration with Microsoft Azure OpenAI Service to Provide Personalized Learning at Scale in K-12 Education — from powerschool.com
Large-scale language models integrated within PowerSchool Performance Matters and PowerSchool LearningNav products will empower educators in delivering transformative personalized learning pathways
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…
FETC 2023 Virtual Roundtable: How AI Will Transform K-12 Education
AI could be the great equalizer!
Holly Clark
We Might Finally Get AI That “Remembers” Us — from theneurondaily.com by Noah Edelman & Pete Huang
Excerpt:
Why it matters: The best AI assistants will be the ones that require the least prompting. They’ll get to know who you are, what you need, and your modus operandi. Profiles are a good starting point, but we believe the game-changer will be larger context windows (that’s nerd-speak for the amount of context ChatGPT can handle).
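A context window is simply a hard budget on how much text the model can see at once, so any "memory" tooling has to trim conversation history to fit. A rough sketch of that trimming, in Python (the four-characters-per-token estimate and the budget figure are illustrative conventions, not any model's real numbers):

```python
def rough_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages, budget_tokens):
    """Keep the most recent messages that fit inside the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):         # walk from newest to oldest
        cost = rough_tokens(msg)
        if used + cost > budget_tokens:
            break                          # older context no longer fits
        kept.append(msg)
        used += cost
    return list(reversed(kept))            # restore chronological order

history = ["first question", "a long detailed answer " * 10, "follow-up question"]
print(trim_history(history, budget_tokens=60))
```

With a tight budget, only the newest message survives — which is exactly why larger context windows matter: more of who you are and what you've said stays inside the window instead of being trimmed away.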
From DSC:
And how about taking this a step further and remembering — or being able to access — our constantly updated Cloud-Based Learning Profiles?
AI-driven Legal Apprenticeships — from thebrainyacts.beehiiv.com by Josh Kubicki
Excerpts:
My hypothesis and research suggest that as bar associations and the ABA begin to recognize the on-going systemic issues of high-cost legal education, growing legal deserts (where no lawyer serves a given population), on-going and pervasive access to justice issues, and a public that is already weary of the legal system – alternative options that are already in play might become more supported.
What might that look like?
The combination of AI-assisted education with traditional legal apprenticeships has the potential to create a rich, flexible, and engaging learning environment. Here are three scenarios that might illustrate what such a combination could look like:
- Scenario One – Personalized Curriculum Development
- Scenario Two – On-Demand Tutoring and Mentoring
- Scenario Three – AI-assisted Peer Networks and Collaborative Learning:
Why Companies Are Vastly Underprepared For The Risks Posed By AI — from forbes.com
Accuracy, bias, security, culture, and trust are some of the risks involved
Excerpt:
We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”
A survey carried out by analysts Baker McKenzie concluded that many C-level leaders are over-confident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when used to make HR decisions.
AI & lawyer training: How law firms can embrace hybrid learning & development — thomsonreuters.com
A big part of law firms’ successful adaptation to the increased use of ChatGPT and other forms of generative AI, may depend upon how firmly they embrace online learning & development tools designed for hybrid work environments
Excerpt:
As law firms move forward in using of advanced artificial intelligence such as ChatGPT and other forms of generative AI, their success may hinge upon how they approach lawyer training and development and what tools they enlist for the process.
One of the tools that some law firms use to deliver a new, multi-modal learning environment is an online, video-based learning platform, Hotshot, that delivers more than 250 on-demand courses on corporate, litigation, and business skills.
Ian Nelson, co-founder of Hotshot, says he has seen a dramatic change in how law firms are approaching learning & development (L&D) in the decade or so that Hotshot has been active. He believes the biggest change is that 10 years ago, firms hadn’t yet embraced the need to focus on training and development.
From DSC:
Heads up law schools. Are you seeing/hearing this!?
- Are we moving more towards a lifelong learning model within law schools?
- If not, shouldn’t we be doing that?
- Are LLM programs expanding quickly enough? Is more needed?
Legal tech and innovation: 3 ways AI supports the evolution of legal ops — from lexology.com
Excerpts:
- Simplified legal spend analysis
- Faster contract review
- Streamlined document management
Last night, Jensen Huang of NVIDIA gave his very first live keynote in 4-years.
The most show-stopping moment from the event was when he showed off the real-time AI in video games. A human speaks, the NPC responds, in real time and the dialogue was generated with AI on the fly. pic.twitter.com/TDoUM1zSiy
— Matt Wolfe (@mreflow) May 29, 2023
From DSC:
And how long before that type of interactivity is embedded into learning-related applications/games?!
Nvidia ($960B) is now worth more than:
– Facebook ($665B)
– Tesla ($618B)
– Netflix ($168B)

This is a company that started 30 years ago at Denny’s and was for decades only a video game chip maker.
Here’s why Nvidia is surging: pic.twitter.com/km7pECk4Kw
— Peter Yang (@petergyang) May 27, 2023
AI in Learning: The Impact of ChatGPT on L&D & Workflow Learning — from linkedin.com; this event by Bob Mosher features his conversation with Donald Clark
The future is already here.
The 1% who understand it will run the world.
Here’s a list of 24 top resources to get up to speed (for free):
— Misha (@mishadavinci) May 28, 2023
Bill Gates says AI is poised to destroy search engines and Amazon — from futurism.com by Victor Tangermann
Who will win the AI [competition]? (DSC: I substituted the word competition here, as that’s what it is. It’s not a war, it’s a part of America’s way of doing business.)
“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said during a Goldman Sachs event on AI in San Francisco this week, as quoted by CNBC.
These AI assistants could “read the stuff you don’t have time to read,” he said, allowing users to get to information without having to use a search engine like Google.
EdX launches ChatGPT-powered plugin, learning assistant — from edscoop.com
The online learning firm edX introduced two new tools powered by ChatGPT, the “first of many innovations” in generative AI for the platform.
The online learning platform edX introduced two new tools on Friday based on OpenAI’s ChatGPT technology: an edX plugin for ChatGPT and a learning assistant embedded in the edX platform, called Xpert.
According to the company, its plugin will enable ChatGPT Plus subscribers to discover educational programs and explore learning content such as videos and quizzes across edX’s library of 4,200 courses.
Bing is now the default search for ChatGPT — from theverge.com by Tom Warren; via superhuman.beehiiv.com
The close partnership between Microsoft and OpenAI leads to plug-in interoperability and search defaults.
Excerpt:
OpenAI will start using Bing as the default search experience for ChatGPT. The new search functionality will be rolling out to ChatGPT Plus users today and will be enabled for all free ChatGPT users soon through a plug-in in ChatGPT.
How ChatGPT Could Help or Hurt Students With Disabilities — from chronicle.com by Beth McMurtrie
Excerpt:
- Students with mobility challenges may find it easier to use generative AI tools — such as ChatGPT or Elicit — to help them conduct research if that means they can avoid a trip to the library.
- Students who have trouble navigating conversations — such as those along the autism spectrum — could use these tools for “social scripting.” In that scenario, they might ask ChatGPT to give them three ways to start a conversation with classmates about a group project.
- Students who have trouble organizing their thoughts might benefit from asking a generative AI tool to suggest an opening paragraph for an essay they’re working on — not to plagiarize, but to help them get over “the terror of the blank page,” says Karen Costa, a faculty-development facilitator who, among other things, focuses on teaching, learning, and living with ADHD. “AI can help build momentum.”
- ChatGPT is good at productive repetition. That is a practice most teachers use anyway to reinforce learning. But AI can take that to the next level by allowing students who have trouble processing information to repeatedly generate examples, definitions, questions, and scenarios of concepts they are learning.
It’s not all on you to figure this out and have all the answers. Partner with your students and explore this together.
A new antibiotic, discovered with artificial intelligence, may defeat a dangerous superbug — from edition.cnn.com by Brenda Goodman
8 YouTube Channels to Learn AI — from techthatmatters.beehiiv.com by Harsh Makadia
- The AI Advantage (link)
- Jason West (link)
- TheAIGRID (link)
- Prompt Engineering (link)
- Matt Wolfe (link)
- Two-Minute Papers (link)
- Brett Malinowski (link)
- 10X Income (link)
Artificial Intelligence and the Future of Teaching and Learning | Insights and Recommendations — with thanks to Robert Gibson on LinkedIn for this resource
Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence — from bbc.com by James Clayton
Excerpt:
The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.
Also related to that item, see:
Why artificial intelligence developers say regulation is needed to keep AI in check — from pbs.org
Excerpt:
Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins. https://t.co/t4syFUj0fL pic.twitter.com/Mw9FMpKq91
— OpenAI (@OpenAI) May 12, 2023
Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.
From DSC:
I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.
Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.
Excerpt (emphasis DSC):
Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?
Also relevant/see:
That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.
Google is in trouble.
I got early ‘Alpha’ access to GPT-4 with browsing and ran some tests.
Here are 8 crazy things I found: pic.twitter.com/ndxKGSqlL0
— Rowan Cheung (@rowancheung) May 7, 2023
What Higher Ed Gets Wrong About AI Chatbots — From the Student Perspective — from edsurge.com by Mary Jo Madda (Columnist)
Work Shift: How AI Might Upend Pay — from bloomberg.com by Jo Constantz
Excerpt:
This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.
“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”
Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.
US Supreme Court rejects computer scientist’s lawsuit over AI-generated inventions — from reuters.com by Blake Brittain
Excerpt:
WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.
The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.
Deep learning pioneer Geoffrey Hinton has quit Google — from technologyreview.com by Will Douglas Heaven
Hinton will be speaking at EmTech Digital on Wednesday.
Excerpt:
Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
***
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.
Excerpt:
It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
From DSC:
Is this type of thing going to provide a learning assistant/agent as well?
A chatbot that asks questions could help you spot when it makes no sense — from technologyreview.com by Melissa Heikkilä
Engaging our critical thinking is one way to stop getting fooled by lying AI.
Excerpt:
AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.
One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.
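The study's core idea — reframing the chatbot's assertions as claims plus questions, so users evaluate rather than passively accept — can be sketched as a simple output wrapper. This is purely illustrative (the function name and question wording are mine, not from the study); a real system would do this at the prompt or model level:

```python
# Sketch: reframe a model's draft answer as a claim plus reflection
# questions, so the reader engages critically instead of accepting it.
# All names and question wording here are illustrative, not from the study.

def socratic_wrap(draft_answer: str, topic: str) -> str:
    """Turn a chatbot assertion into a claim followed by prompts to verify it."""
    questions = [
        f"What evidence would confirm or contradict this claim about {topic}?",
        "Does the reasoning above stay consistent from start to finish?",
        "Which part of this answer would you want a second source for?",
    ]
    lines = [f"Claim: {draft_answer}", ""]
    lines += [f"- {q}" for q in questions]
    return "\n".join(lines)

print(socratic_wrap("Bard was released in 2022.", "AI chatbots"))
```

The point of the exercise: the (deliberately wrong) claim above is exactly the kind of confident falsehood the article describes, and the questions nudge the reader to catch it.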
Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images — from stability.ai
New AI Powered Denoise in PhotoShop — from jeadigitalmedia.org
In the most recent update, Adobe is now using AI to Denoise, Enhance and create Super Resolution or 2x the file size of the original photo. Click here to read Adobe’s post and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.
The Edtech Insiders’ Rundown of ASU GSV 2023 — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Excerpt:
A few current categories of AI in Edtech particularly jump out:
- Teacher Productivity and Joy: Tools to make educators’ lives easier (and more fun?) by removing some of the more rote tasks of teaching, like lesson planning (we counted at least 8 different tools for lesson planning), resource curation and data collection.
- Personalization and Learning Delivery: Tools to tailor instruction to the particular interests, learning preferences and preferred media consumption of students. This includes tools that convert text to video, video to text, text to comic books, Youtube to notes, and many more.
- Study and Course Creation Tools: Tools for learners to automatically make quizzes, flashcards, notes or summaries of material, or even to automatically create full courses from a search term.
- AI Tutors, Chatbots and Teachers: There will be no shortage of conversational AI “copilots” (which may take many guises) to support students in almost any learning context. Many Edtech companies launched their own during the conference. Possible differentiators here could be personality, safety, privacy, access to a proprietary or specific data set, or bots built on proprietary LLMs.
- Simplifying Complex Processes: One of the most inspiring conversations of the conference for me was with Tiffany Green, founder of Uprooted Academy, about how AI can and should be used to remove bureaucratic barriers to college for underrepresented students (for example, used to autofill FAFSA forms, College Applications, to search for schools and access materials, etc). This is not the only complex bureaucratic process in education.
- Educational LLMs: The race is on to create usable large language models for education that are safe, private, appropriate and classroom-ready. Merlyn Mind is working on this, and companies that make LLMs are sprouting up in other sectors…
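The "study and course creation" category above typically works by driving an LLM with the source material, but the core transformation — text in, study artifacts out — can be illustrated locally with a naive cloze-deletion flashcard generator (a crude stand-in for what the LLM-based tools do, not how any named product works):

```python
import re

def cloze_cards(text: str) -> list[tuple[str, str]]:
    """Turn each sentence into a (question, answer) flashcard by blanking
    out its longest word -- a crude stand-in for LLM-generated quiz items."""
    cards = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z]{4,}", sentence)  # candidate terms
        if not words:
            continue
        answer = max(words, key=len)                   # blank the longest word
        question = sentence.replace(answer, "_____", 1)
        cards.append((question, answer))
    return cards

cards = cloze_cards("Tutoring works best with trusted relationships. "
                    "Personalization tailors instruction to each student.")
for question, answer in cards:
    print(question, "->", answer)
```

An LLM version replaces the "longest word" heuristic with a prompt like "write a quiz question testing the key idea of this sentence," but the pipeline shape is the same.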
EdTech Is Going Crazy For AI — from joshbersin.com by Josh Bersin
Excerpts:
This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.
No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.
Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?
The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start to experiment with them as fast as they can.
Speaking of the ASU+GSV Summit, see this posting from Michael Moe:
EIEIO…Brave New World
By: Michael Moe, CFA, Brent Peus, Owen Ritz
Excerpt:
Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ countries as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…
***
Also see:
Imagining what’s possible in lifelong learning: Six insights from Stanford scholars at ASU+GSV — from acceleratelearning.stanford.edu by Isabel Sacks
Excerpt:
High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.
ChatGPT, Bing Chat, Google’s Bard—AI is infiltrating the lives of billions.
The 1% who understand it will run the world.
Here’s a list of key terms to jumpstart your learning:
— Misha (@mishadavinci) April 23, 2023
A guide to prompting AI (for what it is worth) — from oneusefulthing.org by Ethan Mollick
A little bit of magic, but mostly just practice
Excerpt (emphasis DSC):
Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.
…
The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.
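Mollick's "work with the AI interactively" advice amounts to keeping the full conversation and appending refinement requests, rather than re-crafting one perfect prompt. A minimal sketch of that loop, with `call_model` as a placeholder (it only echoes the latest request; a real version would call an actual chat-model API):

```python
# Sketch of interactive refinement: one initial request, then follow-up
# tweaks, with the whole history preserved so each turn has context.
# `call_model` is a stand-in, not a real API.

def call_model(messages: list[dict]) -> str:
    # Placeholder for a chat-model API call; echoes the latest request.
    return f"(draft responding to: {messages[-1]['content']})"

def refine(initial_request: str, refinements: list[str]) -> list[dict]:
    """Ask once, then iterate, accumulating the conversation each turn."""
    messages = [{"role": "user", "content": initial_request}]
    messages.append({"role": "assistant", "content": call_model(messages)})
    for tweak in refinements:
        messages.append({"role": "user", "content": tweak})
        messages.append({"role": "assistant", "content": call_model(messages)})
    return messages

history = refine("Outline a novel set in 2043.",
                 ["Make the tone darker.", "Shorten it to five chapters."])
```

The design point matches the excerpt: each tweak ("make it darker," "shorten it") rides on top of everything said so far, which is why iterating beats a single do-everything prompt.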
From DSC:
Agreed –> “Being “good at prompting” is a temporary state of affairs.” The User Interfaces that are/will be appearing will help greatly in this regard.
From DSC:
Bizarre…at least for me in late April of 2023:
FaceTiming live with AI… This app came across the @ElunaAI Discord and I was very impressed with its responsiveness, natural expression and language, etc…
Feels like the beginning of another massive wave in consumer AI products.
…who’s seen the movie HER? pic.twitter.com/By3dsew91Z
— Roberto Nickson (@rpnickson) April 26, 2023
Excerpt from Lore Issue #28: Drake, Grimes, and The Future of AI Music — from lore.com
Here’s a summary of what you need to know:
- The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
- Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
- Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
- The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.
The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.
“School never was fun for me,” he says, hoping that as an educator he could change that with his students. “I wanted to make learning fun.” This ‘learning should be fun’ philosophy is at the heart of the approach he advises educators take when it comes to AI.
Coursera Adds ChatGPT-Powered Learning Tools — from campustechnology.com by Kate Lucariello
Excerpt:
At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.
Coursera Coach will give learners a ChatGPT-powered virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.
From DSC:
Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.