AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.

ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both of which aim to make it sound more human. The version, which should be available to free ChatGPT users in the coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.

Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.

There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.

 

 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.



Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI across its entire ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year”. If they ship and perform like the demos, Google will gain a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models available. Google also launched a new Gemini 1.5 Flash model, which is faster and much cheaper.
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
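
For readers curious what this multimodal input looks like from a developer’s perspective, here is a minimal sketch that sends text plus an image to GPT-4o through OpenAI’s Python client. The prompt and image URL are placeholders, and the real-time audio capabilities described above are not shown here.

```python
# Minimal sketch: a text + image request to GPT-4o via OpenAI's Python client.
# Assumes OPENAI_API_KEY is set in the environment; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain the concept in this diagram as if tutoring a first-year student.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/diagram.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The same request pattern underpins the tutoring and role-playing scenarios mentioned earlier: swap the image for a photo of a worked problem, and the text for the persona or task you want the model to adopt.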





From DSC:
I like the assistive tech angle here:





 

 


Information Age vs Generation Age Technologies for Learning — from opencontent.org by David Wiley

Remember (emphasis DSC)

  • the internet eliminated time and place as barriers to education, and
  • generative AI eliminates access to expertise as a barrier to education.

Just as instructional designs had to be updated to account for all the changes in affordances of online learning, they will need to be dramatically updated again to account for the new affordances of generative AI.


The Curious Educator’s Guide to AI | Strategies and Exercises for Meaningful Use in Higher Ed  — from ecampusontario.pressbooks.pub by Kyle Mackie and Erin Aspenlieder; via Stephen Downes

This guide is designed to help educators and researchers better understand the evolving role of Artificial Intelligence (AI) in higher education. This openly-licensed resource contains strategies and exercises to help foster an understanding of AI’s potential benefits and challenges. We start with a foundational approach, providing you with prompts on aligning AI with your curiosities and goals.

The middle section of this guide encourages you to explore AI tools and offers some insights into potential applications in teaching and research. Along with exposure to the tools, we’ll discuss when and how to effectively build AI into your practice.

The final section of this guide includes strategies for evaluating and reflecting on your use of AI. Throughout, we aim to promote use that is effective, responsible, and aligned with your educational objectives. We hope this resource will be a helpful guide in making informed and strategic decisions about using AI-powered tools to enhance teaching, learning, and research.


Annual Provosts’ Survey Shows Need for AI Policies, Worries Over Campus Speech — from insidehighered.com by Ryan Quinn
Many institutions are not yet prepared to help their faculty members and students navigate artificial intelligence. That’s just one of multiple findings from Inside Higher Ed’s annual survey of chief academic officers.

Only about one in seven provosts said their colleges or universities had reviewed the curriculum to ensure it will prepare students for AI in their careers. Thuswaldner said that number needs to rise. “AI is here to stay, and we cannot put our heads in the sand,” he said. “Our world will be completely dominated by AI and, at this point, we ain’t seen nothing yet.”


Is GenAI in education more of a Blackberry or iPhone? — from futureofbeinghuman.com by Andrew Maynard
There’s been a rush to incorporate generative AI into every aspect of education, from K-12 to university courses. But is the technology mature enough to support the tools that rely on it?

In other words, it’s going to mean investing in concepts, not products.

This, to me, is at the heart of an “iPhone mindset” as opposed to a “Blackberry mindset” when it comes to AI in education — an approach that avoids hard wiring in constantly changing technologies, and that builds experimentation and innovation into the very DNA of learning.

For all my concerns here, though, maybe there is something to being inspired by the Blackberry/iPhone analogy — not as a playbook for developing and using AI in education, but as a mindset that embraces innovation while avoiding becoming locked into apps that are detrimentally unreliable and that ultimately lead to dead ends.


Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays — from sciencedirect.com by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller

Highlights

  • Randomized-controlled experiments investigating novice and experienced teachers’ ability to identify AI-generated texts.
  • Generative AI can simulate student essay writing in a way that is undetectable for teachers.
  • Teachers are overconfident in their source identification.
  • AI-generated essays tend to be assessed more positively than student-written texts.

Can Using a Grammar Checker Set Off AI-Detection Software? — from edsurge.com by Jeffrey R. Young
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?


Use artificial intelligence to get your students thinking critically — from timeshighereducation.com by Urbi Ghosh
When crafting online courses, teaching critical thinking skills is crucial. Urbi Ghosh shows how generative AI can shape the way educators approach this.


ChatGPT shaming is a thing – and it shouldn’t be — from futureofbeinghuman.com by Andrew Maynard
There’s a growing tension between early and creative adopters of text based generative AI and those who equate its use with cheating. And when this leads to shaming, it’s a problem.

Excerpt (emphasis DSC):

This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.

From DSC:
Yes…that sounds like how many faculty members viewed online learning, even though they had never taught online before.

 

Are Colleges Ready For an Online-Education World Without OPMs? — from edsurge.com by Robert Ubell (Columnist)
Online Program Management companies have helped hundreds of colleges build online degree programs, but the sector is showing signs of strain.

For more than 15 years, a group of companies known as Online Program Management providers, or OPMs, have been helping colleges build online degree programs. And most of them have relied on an unusual arrangement — where the companies put up the financial backing to help colleges launch programs in exchange for a large portion of tuition revenue.

As a longtime administrator of online programs at colleges, I have mixed feelings about the idea of shutting down the model. And the question boils down to this: Are colleges ready for a world without OPMs?


Guy Raz on Podcasts and Passion: Audio’s Ability to Spark Learning — from michaelbhorn.substack.com by Michael B. Horn

This conversation went in a bunch of unexpected directions. And that’s what’s so fun about it. After all, podcasting is all about bringing audio back and turning learning into leisure. And the question Guy and his partner Mindy Thomas asked a while back was: Why not bring kids in on the fun? Guy shared how his studio, Tinkercast, is leveraging the medium to inspire and educate the next generation of problem solvers.

We discussed the power of audio to capture curiosities and foster imagination, how Tinkercast is doing that in and out of the classroom, and how it can help re-engage students in building needed skills at a critical time. Enjoy!



April 2024 Job Cuts Announced by US-Based Companies Fall; More Cuts Attributed to TX DEI Law, AI in April — from challengergray.com

Excerpt (emphasis DSC):

Education
Companies in the Education industry, which includes schools and universities, cut the second-most jobs last month with 8,092, bringing the total for the first four months of the year to 17,892. That is a 635% increase from the 2,435 cuts announced during the first four months of 2023.

“April is typically the time school districts are hiring and setting budgets for the next fiscal year. Certainly, there are budgetary constraints, as labor costs rise, but school systems also have a retention and recruitment issue,” said Challenger.


Lifetime college returns differ significantly by major, research finds — from highereddive.com by Lilah Burke
Engineering and computer science showed the best return out of 10 fields of study that were examined.

Dive Brief:

  • The lifetime rate of return for a college education differs significantly by major, but it also varies by a student’s gender and race or ethnicity, according to new peer-reviewed research published in the American Educational Research Journal.
  • A bachelor’s degree in general provides a roughly 9% rate of return for men, and nearly 10% for women, researchers concluded. The majors with the best returns were computer science and engineering.
  • Black, Hispanic and Asian college graduates had slightly higher rates of return than their White counterparts, the study found.
 

ChatGPT remembers who you are — from thebrainyacts.beehiiv.com |Brainyacts #191

OpenAI rolls out Memory feature for ChatGPT
OpenAI has introduced a cool update for ChatGPT (rolling out to paid and free users – but not in the EU or Korea), enabling the AI to remember user-specific details across sessions. This memory feature enhances personalization and efficiency, making your interactions with ChatGPT more relevant and engaging.


Key Features

  1. Automatic Memory Tracking
    • ChatGPT now automatically records information from your interactions, such as preferences, interests, and plans. This allows the AI to refine its responses over time, making each conversation increasingly tailored to you.
  2. Enhanced Personalization
    • The more you interact with ChatGPT, the better it understands your needs and adapts its responses accordingly. This personalization improves the relevance and efficiency of your interactions, whether you’re asking for help with daily tasks or discussing complex topics.
  3. Memory Management Options
    • You have full control over this feature. You can view what information is stored, toggle the memory on or off, and delete specific data or all memory entries, ensuring your privacy and preferences are respected. (A rough sketch of the general pattern behind this kind of feature follows below.)
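
OpenAI hasn’t said how Memory is implemented under the hood, but the general pattern is easy to illustrate. The following is a generic, hypothetical Python sketch (not OpenAI’s code) in which facts learned about a user are persisted between sessions and prepended to each new conversation so the model can personalize its responses.

```python
# Generic illustration of a session-memory pattern; NOT OpenAI's implementation.
# Remembered facts are stored locally and prepended to each new conversation.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical local store


def load_memory() -> list[str]:
    """Return previously remembered facts, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def remember(fact: str) -> None:
    """Persist a new fact about the user for future sessions."""
    facts = load_memory()
    if fact not in facts:
        facts.append(fact)
        MEMORY_FILE.write_text(json.dumps(facts, indent=2))


def build_messages(user_prompt: str) -> list[dict]:
    """Prepend remembered facts as a system message so the model can personalize."""
    memory_block = "\n".join(f"- {fact}" for fact in load_memory()) or "- none yet"
    system = "Known facts about this user:\n" + memory_block
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]


# Example: remember a preference, then build a personalized request
# that could be sent to any chat-style model.
remember("Prefers explanations with concrete classroom examples")
print(json.dumps(build_messages("Explain spaced repetition to me."), indent=2))
```

Real systems add summarization, retrieval, and user-facing controls on top of this, but the core idea is the same: carry forward a small store of user facts and inject it as context.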




From DSC:
The ability of AI-based applications to remember things about us will have major, positive ramifications for learning-related applications of AI.


 

Shares of two big online education stocks tank more than 10% as students use ChatGPT — from cnbc.com by Michelle Fox; via Robert Gibson on LinkedIn

The rapid rise of artificial intelligence appears to be taking a toll on the shares of online education companies Chegg and Coursera.

Both stocks sank by more than 10% on Tuesday after issuing disappointing guidance in part because of students using AI tools such as ChatGPT from OpenAI.



Synthetic Video & AI Professors — from drphilippahardman.substack.com by Dr. Philippa Hardman
Are we witnessing the emergence of a new, post-AI model of async online learning?

TLDR: by effectively tailoring the learning experience to the learner’s comprehension levels and preferred learning modes, AI can enhance the overall learning experience, leading to increased “stickiness” and higher rates of performance in assessments.

TLDR: AI enables us to scale responsive, personalised “always on” feedback and support in a way that might help to solve one of the most wicked problems of online async learning – isolation and, as a result, disengagement.

In the last year we have also seen the rise of an unprecedented number of “always on” AI tutors, built to provide coaching and feedback when and how learners need it.

Perhaps the most well-known example is Khan Academy’s Khanmigo and its GPT sidekick Tutor Me. We’re also seeing similar tools emerge in K12 and Higher Ed where AI is being used to extend the support and feedback provided for students beyond the physical classroom.


Our Guidance on School AI Guidance document has been updated — from stefanbauschard.substack.com by Stefan Bauschard

We’ve updated the free 72-page document we wrote to help schools design their own AI guidance policies.

There are a few key updates.

  1. Inclusion of Oklahoma and significant updates from North Carolina and Washington.
  2. More specifics on implementation — thanks NC and WA!
  3. A bit more on instructional redesign. Thanks to NC for getting this party started!

Creating a Culture Around AI: Thoughts and Decision-Making — from er.educause.edu by Courtney Plotts and Lorna Gonzalez

Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tools selection and use.

 

 

 

Description:

I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.


From DSC:
This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.


 

Smart(er) Glasses: Introducing New Ray-Ban | Meta Styles + Expanding Access to Meta AI with Vision — from meta.com

  • Share Your View on a Video Call
  • Meta AI Makes Your Smart Glasses Smarter
  • All In On AI-Powered Hardware

New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates — from about.fb.com

Takeaways

  • We’re expanding the Ray-Ban Meta smart glasses collection with new styles.
  • We’re adding video calling with WhatsApp and Messenger to share your view on a video call.
  • We’re rolling out Meta AI with Vision, so you can ask your glasses about what you’re seeing and get helpful information — completely hands-free.

 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include Edinburgh University, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

Forbes 2024 AI 50 List: Top Artificial Intelligence Startups  — from forbes.com by Kenrick Cai

The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged for their business promise and technical usage of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.


Adobe Previews Breakthrough AI Innovations to Advance Professional Video Workflows Within Adobe Premiere Pro — from news.adobe.com

  • New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene
  • Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate
  • Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day
  • AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive

Also relevant see:




 

AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com

AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.

The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

From DSC:
This is why the vision that I’ve been tracking and working on has always held that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:

Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.

 

AI for the physical world — from superhuman.ai by Zain Kahn

Excerpt: (emphasis DSC)

A new company called Archetype is trying to tackle that problem: It wants to make AI useful for more than just interacting with and understanding the digital realm. The startup just unveiled Newton — “the first foundation model that understands the physical world.”

What’s it for?
A warehouse or factory might have 100 different sensors that have to be analyzed separately to figure out whether the entire system is working as intended. Newton can understand and interpret all of the sensors at the same time, giving a better overview of how everything’s working together. Another benefit: You can ask Newton questions in plain English without needing much technical expertise.

How does it work?

  • Newton collects data from radar, motion sensors, and chemical and environmental trackers
  • It uses an LLM to combine each of those data streams into a cohesive package
  • It translates that data into text, visualizations, or code so it’s easy to understand (a rough sketch of this general pattern appears below)
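
Archetype hasn’t published a public API, so the following is a purely hypothetical Python sketch of the general pattern described above: merge several sensor streams into one structured payload, hand it to an LLM together with a plain-English question, and get back a readable answer. The client and model used here are placeholders for illustration, not Archetype’s Newton.

```python
# Hypothetical sketch of the sensor-fusion pattern described above; not Archetype's API.
# Several sensor streams are merged into one JSON payload and summarized by an LLM.
import json

from openai import OpenAI  # any chat-capable LLM client would do; used here for illustration

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

sensor_snapshot = {
    "radar": {"object_count": 3, "nearest_distance_m": 4.2},
    "motion": {"vibration_hz": 42.0, "status": "elevated"},
    "environment": {"co2_ppm": 900, "temperature_c": 27.5},
}

question = "Is the packaging line operating normally? If not, what should we check first?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; Newton itself is proprietary
    messages=[
        {
            "role": "system",
            "content": "You interpret factory sensor data for non-technical staff.",
        },
        {
            "role": "user",
            "content": f"Sensor snapshot:\n{json.dumps(sensor_snapshot, indent=2)}\n\nQuestion: {question}",
        },
    ],
)

print(response.choices[0].message.content)
```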

Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data — from venturebeat.com by Michael Nuñez; via Tom Barrett’s Promptcraft e-newsletter

Apple has entered into a significant agreement with stock photography provider Shutterstock to license millions of images for training its artificial intelligence models. According to a Reuters report, the deal is estimated to be worth between $25 million and $50 million, placing Apple among several tech giants racing to secure vast troves of data to power their AI systems.


 

 

The University Student’s Guide To Ethical AI Use  — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities, how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to ethically use AI and how to maximize its capabilities for students
  • Current punishments and penalties for cheating with AI
  • A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
 