AI’s New Conversation Skills Eyed for Education — from insidehighered.com by Lauren Coffey
The latest ChatGPT’s more human-like verbal communication has professors pondering personalized learning, on-demand tutoring and more classroom applications.

ChatGPT’s newest version, GPT-4o (the “o” standing for “omni,” meaning “all”), has a more realistic voice and quicker verbal response time, both aiming to sound more human. The version, which should be available to free ChatGPT users in coming weeks—a change also hailed by educators—allows people to interrupt it while it speaks, simulates more emotions with its voice and translates languages in real time. It also can understand instructions in text and images and has improved video capabilities.

Ajjan said she immediately thought the new vocal and video capabilities could allow GPT to serve as a personalized tutor. Personalized learning has been a focus for educators grappling with the looming enrollment cliff and for those pushing for student success.

There’s also the potential for role playing, according to Ajjan. She pointed to mock interviews students could do to prepare for job interviews, or, for example, using GPT to play the role of a buyer to help prepare students in an economics course.

 

 

io.google/2024



How generative AI expands curiosity and understanding with LearnLM — from blog.google
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.

Generative AI is fundamentally changing how we’re approaching learning and education, enabling powerful new ways to support educators and learners. It’s taking curiosity and understanding to the next level — and we’re just at the beginning of how it can help us reimagine learning.

Today we’re introducing LearnLM: our new family of models fine-tuned for learning, based on Gemini.

On YouTube, a conversational AI tool makes it possible to figuratively “raise your hand” while watching academic videos to ask clarifying questions, get helpful explanations or take a quiz on what you’ve been learning. This even works with longer educational videos like lectures or seminars thanks to the Gemini model’s long-context capabilities. These features are already rolling out to select Android users in the U.S.

Learn About is a new Labs experience that explores how information can turn into understanding by bringing together high-quality content, learning science and chat experiences. Ask a question and it helps guide you through any topic at your own pace — through pictures, videos, webpages and activities — and you can upload files or notes and ask clarifying questions along the way.


Google I/O 2024: An I/O for a new generation — from blog.google

The Gemini era
A year ago on the I/O stage we first shared our plans for Gemini: a frontier model built to be natively multimodal from the beginning, that could reason across text, images, video, code, and more. It marks a big step in turning any input into any output — an “I/O” for a new generation.



Daily Digest: Google I/O 2024 – AI search is here. — from bensbites.beehiiv.com
PLUS: It’s got Agents, Video and more. And, Ilya leaves OpenAI

  • Google is integrating AI across its entire ecosystem: Search, Workspace, Android, etc. In true Google fashion, many features are “coming later this year.” If they ship and perform like the demos, Google will gain a serious upper hand over OpenAI/Microsoft.
  • All of the AI features across Google products will be powered by Gemini 1.5 Pro. It’s Google’s best model and one of the top models available. A new Gemini 1.5 Flash model has also been launched, which is faster and much cheaper (a rough calling sketch follows this list).
  • Google has ambitious projects in the pipeline. Those include a real-time voice assistant called Astra, a long-form video generator called Veo, plans for end-to-end agents, virtual AI teammates and more.
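
For readers who want to experiment with the two models mentioned above, here is a minimal sketch (mine, not the Daily Digest's) of calling Gemini 1.5 Pro and Gemini 1.5 Flash with Google's google-generativeai Python SDK as it existed around I/O 2024. It assumes an API key in the GOOGLE_API_KEY environment variable, and the model aliases may change over time.

```python
# Hedged sketch: calling Gemini 1.5 Pro vs. Flash with the google-generativeai SDK.
# Assumes GOOGLE_API_KEY is set; model aliases are those published around I/O 2024.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

pro = genai.GenerativeModel("gemini-1.5-pro")      # higher quality, long context
flash = genai.GenerativeModel("gemini-1.5-flash")  # faster and much cheaper

# Flash for quick, high-volume tasks...
summary = flash.generate_content("Summarize the Google I/O 2024 AI announcements in two sentences.")
print(summary.text)

# ...Pro when deeper reasoning over longer context is worth the cost.
analysis = pro.generate_content(
    "Compare how Gemini in Workspace and Gemini Live might be used by a teacher."
)
print(analysis.text)
```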

 



New ways to engage with Gemini for Workspace — from workspace.google.com

Today at Google I/O we’re announcing new, powerful ways to get more done in your personal and professional life with Gemini for Google Workspace. Gemini in the side panel of your favorite Workspace apps is rolling out more broadly and will use the 1.5 Pro model for answering a wider array of questions and providing more insightful responses. We’re also bringing more Gemini capabilities to your Gmail app on mobile, helping you accomplish more on the go. Lastly, we’re showcasing how Gemini will become the connective tissue across multiple applications with AI-powered workflows. And all of this comes fresh on the heels of the innovations and enhancements we announced last month at Google Cloud Next.


Google’s Gemini updates: How Project Astra is powering some of I/O’s big reveals — from techcrunch.com by Kyle Wiggers

Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it — and the people conversing with it.

At the Google I/O 2024 developer conference on Tuesday, the company previewed a new experience in Gemini called Gemini Live, which lets users have “in-depth” voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot’s speaking to ask clarifying questions, and it’ll adapt to their speech patterns in real time. And Gemini can see and respond to users’ surroundings, either via photos or video captured by their smartphones’ cameras.


Generative AI in Search: Let Google do the searching for you — from blog.google
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching.


 
 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
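
As a concrete, hedged illustration (not part of OpenAI's post): a minimal call to GPT-4o with mixed text and image input through the official openai Python SDK (v1.x). It assumes OPENAI_API_KEY is set and uses a placeholder image URL; the real-time audio capabilities described above are not shown here.

```python
# Hedged sketch: text + image input to GPT-4o via the openai Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What is on this whiteboard, and how would you explain it to a student?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/whiteboard.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```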





From DSC:
I like the assistive tech angle here:





 

 

ChatGPT remembers who you are — from thebrainyacts.beehiiv.com | Brainyacts #191

OpenAI rolls out Memory feature for ChatGPT
OpenAI has introduced a cool update for ChatGPT (rolling out to paid and free users – but not in the EU or Korea), enabling the AI to remember user-specific details across sessions. This memory feature enhances personalization and efficiency, making your interactions with ChatGPT more relevant and engaging.


Key Features

  1. Automatic Memory Tracking
    • ChatGPT now automatically records information from your interactions such as preferences, interests, and plans. This allows the AI to refine its responses over time, making each conversation increasingly tailored to you.
  2. Enhanced Personalization
    • The more you interact with ChatGPT, the better it understands your needs and adapts its responses accordingly. This personalization improves the relevance and efficiency of your interactions, whether you’re asking for daily tasks or discussing complex topics.
  3. Memory Management Options
    • You have full control over this feature. You can view what information is stored, toggle the memory on or off, and delete specific data or all memory entries, ensuring your privacy and preferences are respected.
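
To make the idea concrete, here is a toy sketch (emphatically not OpenAI's actual implementation) of how per-user memory can be layered onto a chat model: remembered facts are stored per user, injected into the system prompt, and remain viewable and deletable by the user, mirroring the controls listed above.

```python
# Toy illustration of the memory pattern: store per-user facts, inject them into
# the model's context, and let the user view, toggle, or delete them.
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    facts: list[str] = field(default_factory=list)
    enabled: bool = True

    def remember(self, fact: str) -> None:
        if self.enabled and fact not in self.facts:
            self.facts.append(fact)

    def view(self) -> list[str]:
        return list(self.facts)      # user can see exactly what is stored

    def forget(self, fact: str) -> None:
        self.facts.remove(fact)      # delete a specific entry

    def forget_all(self) -> None:
        self.facts.clear()           # wipe all memory

    def as_system_prompt(self) -> str:
        if not (self.enabled and self.facts):
            return "You are a helpful tutor."
        return "You are a helpful tutor. Known about this user: " + "; ".join(self.facts)

memory = UserMemory()
memory.remember("Prefers worked examples over abstract definitions")
memory.remember("Is studying introductory statistics")
print(memory.as_system_prompt())
```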




From DSC:
The ability of AI-based applications to remember things about us will have major, positive ramifications for learning-related applications of AI.


 

Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include Edinburgh University, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com

AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.

The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

From DSC:
This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:

Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.

 

What Are AI Agents—And Who Profits From Them? — from every.to by Evan Armstrong
The newest wave of AI research is changing everything

I’ve spent months talking with founders, investors, and scientists, trying to understand what this technology is and who the players are. Today, I’m going to share my findings. I’ll cover:

  • What an AI agent is
  • The major players
  • The technical bets
  • The future

Agentic workflows are loops—they can run many times in a row without needing a human involved for each step in the task. A language model will make a plan based on your prompt, utilize tools like a web browser to execute on that plan, ask itself if that answer is right, and close the loop by getting back to you with that answer.
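
To make that loop concrete, here is a minimal sketch of the plan-act-reflect pattern Armstrong describes. The call_llm and browse functions are hypothetical placeholders, not any particular vendor's API.

```python
# Hedged sketch of an agentic loop: plan, act with a tool, self-check, and only
# return to the user when the answer passes (or the step budget runs out).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real model call")

def browse(query: str) -> str:
    raise NotImplementedError("swap in a real web-browsing tool")

def run_agent(task: str, max_steps: int = 5) -> str:
    # Plan: the model breaks the task into steps based on the user's prompt.
    plan = call_llm(f"Make a short step-by-step plan for: {task}")
    notes, draft = "", ""
    for _ in range(max_steps):
        # Act: use a tool (here, a browser) to execute the next part of the plan.
        query = call_llm(f"Plan:\n{plan}\nNotes so far:\n{notes}\nWhat should I look up next?")
        notes += "\n" + browse(query)
        draft = call_llm(f"Task: {task}\nNotes:{notes}\nWrite a candidate answer.")
        # Reflect: the model checks its own answer before closing the loop.
        verdict = call_llm(f"Task: {task}\nAnswer: {draft}\nIs this correct and complete? Reply YES or NO.")
        if verdict.strip().upper().startswith("YES"):
            return draft
    return draft  # best effort if the step budget is exhausted
```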

But agentic workflows are an architecture, not a product. It gets even more complicated when you incorporate agents into products that customers will buy.

Early reports of GPT-5 are that it is “materially better” and is being explicitly prepared for the use case of AI agents.

 

The $340 Billion Corporate Learning Industry Is Poised For Disruption — from joshbersin.com by Josh Bersin

What if, for example, the corporate learning system knew who you were and you could simply ask it a question and it would generate an answer, a series of resources, and a dynamic set of learning objects for you to consume? In some cases you’ll take the answer and run. In other cases you’ll pore through the content. And in other cases you’ll browse through the course and take the time to learn what you need.

And suppose all this happened in a totally personalized way. So you didn’t see a “standard course” but a special course based on your level of existing knowledge?

This is what AI is going to bring us. And yes, it’s already happening today.
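
To illustrate the pattern Bersin describes, here is a hedged sketch: the system knows who the learner is, retrieves matching content, and returns an answer plus resources and a suggested path. The profile fields and the search_library and generate_answer helpers are hypothetical placeholders, not any vendor's actual API.

```python
# Hedged sketch of a learner-aware Q&A flow: profile-filtered retrieval plus a
# generated answer. All helpers here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    role: str
    skill_level: str                 # e.g. "beginner", "intermediate", "advanced"
    completed_courses: list[str]

def search_library(query: str, profile: LearnerProfile) -> list[str]:
    raise NotImplementedError("swap in the real content search")

def generate_answer(query: str, resources: list[str], profile: LearnerProfile) -> str:
    raise NotImplementedError("swap in the real model call")

def ask_learning_system(query: str, profile: LearnerProfile) -> dict:
    resources = search_library(query, profile)   # filtered by role and level
    answer = generate_answer(query, resources, profile)
    return {
        "answer": answer,                                          # take the answer and run...
        "resources": resources,                                    # ...or pore through the content
        "suggested_path": [r for r in resources
                           if r not in profile.completed_courses], # dynamic set of learning objects
    }
```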

 

How to Make the Dream of Education Equity (or Most of It) a Reality — from nataliewexler.substack.com by Natalie Wexler
Studies on the effects of tutoring–by humans or computers–point to ways to improve regular classroom instruction.

One problem, of course, is that it’s prohibitively expensive to hire a tutor for every average or struggling student, or even one for every two or three of them. This was the two-sigma “problem” that Bloom alluded to in the title of his essay: how can the massive benefits of tutoring possibly be scaled up? Both Khan and Zuckerberg have argued that the answer is to have computers, maybe powered by artificial intelligence, serve as tutors instead of humans.

From DSC:
I’m hoping that AI-backed learning platforms WILL help many people of all ages and backgrounds. But I realize — and appreciate what Natalie is saying here as well — that human beings are needed in the learning process (especially at younger ages). 

But without the human element, that’s unlikely to be enough. Students are more likely to work hard to please a teacher than to please a computer.

Natalie goes on to talk about training all teachers in cognitive science — a solid idea for sure. That’s what I was trying to get at with this graphic:
We need to take more of the research from learning science and apply it in our learning spaces.
But I’m not as hopeful about all teachers getting trained in cognitive science, as it should have happened (in the Schools of Education and in the K-12 learning ecosystem at large) by now. Perhaps it will happen, given enough time.

And with more homeschooling and blended programs of education occurring, that idea gets stretched even further. 

K-12 Hybrid Schooling Is in High Demand — from realcleareducation.com by Keri D. Ingraham (emphasis below from DSC); via GSV

Parents are looking for a different kind of education for their children. A 2024 poll of parents reveals that 72% are considering, 63% are searching for, and 44% have selected a new K-12 school option for their children over the past few years. So, what type of education are they seeking?

Additional polling data reveals that 49% of parents would prefer their child learn from home at least one day a week. While 10% want full-time homeschooling, the remaining 39% of parents desire their child to learn at home one to four days a week, with the remaining days attending school on-campus. Another parent poll released this month indicates that an astonishing 64% of parents indicated that if they were looking for a new school for their child, they would enroll him or her in a hybrid school.

 

Which AI should I use? Superpowers and the State of Play — by Ethan Mollick
And then there were three

For over a year, GPT-4 was the dominant AI model, clearly much smarter than any of the other LLM systems available. That situation has changed in the last month; there are now three GPT-4 class models, all powering their own chatbots: GPT-4 (accessible through ChatGPT Plus or Microsoft’s Copilot), Anthropic’s Claude 3 Opus, and Google’s Gemini Advanced.

Where we stand
We are in a brief period in the AI era where there are now multiple leading models, but none has yet definitively beaten the GPT-4 benchmark set over a year ago. While this may represent a plateau in AI abilities, I believe this is likely to change in the coming months as, at some point, models like GPT-5 and Gemini 2.0 will be released. In the meantime, you should be using a GPT-4 class model and using it often enough to learn what it does well. You can’t go wrong with any of them, pick a favorite and use it…

From DSC:
Here’s a powerful quote from Ethan:

In fact, in my new book I postulate that you haven’t really experienced AI until you have had three sleepless nights of existential anxiety, after which you can start to be productive again.


Using AI for Immersive Educational Experiences — from automatedteach.com by Graham Clay
Realistic video brings course content to life but requires AI literacy.

For us, I think the biggest promise of AI tools like Sora — that can create video with ease — is that they lower the cost of immersive educational experiences. This increases the availability of these experiences, expanding their reach to student populations who wouldn’t otherwise have them, whether due to time, distance, or expense.

Consider the profound impact on a history class, where students are transported to California during the gold rush through hyperrealistic video sequences. This vivifies the historical content and cultivates a deeper connection with the material.

In fact, OpenAI has already demonstrated the promise of this sort of use case, with a very simple prompt producing impressive results…


The Empathy Illusion: How AI Agents Could Manipulate Students — from marcwatkins.substack.com by Marc Watkins

Take this scenario. A student misses a class and, within twenty minutes, receives a series of texts and even a voicemail from a very concerned and empathic-sounding voice wanting to know what’s going on. Of course, the text is entirely generated, and the voice is synthetic as well, but the student likely doesn’t know this. To them, communication isn’t something as easy to miss or brush off as an email. It sounds like someone who cares is talking to them.

But let’s say that isn’t enough. By that evening, the student still hadn’t logged into their email or checked the LMS. The AI’s strategic reasoning is communicating with the predictive AI and analyzing the pattern of behavior against students who succeed or fail vs. students who are ill. The AI tracks the student’s movements on campus, monitors their social media usage, and deduces the student isn’t ill and is blowing off class.

The AI agent resumes communication with the student. But this time, the strategic AI adopts a different persona, not the kind and empathetic persona used for the initial contact, but a stern, matter-of-fact one. The student’s phone buzzes with alerts that talk about scholarships being lost, teachers being notified, etc. The AI anticipates the excuses the student will use and presents evidence tracking the student’s behavior to show they are not sick.


Not so much focused on learning ecosystems, but still worth mentioning:

The top 100 Gen AI Consumer Apps — from a16z.com / andreessen horowitz by Olivia Moore


 

 

From DSC:
This would be huge for all of our learning ecosystems, as the learning agents could remember where a particular student or employee stands on the learning curve for a particular topic.


Say What? Chat With RTX Brings Custom Chatbot to NVIDIA RTX AI PCs — from blogs.nvidia.com
Tech demo gives anyone with an RTX GPU the power of a personalized GPT chatbot.



 

Top 6 Use Cases of Generative AI in Education in 2024 — from research.aimultiple.com by Cem Dilmegani

Use cases included:

  1. Personalized Lessons
  2. Course Design
  3. Content Creation for Courses
  4. Data Privacy Protection for Analytical Models
  5. Restoring Old Learning Materials
  6. Tutoring

The Next Phase of AI in Education at the U.S. Department of Education — from medium.com by Office of Ed Tech

Why are we doing this work?
Over the past two years, the U.S. Department of Education has been committed to maintaining an ongoing conversation with educators, students, researchers, developers — and the educational community at large — related to the continuous progress of Artificial Intelligence (AI) development and its implications for teaching and learning.

Many educators are seeking resources clarifying what AI is and how it will impact their work and their students. Similarly, developers of educational technology (“edtech”) products seek guidance on what guardrails exist that can support their efforts. After the release of our May 2023 report, Artificial Intelligence and the Future of Teaching and Learning, we heard the desire for more.


2024 EDUCAUSE AI Landscape Study — from library.educause.edu by Jenay Robert

Moving from reaction to action, higher education stakeholders are currently exploring the opportunities afforded by AI for teaching, learning, and work while maintaining a sense of caution for the vast array of risks AI-powered technologies pose. To aid in these efforts, we present this inaugural EDUCAUSE AI Landscape Study, in which we summarize the higher education community’s current sentiments and experiences related to strategic planning and readiness, policies and procedures, workforce, and the future of AI in higher education.


AI Update for K-16 Administrators: More People Need to Step-Up and Take the AI Bull By the Horns — from stefanbauschard.substack.com by Stefan Bauschard
AI capabilities are way beyond what most schools are aware of, and they will transform education and society over the next few years.

Educational administrators should not worry about every AI development but should instead focus on the big picture, as those big-picture changes will change the entire world and the educational system.

AI and related technologies (robotics, synthetic biology, and brain-computer interfaces) will continue to impact society and the entire educational system over the next 10 years. This impact on the system will be greater than anything that has happened over the last 100 years, including COVID-19, as COVID-19 eventually ended and the disruptive force of these technologies will only continue to develop.

AI is the bull in the China Shop, redefining the world and the educational system. Students writing a paper with AI is barely a poke in the educational world relative to what is starting to happen (active AI teachers and tutors; AI assessment; AI glasses; immersive learning environments; young students able to start their own business with AI tools; AIs replacing and changing jobs; deep voice and video fakes; intelligence leveling; individualized instruction; interactive and highly intelligent computers; computers that can act autonomously; and more).


 

 

Healthcare High Schools — from the-job.beehiiv.com by Paul Fain
Bloomberg and hospitals back dual-enrollment path from K-12 to high-demand jobs.

More career exploration in high school is needed to help Americans make better-informed choices about their education and job options, experts agree. And serious, employer-backed efforts to tighten connections between school and work are likely to emerge first in healthcare, given the industry’s severe staffing woes.

A new $250M investment by Bloomberg Philanthropies could be an important step in this direction. The money will seed the creation of healthcare-focused high schools in 10 U.S. locations, with a plan to enroll 6K students who will graduate directly from the early-college high schools into high-demand healthcare jobs that pay family-sustaining wages.


Microschools Take Center Stage with New Opportunities for Learning for 2024 — from the74million.org by Andrew Campanella
Campanella: More than 27,000 schools and organizations are celebrating National School Choice Week. Yours can, too

Last year, the landscape of K-12 education transformed as a record-breaking 20 states expanded school choice options. However, that is not the only school choice story to come out of 2023. As the nation steps into 2024, a fresh emphasis on innovation has emerged, along with new options for families. This is particularly true within the realm of microschooling.

Microschooling is an education model that is small by design — typically with 15 or fewer students of varying ages per class. It fosters a personalized and community-centric approach to learning that is especially effective in addressing the unique educational needs of diverse student populations. Programs like Education Savings Accounts are helping to fuel these microschools.


My Students Can’t Meet Academic Standards Because the School Model No Longer Fits Them — from edsurge.com by Sachin Pandya

Large classes create more distractions for students who struggle to focus, and they inevitably get less attention and support as there are more students for teachers to work with. High numbers of students make it more difficult to plan for individual needs and force teachers to teach to an imaginary middle. A rigid schedule makes it easy to schedule adults and services, but it is a challenge for kids who need time to get engaged and prefer to keep working at a challenge once they are locked in.

Now that I know what can engage and motivate these students, I can imagine creating more opportunities that allow them to harness their talents and grow their skills and knowledge. But we’re already a third of the way through the school year, and my curriculum requires me to teach certain topics for certain lengths of time, which doesn’t leave room for many of the types of experiences these kids need. Soon, June will come and I’ll pass them along to the next teacher, who won’t know what I know and will need another four months to learn it, wasting valuable time in these students’ educations.

From DSC:
We need teachers and professors to be able to contribute to learners’ records. Each student can review and decide whether they want to allow access to other teachers — or even to employers. Educators could record what they’ve found to work with a particular student, what passions/interests that student has, or what to avoid (if possible). For example, has this student undergone some trauma? If so, trauma-informed teaching should be employed.

IEPs could be a part of learners’ records/profiles. The teams working on implementing these IEPs could share important, searchable information.


The State of Washington Embraces AI for Public Schools — from synthedia.substack.com by Bret Kinsella; via Tom Barrett
Educational institutions may be warming up to generative AI

Washington state issued new guidelines for K-12 public schools last week based on the principle of “embracing a human-centered approach to AI,” which also embraces the use of AI in the education process. The state’s Superintendent of Public Instruction, Chris Reykdal, commented in a letter accompanying the new guidelines:

 

Augment teaching with AI – this teacher has it sussed… — from donaldclarkplanb.blogspot.com by Donald Clark

Excerpt (emphasis DSC):

You’re a teacher who wants to integrate AI into your teaching. What do you do? I often get asked, “How should I start with AI in my school or university?” This, I think, is one answer.

Continuity with teaching
One school has got this exactly right in my opinion. Meredith Joy Morris has implemented ChatGPT into the teaching process. The teacher does their thing and the chatbot picks up where the teacher stops, augmenting and scaling the teaching and learning process, passing the baton to the learners who carry on. This gives the learner a more personalised experience, encouraging independent learning by using the undoubted engagement that 1:1 dialogue provides.

There’s no way any teacher can provide this carry on support with even a handful of students, never mind a class of 30 or a course with 100. Teaching here is ‘extended’ and ‘scaled’ by AI. The feedback from the students was extremely positive.


Reflections on Teaching in the AI Age — by Jeffrey Watson

The transition which AI forces me to make is no longer to evaluate writings, but to evaluate writers. I am accustomed to grading essays impersonally with an objective rubric, treating the text as distinct from the author and commenting only on the features of the text. I need to transition to evaluating students a bit more holistically, as philosophers – to follow along with them in the early stages of the writing process, to ask them to present their ideas orally in conversation or in front of their peers, to push them to develop the intellectual virtues that they will need if they are not going to be mastered by the algorithms seeking to manipulate them. That’s the sort of development I’ve meant to encourage all along, not paragraph construction and citation formatting. If my grading practices incentivize outsourcing to a machine intelligence, I need to change my grading practices.


4 AI Imperatives for Higher Education in 2024 — from campustechnology.com by Rhea Kelly

[Bryan Alexander] There’s a crying need for faculty and staff professional development about generative AI. The topic is complicated and fast moving. Already the people I know who are seriously offering such support are massively overscheduled. Digital materials are popular. Books are lagging but will gradually surface. I hope we see more academics lead more professional development offerings.

For an academic institution to take emerging AI seriously it might have to set up a new body. Present organizational nodes are not necessarily a good fit.


A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can’t Be Done. — from edsurge.com by Jeffrey R. Young
Is there a better metaphor than ‘tutor’ for what generative AI can do to help students and teachers?

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM’s Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. “I remember telling IBM top brass that this is going to be a 25-year journey,” he recently told EdSurge.


Teachers stan AI in education–but need more support — from eschoolnews.com by Laura Ascione

What are the advantages of AI in education?
Canva’s study found 78 percent of teachers are interested in using AI education tools, but their experience with the technology remains limited, with 93 percent indicating they know “a little” or “nothing” about it – though this lack of experience hasn’t stopped teachers from quickly discovering and considering its benefits:

  • 60 percent of teachers agree it has given them ideas to boost student productivity
  • 59 percent of teachers agree it has cultivated more ways for their students to be creative
  • 56 percent of teachers agree it has made their lives easier

When looking at the ways teachers are already using generative artificial intelligence, the most common uses were:

  • Creating teaching materials (43 percent)
  • Collaborative creativity/co-creation (39 percent)
  • Translating text (36 percent)
  • Brainstorming and generating ideas (35 percent)

The next grand challenge for AI — from ted.com by Jim Fan




New education features to help teachers save time and support students — by Shantanu Sinha

Giving educators time back to invest in themselves and their students
Boost productivity and creativity with Duet AI: Educators can get fresh ideas and save time using generative AI across Workspace apps. With Duet AI, they can get help drafting lesson plans in Docs, creating images in Slides, building project plans in Sheets and more — all with control over their data.

 