Ever since a revolutionary new version of ChatGPT became available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.
…
Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.
Here are the five most popular stories that Education Week published in 2024 about AI in schools.
Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:
1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change
Checking the Pulse: The Impact of AI on Everyday Lives
So, what exactly did our users have to say about how AI transformed their lives this year?
Top 2024 Developments in AI
Video Generation…
AI Employees…
Open Source Advancements…
Getting ready for 2025: your AI team members (Gift lesson 3/3) — from flexos.com by Daan van Rossum
And that’s why today, I’ll tell you exactly which AI tools I’ve recommended for the top 5 use cases to almost 200 business leaders who took the Lead with AI course.
1. Email Management: Simplifying Communication with AI
Microsoft Copilot for Outlook. …
Gemini AI for Gmail. …
Grammarly. …
2. Meeting Management: Maximize Your Time
Otter.ai. …
Copilot for Microsoft Teams. …
Other AI Meeting Assistants: Zoom AI Companion, Granola, and Fathom
3. Research: Streamlining Information Gathering
ChatGPT. …
Perplexity. …
Consensus. …
…plus several more items and tools that were mentioned by Daan.
Introducing the 2025 Wonder Media Calendar for tweens, teens, and their families/households. Designed by Sue Ellen Christian and her students in her Global Media Literacy class (in the fall 2024 semester at Western Michigan University), the calendar’s purpose is to help people create a new year filled with skills and smart decisions about their media use. This calendar is part of the ongoing Wonder Media Library.com project that includes videos, lesson plans, games, songs and more. The website is funded by a generous grant from the Institute of Museum and Library Services, in partnership with Western Michigan University and the Library of Michigan.
Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.
Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?
The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.
Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.
What Is Agentic AI? — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
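As a rough illustration of the "iterative planning" loop described above, here is a minimal, hypothetical sketch in Python. Every name in it is made up for illustration; a production system would use an LLM for the planning step and real tools (search, databases, APIs) for the actions.

```python
# Toy sketch of the agentic pattern: plan -> act -> observe, repeated until a goal
# check passes. All names are hypothetical stand-ins, not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    observations: list[str] = field(default_factory=list)
    done: bool = False

def plan(state: AgentState) -> str:
    """Stand-in for an LLM reasoning step: choose the next action from the goal and history."""
    return "gather_data" if not state.observations else "summarize"

def act(action: str, state: AgentState) -> str:
    """Stand-in for tool execution (e.g., querying a supply-chain database)."""
    if action == "gather_data":
        return "collected current inventory and demand figures"
    state.done = True
    return f"produced a recommendation for: {state.goal}"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    state = AgentState(goal=goal)
    for _ in range(max_steps):  # the iterative planning loop
        observation = act(plan(state), state)
        state.observations.append(observation)
        if state.done:
            break
    return state.observations

print(run_agent("optimize inventory levels"))
```

The point of the sketch is the loop itself: the agent keeps choosing and executing actions, folding each observation back into its state, until it decides the goal is met or a step limit is reached.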
…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then provide more detail about those skills, how to apply them, and WHERE to get them.
I assert that in the future, people will be able to see this information on a 24x7x365 basis.
Which jobs are in demand?
What skills do I need to do those jobs?
WHERE do I get/develop those skills?
And that last part (about the WHERE do I develop those skills) will pull from many different institutions, people, companies, etc.
BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!
In September, I partnered with Synthesia to conduct a comprehensive survey exploring the evolving landscape of instructional design.
Our timing was deliberate: as we witness the rapid advancement of AI and increasing pressure on learning teams to drive mass re-skilling and deliver more with less, we wanted to understand how the role of instructional designers is changing.
…
Our survey focused on five key areas that we believed would help surface the most important data about the transformation of our field:
Roles & Responsibilities: who’s designing learning experiences in 2024?
Success Metrics: how do you and the organisations you work for measure the value of instructional design?
Workload & Workflow: how much time do we spend on different aspects of our job, and why?
Challenges & Barriers: what sorts of obstacles prevent us from producing optimal work?
Tools & Technology: what tools do we use, and is the tooling landscape changing?
I wish I could write that the last two years have made me more confident, more self-assured that AI is here to augment workers rather than replace them.
But I can’t.
I wish I could write that I know where schools and colleges will end up. I wish I could say that AI Agents will help us get where we need to be.
But I can’t.
At this point, today, I’m at a loss. I’m not sure where the rise of AI agents will take us, in terms of how we work and learn. I’m in the question-asking part of my journey. I have few answers.
So, let’s talk about where (I think) AI Agents will take education. And who knows? Maybe as I write I’ll come up with something more concrete.
It’s worth a shot, right?
From DSC: I completely agree with Jason’s following assertion:
A good portion of AI advancement will come down to employee replacement. And AI Agents push companies towards that.
THAT’s where/what the ROI will be for corporations. They will recoup their investments in the headcount area, and likely in other areas as well (product design, marketing campaigns, engineering-related items, and more). But how long it will take to get there is a big question mark.
One last quote here…it’s too good not to include:
Behind these questions lies a more abstract, more philosophical one: what is the relationship between thinking and doing in a world of AI Agents and other kinds of automation?
By examining models across three AI families—Claude, ChatGPT, and Gemini—I’ve started to identify each model’s strengths, limitations, and typical pitfalls.
Spoiler: my findings underscore that until we have specialised, fine-tuned AI copilots for instructional design, we should be cautious about relying on general-purpose models and ensure expert oversight in all ID tasks.
From DSC — I’m going to (have Nick) say this again:
I simply asked my students to use AI to brainstorm their own learning objectives. No restrictions. No predetermined pathways. Just pure exploration. The results? Astonishing.
Students began mapping out research directions I’d never considered. They created dialogue spaces with AI that looked more like intellectual partnerships than simple query-response patterns.
Google Workspace for Education admins can now turn on the Gemini app with added data protection as an additional service for their teen users (ages 13+ or the applicable age in your country) in the following languages and countries. With added data protection, chats are not reviewed by human reviewers or otherwise used to improve AI models. The Gemini app will be a core service in the coming weeks for Education Standard and Plus users, including teens…
Recently, I spoke with several teachers regarding their primary questions and reflections on using AI in teaching and learning. Their thought-provoking responses challenge us to consider not only what AI can do but what it means for meaningful and equitable learning environments. Keeping in mind these reflections, we can better understand how we move forward toward meaningful AI integration in education.
We’re introducing FrontierMath, a benchmark of hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems. These problems span major branches of modern mathematics—from computational number theory to abstract algebraic geometry—and typically require hours or days for expert mathematicians to solve.
The demand for artificial intelligence courses in UK universities has surged dramatically over the past five years, with enrollments increasing by 453%, according to a recent study by Currys, a UK tech retailer.
The study, which analyzed UK university admissions data and surveyed current students and recent graduates, reveals how the growing influence of AI is shaping students’ educational choices and career paths.
This growth reflects the broader trend of AI integration across industries, creating new opportunities while transforming traditional roles. With AI’s influence on career prospects rising, students and graduates are increasingly drawn to AI-related courses to stay competitive in a rapidly changing job market.
Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.
It’s the beginning of the end for search as we know it.
Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.
The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.
Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com
Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
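For developers curious what this looks like in practice, the sketch below shows roughly how the beta is invoked through Anthropic's Python SDK, based on the public documentation at launch. The model name, tool type string, and beta flag are the ones Anthropic published in October 2024 and may have changed since, so treat this as a starting point rather than a definitive recipe; the loop that actually takes screenshots and executes clicks runs in your own environment and is omitted here.

```python
# Rough sketch of calling Claude's "computer use" beta via the Anthropic Python SDK.
# Identifiers below (model name, tool type, beta flag) follow Anthropic's October 2024
# announcement and may have changed; verify against the current docs before relying on this.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[
        {
            "type": "computer_20241022",   # virtual screen/keyboard/mouse tool
            "name": "computer",
            "display_width_px": 1024,
            "display_height_px": 768,
        }
    ],
    messages=[{"role": "user", "content": "Open the report and summarize the first page."}],
)

# Claude replies with tool_use blocks (e.g., "take a screenshot", "click at x,y").
# Your own code must perform those actions and send the results back in a follow-up
# message to continue the loop.
for block in response.content:
    print(block.type)
```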
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com
Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
People started discussing what they could do with NotebookLM after Google launched the audio overview, where you can listen to two hosts talking in-depth about the documents you upload. Here’s what it can do:
Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose (see the conceptual sketch just below this list).
…plus several other items
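Source grounding is essentially the idea behind retrieval-augmented prompting: constrain the model to the documents the user supplies. NotebookLM's internals aren't public, so the snippet below is only a conceptual sketch in plain Python of how a grounded prompt can be assembled; every name in it is hypothetical.

```python
# Conceptual sketch of "source grounding" (not NotebookLM's actual implementation):
# the prompt restricts the model to user-supplied documents and asks it to decline
# when the sources don't cover the question, which is what curbs hallucination.
def build_grounded_prompt(question: str, sources: dict[str, str]) -> str:
    source_block = "\n\n".join(f"[{name}]\n{text}" for name, text in sources.items())
    return (
        "Answer the question using ONLY the sources below. Cite the source name for "
        "each claim. If the sources do not contain the answer, say so instead of guessing.\n\n"
        f"SOURCES:\n{source_block}\n\nQUESTION: {question}"
    )

sources = {
    "syllabus.pdf": "Week 3 covers media literacy frameworks and source evaluation.",
    "lecture-notes.md": "Key terms: lateral reading, provenance, algorithmic curation.",
}
print(build_grounded_prompt("What is covered in week 3?", sources))
```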
The posting also lists several ideas to try with NotebookLM such as:
Idea 2: Study Companion
Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
Get a breakdown of the course materials to understand them better.
“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”
With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing, from the chatbot of two years ago to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.
1. Upload a variety of sources for NotebookLM to use.
You can use …
websites
PDF files
links to websites
any text you’ve copied
Google Docs and Slides
even Markdown
You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).
2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything.
I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.
The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.
4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter
As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.
As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:
Incorporate personal experiences and local content into assignments
Ask students for multi-modal deliverables
Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.
Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions…
From DSC: Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).
This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.
…
For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.
The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.
It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.
By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.
Advanced Voice Mode on ChatGPT features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.
Advanced Voice Mode on ChatGPT is currently in a limited alpha. Please note that it may make mistakes, and access and rate limits are subject to change.
From DSC: Think about the impacts/ramifications of global, virtual, real-time language translations!!! This type of technology will create very powerful, new affordances in our learning ecosystems — as well as in business communications, with the various governments across the globe, and more!
AI Resources and Teaching | Kent State University
Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC: This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
AWS, Educause partner on generative AI readiness tool — from edscoop.com by Skylar Rispens
Amazon Web Services and the nonprofit Educause announced a new tool designed to help higher education institutions gauge their readiness to adopt generative artificial intelligence.
Amazon Web Services and the nonprofit Educause on Monday announced they’ve teamed up to develop a tool that assesses how ready higher education institutions are to adopt generative artificial intelligence.
Through a series of curated questions about institutional strategy, governance, capacity and expertise, AWS and Educause claim their assessment can point to ways that operations can be improved before generative AI is adopted to support students and staff.
“Generative AI will transform how educators engage students inside and outside the classroom, with personalized education and accessible experiences that provide increased student support and drive better learning outcomes,” Kim Majerus, vice president of global education and U.S. state and local government at AWS, said in a press release. “This assessment is a practical tool to help colleges and universities prepare their institutions to maximize this technology and support students throughout their higher ed journey.”
Speaking of AI and our learning ecosystems, also see:
At a moment when the value of higher education has come under increasing scrutiny, institutions around the world can be exactly what learners and employers both need. To meet the needs of a rapidly changing job market and equip learners with the technical and ethical direction needed to thrive, institutions should familiarize students with the use of AI and nurture the innately human skills needed to apply it ethically. Failing to do so can create enormous risk for higher education, business and society.
What is AI literacy?
To effectively utilize generative AI, learners will need to grasp the appropriate use cases for these tools, understand when their use presents significant downside risk, and learn to recognize abuse to separate fact from fiction. AI literacy is a deeply human capacity. The critical thinking and communication skills required are muscles that need repeated training to be developed and maintained.