State of Higher Ed LMS Market for US and Canada: Year-End 2024 Edition — from onedtech.philhillaa.com by Phil Hill
Less about market share and more about diverging strategy
DeepSeek: How China’s AI Breakthrough Could Revolutionize Educational Technology — from nickpotkalitsky.substack.com by Nick Potkalitsky
Can DeepSeek’s 90% efficiency boost make AI accessible to every school?
The most revolutionary aspect of DeepSeek for education isn’t just its cost—it’s the combination of open-source accessibility and local deployment capabilities. As Azeem Azhar notes, “R-1 is open-source. Anyone can download and run it on their own hardware. I have R1-8b (the second smallest model) running on my Mac Mini at home.”
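For readers who want to try local deployment themselves, here is a minimal sketch, assuming Ollama is installed and running on its default local port and that a DeepSeek-R1 distilled model has already been pulled (the model tag below is an assumption about a typical setup, not something specified in the article):

```python
# Minimal sketch: query a locally running DeepSeek-R1 distilled model via Ollama's REST API.
# Assumes Ollama is installed, listening on its default port (11434), and that a model has
# been pulled beforehand, e.g. with `ollama pull deepseek-r1:8b` (the tag is an assumption).
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def ask_local_model(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send one prompt to the local model and return the full response text."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    resp = requests.post(OLLAMA_URL, json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    question = "Explain photosynthesis at a 9th-grade reading level in three sentences."
    print(ask_local_model(question))
```

Because everything runs on local hardware, no student data has to leave the machine, which is part of why local deployment is attractive for schools.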
…
Real-time Learning Enhancement
- AI tutoring networks that collaborate to optimize individual learning paths
- Immediate, multi-perspective feedback on student work
- Continuous assessment and curriculum adaptation
The question isn’t whether this technology will transform education—it’s how quickly institutions can adapt to a world where advanced AI capabilities are finally within reach of every classroom.
Over 100 AI Tools for Teachers — from educatorstechnology.com by Med Kharbach, PhD
I know through your feedback on my social media and blog posts that several of you have legitimate concerns about the impact of AI in education, especially those related to data privacy, academic dishonesty, AI dependence, loss of creativity and critical thinking, and plagiarism, to name a few. While these concerns are valid and deserve careful consideration, it’s also important to explore the potential benefits AI can bring when used thoughtfully.
Tools such as ChatGPT and Claude are like smart research assistants that are available 24/7 to support you with all kinds of tasks from drafting detailed lesson plans, creating differentiated materials, generating classroom activities, to summarizing and simplifying complex topics. Likewise, students can use them to enhance their learning by, for instance, brainstorming ideas for research projects, generating constructive feedback on assignments, practicing problem-solving in a guided way, and much more.
The point here is that AI is here to stay and expand, and we better learn how to use it thoughtfully and responsibly rather than avoid it out of fear or skepticism.
Beth’s posting links to:
- Digital Education Council Global AI Faculty Survey 2025 — from digitaleducationcouncil.com
From Theory to Practice: How Generative AI is Redefining Instructional Materials — from edtechinsiders.substack.com by Alex Sarlin
Top trends and insights from The Edtech Insiders Generative AI Map research process about how Generative AI is transforming Instructional Materials
As part of our updates to the Edtech Insiders Generative AI Map, we’re excited to release a new mini market map and article deep dive on Generative AI tools that are specifically designed for Instructional Materials use cases.
In our database, the Instructional Materials use case category encompasses tools that:
- Assist educators by streamlining lesson planning, curriculum development, and content customization
- Enable educators or students to transform materials into alternative formats, such as videos, podcasts, or other interactive media, in addition to leveraging gaming principles or immersive VR to enhance engagement
- Empower educators or students to transform text, video, slides or other source material into study aids like study guides, flashcards, practice tests, or graphic organizers
- Engage students through interactive lessons featuring historical figures, authors, or fictional characters
- Customize curriculum to individual needs or pedagogical approaches
- Empower educators or students to quickly create online learning assets and courses
On a somewhat-related note, also see:
- Tech & Learning Announces Winners of Best of 2024 — from techlearning.com by TL Editors
This annual award recognizes the best in EdTech from 2024
6% of Faculty Feel Supported on AI?! — from automatedteach.com by Graham Clay
Plus, a webinar on building AI tutors this Friday.
The Digital Education Council just released their Global AI Faculty Survey of 1,681 faculty members from 52 institutions across 28 countries, and the findings are eye-opening. (Click here if you missed their analogous survey of students.)
While 86% of faculty see themselves using AI in their future teaching [p. 21], only 6% strongly agree that their institutions have provided sufficient resources to develop their AI literacy [p. 35].
This is a concerning gap between the recognized power of AI and institutional support, and it’s a clear signal about where higher education needs to focus in 2025.
Speaking with faculty about AI around the world, I’ve seen this firsthand. But let’s dig into the survey’s findings.
…
Why the gap? Well, one explanation is that faculty lack institutional support.
The survey reveals that…
- 80% of faculty don’t find their institutional AI guidelines comprehensive [p. 32]
- 80% say their institutions haven’t made clear how AI can be used in teaching [p. 33]
- The top barrier to AI adoption, at 40%? “I don’t have time or resources to explore AI” [p. 9]
- The second-highest barrier, at 38%? “I am not sure how to use AI in my teaching” [p. 9]
From DSC:
I was in a teaching and learning group for 10+ years (and in several edtech-related positions before that). We had senior staff in place there, but we were mainly called upon for edtech, instructional technology, learning spaces, or LMS-related tasks and questions. Though we could have brought a lot of value to the pedagogical table, the vast majority of the faculty wanted to talk to other faculty members. Our group’s hard-earned — and expensive — expertise didn’t count. We ourselves were teaching classes…but not enough to be on par with the faculty members (at least in their minds). They didn’t seek us out. Perhaps we should have gone door to door, but we didn’t have the resources to do that.
Book groups were effective: when the T&L group met with faculty members to discuss a book, those discussions were productive. And in those groups, we DID have a seat at the pedagogical table.
But I’m not going to jump on the “we don’t have enough support” bandwagon. Faculty members seek out other faculty members. In many cases, if you aren’t faculty, you don’t count.
So if I were still working and were in a leadership position, I would sponsor some book study groups with faculty and personnel from teaching and learning centers. Topics for those books could include:
- What AI is
- What these technologies can offer
- What the LMS vendors are doing in this regard
- Ideas on how to use AI in one’s teaching
Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts
With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.
- Copy [the following] prompts into a document
- Use them in sequence as you write
- Adjust the word counts and specifics as needed
- Keep your responses for reference
- Use the same prompt template for similar sections to maintain consistency
Each prompt builds on the previous one, creating a systematic approach to helping you write your book.
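The authors describe running these prompts inside a Claude Project on Claude.ai; for anyone who prefers to script the same “each prompt builds on the previous one” pattern, here is a rough sketch using the Anthropic Python SDK (the model name, prompts, and word counts are placeholders, not the authors’ actual prompt set):

```python
# Rough sketch of a sequential, context-carrying prompt chain for a book project.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the
# environment; the model name and prompts are placeholders, not the article's prompts.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompts = [
    "Draft a one-paragraph premise for a nonfiction book about everyday AI use (~150 words).",
    "Expand that premise into a chapter-by-chapter outline (~400 words).",
    "Write the opening section of Chapter 1 based on the outline (~800 words).",
]

context = ""  # each step carries forward everything produced so far
for step, prompt in enumerate(prompts, start=1):
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder model name
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"Work so far:\n{context}\n\nNext task: {prompt}",
        }],
    )
    output = message.content[0].text
    context += f"\n\n--- Step {step} output ---\n{output}"
    print(f"Step {step} complete ({len(output.split())} words)")
```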
Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts
We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)
If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.
Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025
After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:
- Break your own AI policies before you implement them. …
- Fund your failures. …
- Resist the pilot program. …
- Host Anti-Tech Tech Talks …
- …+ several more tips
While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.
Maria Anderson responded to Clay’s posting with this idea:
Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.
And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.
Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.
In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?
You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.
That’s where the new Grammarly feature comes in.
The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”
AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.
Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade option called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.
Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.
The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.
‘Lazy and Mediocre’ HR Team Fired After Manager’s Own CV Gets Auto-Rejected in Seconds, Exposing System Failure — from ibtimes.co.uk by Vinay Patel
The automated system’s error highlights the potential for bias and inefficiency in technology-driven HR practices
An entire HR team was terminated after their manager discovered and confirmed that their system automatically rejected all candidates — including his own application.
The manager wrote in their comment, “Auto rejection systems from HR make me angry.” They explained that while searching for a new employee, their HR department could not find a single qualified candidate in three months. As expected, the suspicious manager decided to investigate.
“I created myself a new email and sent them a modified version of my CV with a fake name to see what was going on with the process,” they wrote. “And guess what, I got auto-rejected. HR didn’t even look at my CV.”
When the manager reported the issue to upper management, “half of the HR department was fired in the following weeks.” A typographical error with significant consequences caused the entire problem.
The manager works in the tech industry and was trying to hire developers. However, HR had set up the system to search for developers with expertise in the wrong development software, one that no longer exists.
From DSC:
Back in 2017, I had survived several rounds of layoffs at the then Calvin College (now Calvin University), but I didn’t survive the layoff of 12 people in the spring of 2017. I hadn’t needed to interview for a new job in quite a while. So boy, did I get a wake-up call when I discovered that Applicant Tracking Systems (ATSs) existed and could be tough to get past. (Also, the old-school outplacement firm that Calvin hired wasn’t much help in dealing with them either.)
I didn’t like these ATSs then, and I still have my concerns about them now. The above article points out that my concerns were/are at least somewhat founded. And if you take the entire day to research and apply for a position — only to get an instant reply back from the ATS — it’s very frustrating and discouraging.
Plus the ATSs may not pick up on nuances. An experienced human being might be able to see that a candidate’s skills are highly relevant and/or transferable to the position that they’re hiring for.
Networking is key, of course. But not everyone has been taught how to network, and not everyone gets past the ATS to have their resume viewed by a pair of human eyes. HR, IT, and any other relevant groups need to be very careful when programming their ATSs.
Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025
At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.
Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.
Also mentioned by Philippa:
- AI for Education and Carnegie Learning’s new partnership
- India emerges as Global Leader in AI Education: Bosch Tech Compass 2025 — from medianews4u.com
57% of Indians receive employer-provided AI training, surpassing Germany and other European nations
Bengaluru: India is emerging as a global leader in artificial intelligence (AI) education, with over 50% of its population actively self-educating in AI-related skills, according to Bosch’s fourth annual Tech Compass Survey. The report highlights India’s readiness to embrace AI in work, education, and daily life, positioning the nation as a frontrunner in the AI revolution.
ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest
AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.
The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.
While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it now as it continues to develop.
How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.
Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.
Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.
Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.
Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
Why institutional policies are slow to emerge in higher education
There are several challenges to making policy that leave institutions hesitant to produce it or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their areas but for the other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.
That’s naturally true with every policy. It does not often come fast enough and is often more reactive than proactive.
Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.
- Which Tool(s) To Use
- Training, Support, & Guidance, Oh My!
- Strategy: Setting a Direction…
Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?
What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.
NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey
PLUS: How to level up your AI workflows for 2025…
NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000).
…
And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy…
…
But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. NVIDIA also announced Cosmos, a platform for building physical AI (think: robots and self-driving cars)—which Jensen Huang calls “the ChatGPT moment for robotics.”
Jensen Huang’s latest CES speech: AI Agents are expected to become the next robotics industry, with a scale reaching trillions of dollars — from chaincatcher.com
NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering all computing needs from developers to ordinary users.
At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote speech, revealing the future of AI and computing. From the core token concept of generative AI to the launch of the new Blackwell architecture GPU, and the AI-driven digital future, this speech will profoundly impact the entire industry from a cross-disciplinary perspective.
Also see:
NVIDIA Project DIGITS: The World’s Smallest AI Supercomputer. — from nvidia.com
A Grace Blackwell AI Supercomputer on your desk.
From DSC:
I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.
OpenAI ‘now knows how to build AGI’ — from therundown.ai by Rowan Cheung
PLUS: AI phishing achieves alarming success rates
The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.
The details:
- Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
- Several of Samsung’s new Smart TVs will also have Microsoft Copilot built in, and the company also teased a potential AI partnership with Google.
- Samsung also announced the new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
- AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.
Why it matters: Samsung’s web of products is getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.
***
“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung
Understanding And Preparing For The 7 Levels Of AI Agents — from forbes.com by Douglas B. Laney
The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.
The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello
Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.
AI Agents Enhance Generative AI’s Impact
Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”
Here’s what nobody is telling you about AI agents in 2025 — from aidisruptor.ai by Alex McFarland
What’s really coming (and how to prepare).
Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.
Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19-percentage-point gap – it’s the difference between an interesting tech demo and a business-critical tool.
This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.
Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
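One way to see why that matters more than the raw percentages suggest: per-step errors compound across multi-step agent workflows. A back-of-the-envelope sketch (the step counts below are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope illustration: per-step accuracy compounds across a multi-step agent task.
# The step counts are illustrative assumptions, not figures from the article.
def task_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step of a workflow succeeds, assuming independent steps."""
    return per_step_accuracy ** steps

for steps in (1, 5, 10, 20):
    at_80 = task_success_rate(0.80, steps)
    at_99 = task_success_rate(0.99, steps)
    print(f"{steps:>2} steps: 80% per step -> {at_80:6.1%} end to end | 99% per step -> {at_99:6.1%}")
```

At five chained steps, 80% per-step accuracy already drops below a coin flip end to end, while 99% stays above 95%, which is why the jump from 80% to 99% is the difference the article describes.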
Why 2025 will be the year of AI orchestration — from venturebeat.com by Emilia David
In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.
As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.
In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.
Here are some themes we expect to see more in 2025.
Predictions For AI In 2025: Entrepreneurs Look Ahead — from forbes.com by Jodie Cook
AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book an Uber ride for you.” Start small with one agent handling one task. Build up to an army.
“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”
…
Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in the text format, and a lot of emphasis is put on prompting skills.
Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”
Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.
These 4 graphs show where AI is already impacting jobs — from fastcompany.com by Brandon Tucker
With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.
To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022. With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.
Why does this shift in AI titles matter for every industry?
AI educators are coming to this school – and it’s part of a trend — from techradar.com by Eric Hal Schwartz
Two hours of lessons, zero teachers
- An Arizona charter school will use AI instead of human teachers for two hours a day on academic lessons.
- The AI will customize lessons in real-time to match each student’s needs.
- The company has only tested this idea at private schools before but claims it hugely increases student academic success.
One school in Arizona is trying out a new educational model built around AI and a two-hour school day. When Arizona’s Unbound Academy opens, the only teachers will be artificial intelligence algorithms, a perfect utopia or dystopia depending on your point of view.
AI in Instructional Design: reflections on 2024 & predictions for 2025 — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, four new year’s resolutions for the AI-savvy instructional designer.
Debating About AI: A Free Comprehensive Guide to the Issues — from stefanbauschard.substack.com by Stefan Bauschard
In order to encourage and facilitate debate on key controversies related to AI, I put together this free 130+ page guide to the main arguments and ideas related to the controversies.
Universities need to step up their AGI game — from futureofbeinghuman.com by Andrew Maynard
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions
And because of this, I think there’s a unique opportunity for universities (research universities in particular) to up their game and play a leadership role in navigating the coming advanced AI transition.
Of course, there are already a number of respected university-based initiatives that are working on parts of the challenge. Stanford HAI (Human-centered Artificial Intelligence) is one that stands out, as does the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Center for Governance of AI at the University of Oxford. But these and other initiatives are barely scratching the surface of what is needed to help successfully navigate advanced AI transitions.
If universities are to be leaders rather than bystanders in ensuring human flourishing in an age of AI, there’s an urgent need for bolder and more creative forward-looking initiatives that support research, teaching, thought leadership, and knowledge mobilization, at the intersection of advanced AI and all aspects of what it means to thrive and grow as a species.
How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein
Ever since a revolutionary new version of ChatGPT became publicly available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.
…
Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.
Here are the five most popular stories that Education Week published in 2024 about AI in schools.
What’s next with AI in higher education? — from msn.com by Science X Staff
Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:
1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change
5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns
This is wild.
Hume AI just announced a new AI voice model.
It’s like ChatGPT Advanced Voice Mode, Elevenlabs Voice Design, and Google NotebookLM in one.
It can create a voice with a whole personality from a description or a 5-second clip. And more.
8 wild examples: pic.twitter.com/XOA779jSiE
— Min Choi (@minchoi) December 23, 2024
AI in 2024: Insights From our 5 Million Readers — from linkedin.com by Generative AI
Checking the Pulse: The Impact of AI on Everyday Lives
So, what exactly did our users have to say about how AI transformed their lives this year?
Top 2024 Developments in AI
- Video Generation…
- AI Employees…
- Open Source Advancements…
Getting ready for 2025: your AI team members (Gift lesson 3/3) — from flexos.com by Daan van Rossum
And that’s why today, I’ll tell you exactly which AI tools I’ve recommended for the top 5 use cases to almost 200 business leaders who took the Lead with AI course.
1. Email Management: Simplifying Communication with AI
- Microsoft Copilot for Outlook. …
- Gemini AI for Gmail. …
- Grammarly. …
2. Meeting Management: Maximize Your Time
- Otter.ai. …
- Copilot for Microsoft Teams. …
- Other AI Meeting Assistants. Zoom AI Companion, Granola, and Fathom
3. Research: Streamlining Information Gathering
- ChatGPT. …
- Perplexity. …
- Consensus. …
…plus several more items and tools that were mentioned by Daan.
Tech Trends 2025 — from deloitte.com by Deloitte Insights
In Deloitte’s 16th annual Tech Trends report, AI is the common thread of nearly every trend. Moving forward, it will be part of the substructure of everything we do.
We propose that the future of technology isn’t so much about more AI as it is about ubiquitous AI. We expect that, going forward, AI will become so fundamentally woven into the fabric of our lives that it’s everywhere, and so foundational that we stop noticing it.
AI will eventually follow a similar path, becoming so ubiquitous that it will be a part of the unseen substructure of everything we do, and we eventually won’t even know it’s there. It will quietly hum along in the background, optimizing traffic in our cities, personalizing our health care, and creating adaptive and accessible learning paths in education. We won’t “use” AI. We’ll just experience a world where things work smarter, faster, and more intuitively—like magic, but grounded in algorithms. We expect that it will provide a foundation for business and personal growth while also adapting and sustaining itself over time.
Nowhere is this AI-infused future more evident than in this year’s Tech Trends report, which each year explores emerging trends across the six macro forces of information technology (figure 1). Half of the trends that we’ve chronicled are elevating forces—interaction, information, and computation—that underpin innovation and growth. The other half—the grounding forces of the business of technology, cyber and trust, and core modernization—help enterprises seamlessly operate while they grow.
Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar
Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.
Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?
The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.
Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer
Top AI tools for business — from theneuron.ai
This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends
Another busy AI news week, so I organized it into categories:
- Image to 3D
- AI Video
- AI Image Models & Tools
- AI Assistants / LLMs
- AI Creative Workflow: Luma AI Boards
Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.
Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.
What Is Agentic AI? — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.
Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?
That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.
While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.
OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn
If OpenAI has its way, the next online course you take might have a chatbot component.
Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.
“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”
15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use
There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.
Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience
Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, showcasing it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.
In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics), the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.
The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.
What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?
They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.
…the above article links to:
ChatGPT Foundations for K–12 Educators — from commonsense.org
This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.
Learning outcomes:
- Understand what ChatGPT is and how it works.
- Demonstrate ways to use ChatGPT to support your teaching practices.
- Implement best practices for applying responsible AI principles in a school setting.
Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!
- NotebookLM: The Start of an AI Operating System
- Google is Serious About AI and Learning
- Google’s LearnLM Now Available in AI Studio
- Collaboration is King
- If You Give a Teacher a Ferrari
Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.
AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.
…
Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.
“In contemplating what the future holds, one must approach predictions with humility,” the study says…
“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”
…
The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.
AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.
The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.
Increasingly, at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.
Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin
A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.
This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.
Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.
Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.
The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year
- AI-Powered Analysis: Creating Detailed Learner Personas…
- AI-Powered Design: Optimising Instructional Strategies…
- AI-Powered Development & Implementation: Quality Assurance…
- AI-Powered Evaluation: Predictive Impact Assessment…
How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.
In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
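A minimal sketch of what “a few simple instructions” might look like in practice, using the OpenAI Python SDK (the model name, categories, and sample responses are assumptions for illustration, not details from the research described in the article):

```python
# Minimal sketch: have an LLM classify short student responses into educator-defined
# categories, then tally the results into numbers that can be analyzed.
# The model name, categories, and sample responses are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["on-topic and correct", "on-topic but incorrect", "off-topic"]

def classify_response(student_text: str) -> str:
    """Ask the model to assign exactly one predefined category to a student response."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Classify the student's answer into exactly one of these categories "
                        f"and reply with the category text only: {CATEGORIES}"},
            {"role": "user", "content": student_text},
        ],
    )
    return completion.choices[0].message.content.strip()

sample_responses = [
    "Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.",
    "Photosynthesis is how plants breathe in oxygen at night.",
    "My favorite plant is a cactus.",
]
counts = Counter(classify_response(text) for text in sample_responses)
print(counts)  # a quick numeric summary educators could analyze
```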
Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.
…
Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon
This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.
The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?
You might be surprised by how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories in software review sites.
The Edtech Insiders Generative AI Map — from edtechinsiders.substack.com by Ben Kornell, Alex Sarlin, Sarah Morin, and Laurence Holt
A market map and database featuring 60+ use cases for GenAI in education and 300+ GenAI powered education tools.
A Student’s Guide to Writing with ChatGPT — from openai.com
Used thoughtfully, ChatGPT can be a powerful tool to help students develop skills of rigorous thinking and clear writing, assisting them in thinking through ideas, mastering complex concepts, and getting feedback on drafts.
There are also ways to use ChatGPT that are counterproductive to learning—like generating an essay instead of writing it oneself, which deprives students of the opportunity to practice, improve their skills, and grapple with the material.
For students committed to becoming better writers and thinkers, here are some ways to use ChatGPT to engage more deeply with the learning process.
Community Colleges Are Rolling Out AI Programs—With a Boost from Big Tech — from workshift.org by Colleen Connolly
The Big Idea: As employers increasingly seek out applicants with AI skills, community colleges are well-positioned to train up the workforce. Partnerships with tech companies, like the AI Incubator Network, are helping some colleges get the resources and funding they need to overhaul programs and create new AI-focused ones.
Along these lines also see:
Practical AI Training — from the-job.beehiiv.com by Paul Fain
Community colleges get help from Big Tech to prepare students for applied AI roles at smaller companies.
Miami Dade and other two-year colleges try to be nimble by offering training for AI-related jobs while focusing on local employers. Also, Intel’s business struggles while the two-year sector wonders if Republicans will cut funds for semiconductor production.
Can One AI Agent Do Everything? How To Redesign Jobs for AI? HR Expertise And A Big Future for L&D. — from joshbersin.com by Josh Bersin
Here’s the AI summary, which is pretty good.
In this conversation, Josh Bersin discusses the evolving landscape of AI platforms, particularly focusing on Microsoft’s positioning and the challenges of creating a universal AI agent. He delves into the complexities of government efficiency, emphasizing the institutional challenges faced in re-engineering government operations.
The conversation also highlights the automation of work tasks and the need for businesses to decompose job functions for better efficiency.
Bersin stresses the importance of expertise in HR, advocating for a shift towards full stack professionals who possess a broad understanding of various HR functions.
Finally, he addresses the impending disruption in Learning and Development (L&D) due to AI advancements, predicting a significant transformation in how L&D professionals will manage knowledge and skills.