From DSC:
After seeing Sam’s posting below, I can’t help but wonder:

  • How might the memory of an AI over time impact the ability to offer much more personalized learning?
  • How will that kind of memory positively impact a person’s learning-related profile?
  • Which learning-related agents get called upon?
  • Which learning-related preferences does a person have while learning about something new?
  • Which methods have worked best in the past for that individual? Which methods didn’t work so well with him or her?



 

Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; then, guided by human professors as they sort through the material, the AI would come to understand the structure of the discipline and develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If students need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student who is going on a trip and wishes to take an exam on the plane will be able to log on and complete the AI-designed and AI-administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include; this solid article contains many more passages worth thinking about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him; as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education as we know it. Scott asserts that some significant and upcoming impacts will be experienced by faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there: he also addresses presidents, deans, and other members of institutions’ leadership teams.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned their 5-year, $100M ed push last year and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would be able to do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and, vice versa, technologists likely won’t become faculty members).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can customize to their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

Also see:

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair, because it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.

 

Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously — from venturebeat.com by Michael Nuñez

Uplimit unveiled a suite of AI-powered learning agents today designed to help companies rapidly upskill employees while dramatically reducing administrative burdens traditionally associated with corporate training.

The San Francisco-based company announced three sets of purpose-built AI agents that promise to change how enterprises approach learning and development: skill-building agents, program management agents, and teaching assistant agents. The technology aims to address the growing skills gap as AI advances faster than most workforces can adapt.

“There is an unprecedented need for continuous learning—at a scale and speed traditional systems were never built to handle,” said Julia Stiglitz, CEO and co-founder of Uplimit, in an interview with VentureBeat. “The companies best positioned to thrive aren’t choosing between AI and their people—they’re investing in both.”


Introducing Claude for Education — from anthropic.com

Today we’re launching Claude for Education, a specialized version of Claude tailored for higher education institutions. This initiative equips universities to develop and implement AI-enabled approaches across teaching, learning, and administration—ensuring educators and students play a key role in actively shaping AI’s role in society.

As part of announcing Claude for Education, we’re introducing:

  1. Learning mode: A new Claude experience that guides students’ reasoning process rather than providing answers, helping develop critical thinking skills
  2. University-wide Claude availability: Full campus access agreements with Northeastern University, London School of Economics and Political Science (LSE), and Champlain College, making Claude available to all students
  3. Academic partnerships: Joining Internet2 and working with Instructure to embed AI into teaching & learning with Canvas LMS
  4. Student programs: A new Claude Campus Ambassadors program along with an initiative offering API credits for student projects

A comment on this from The Rundown AI:

Why it matters: Education continues to grapple with AI, but Anthropic is flipping the script by making the tech a partner in developing critical thinking rather than an answer engine. While the controversy over its use likely isn’t going away, this generation of students will have access to the most personalized, high-quality learning tools ever.


Should College Graduates Be AI Literate? — from chronicle.com by Beth McMurtrie (behind a paywall)
More institutions are saying yes. Persuading professors is only the first barrier they face.

Last fall one of Jacqueline Fajardo’s students came to her office, eager to tell her about an AI tool that was helping him learn general chemistry. Had she heard of Google NotebookLM? He had been using it for half a semester in her honors course. He confidently showed her how he could type in the learning outcomes she posted for each class and the tool would produce explanations and study guides. It even created a podcast based on an academic paper he had uploaded. He did not feel it was important to take detailed notes in class because the AI tool was able to summarize the key points of her lectures.


Showing Up for the Future: Why Educators Can’t Sit Out the AI Conversation — from marcwatkins.substack.com with a guest post from Lew Ludwig

The Risk of Disengagement
Let’s be honest: most of us aren’t jumping headfirst into AI. At many of our institutions, it’s not a gold rush—it’s a quiet standoff. But the group I worry most about isn’t the early adopters. It’s the faculty who’ve decided to opt out altogether.

That choice often comes from a place of care. Concerns about data privacy, climate impact, exploitative labor, and the ethics of using large language models are real—and important. But choosing not to engage at all, even on ethical grounds, doesn’t remove us from the system. It just removes our voices from the conversation.

And without those voices, we risk letting others—those with very different priorities—make the decisions that shape what AI looks like in our classrooms, on our campuses, and in our broader culture of learning.



Turbocharge Your Professional Development with AI — from learningguild.com by Dr. RK Prasad

You’ve just mastered a few new eLearning authoring tools, and now AI is knocking on the door, offering to do your job faster, smarter, and without needing coffee breaks. Should you be worried? Or excited?

If you’re a Learning and Development (L&D) professional today, AI is more than just a buzzword—it’s transforming the way we design, deliver, and measure corporate training. But here’s the good news: AI isn’t here to replace you. It’s here to make you better at what you do.

The challenge is to harness its potential to build digital-ready talent, not just within your organization but within yourself.

Let’s explore how AI is reshaping L&D strategies and how you can leverage it for professional development.


5 Recent AI Notables — from automatedteach.com by Graham Clay

1. OpenAI’s New Image Generator
What Happened: OpenAI integrated a much more powerful image generator directly into GPT-4o, making it the default image creator in ChatGPT. Unlike previous image models, this one excels at accurately rendering text in images, precise visualization of diagrams/charts, and multi-turn image refinement through conversation.

Why It’s Big: For educators, this represents a significant advancement in creating educational visuals, infographics, diagrams, and other instructional materials with unprecedented accuracy and control. It’s not perfect, but you can now quickly generate custom illustrations that accurately display mathematical equations, chemical formulas, or process workflows — previously a significant hurdle in digital content creation — without requiring graphic design expertise or expensive software. This capability dramatically reduces the time between conceptualizing a visual aid and implementing it in course materials.


The 4 AI modes that will supercharge your workflow — from aiwithallie.beehiiv.com by Allie K. Miller
The framework most people and companies won’t discover until 2026


 

7 ways to use ChatGPT’s new image AI — from wondertools.substack.com by Jeremy Caplan
Transform your ideas into strong visuals

7 ways to use ChatGPT’s new image AI

  • Cartoons
  • Infographics
  • Posters
  • …plus several more

 

AI in Education Survey: What UK and US Educators Think in 2025 — from twinkl.com
As artificial intelligence (AI) continues to shape the world around us, Twinkl conducted a large-scale survey between January 15th and January 22nd to explore its impact on the education sector, as well as the work lives of teachers across the UK and the USA.

Teachers’ use of AI for work continues to rise
Twinkl’s survey asked teachers whether they were currently using AI for work purposes. Comparing these findings to similar surveys over recent years shows the use of AI tools by teachers has seen a significant increase across both the UK and USA.

  • According to two UK surveys by the National Literacy Trust – 30% of teachers used generative AI in 2023 and nearly half (47.7%) in 2024. Twinkl’s survey indicates that AI adoption continues to rise rapidly, with 60% of UK educators currently integrating it into their work lives in 2025.
  • Similarly, with 62% of US teachers currently using AI for work, uptake appears to have risen greatly in the past 12 months, with just 25% saying they were leveraging the new technology in the 2023-24 school year according to a RAND report.
  • Teachers are using AI more for work than in their personal lives: In the UK, personal usage drops to 43% (from 60% at school).  In the US, 52% are using AI for non-work purposes (versus 62% in education settings).

    60% of UK teachers and 62% of US teachers use AI in their work life in 2025.

 




Students and folks looking for work may want to check out:

Also relevant/see:


 

Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan
My favorite tactics for making the most of AI — a podcast conversation

AI tools I consistently rely on (areas covered below)

  • Research and analysis
  • Communication efficiency
  • Multimedia creation

AI tactics that work surprisingly well 

1. Reverse interviews
Instead of just querying AI, have it interview you: “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”

This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.


OpenAI’s Deep Research Agent Is Coming for White-Collar Work — from wired.com by Will Knight
The research-focused agent shows how a new generation of more capable AI models could automate some office tasks.

Isla Fulford, a researcher at OpenAI, had a hunch that Deep Research would be a hit even before it was released.

Fulford had helped build the artificial intelligence agent, which autonomously explores the web, deciding for itself what links to click, what to read, and what to collate into an in-depth report. OpenAI first made Deep Research available internally; whenever it went down, Fulford says, she was inundated with queries from colleagues eager to have it back. “The number of people who were DMing me made us pretty excited,” says Fulford.

Since going live to the public on February 2, Deep Research has proven to be a hit with many users outside the company too.


Nvidia to open quantum computing research center in Boston — from seekingalpha.com by Ravikash Bakolia

Nvidia (NASDAQ:NVDA) will open a quantum computing research lab in Boston which is expected to start operations later this year.

The Nvidia Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing, said the company in a March 18 press release.

Nvidia’s CEO Jensen Huang also made this announcement on Thursday at the company’s first-ever Quantum Day at its annual GTC event.


French quantum computer firm Pasqal links up with NVIDIA — from reuters.com

PARIS, March 21 (Reuters) – Pasqal, a fast-growing French quantum computer start-up company, announced on Friday a partnership with chip giant Nvidia (NVDA.O) whereby Pasqal’s customers would gain access to more tools to develop quantum applications.

Pasqal said it would connect its quantum computing units and cloud platform onto NVIDIA’s open-source platform called CUDA-Q.


Introducing next-generation audio models in the API — from openai.com
A new suite of audio models to power voice agents, now available to developers worldwide.

Today, we’re launching new speech-to-text and text-to-speech audio models in the API—making it possible to build more powerful, customizable, and intelligent voice agents that offer real value. Our latest speech-to-text models set a new state-of-the-art benchmark, outperforming existing solutions in accuracy and reliability—especially in challenging scenarios involving accents, noisy environments, and varying speech speeds. These improvements increase transcription reliability, making the models especially well-suited for use cases like customer call centers, meeting note transcription, and more.


 

8 Weeks Left to Prepare Students for the AI-Enhanced Workplace — from insidehighered.com by Ray Schroeder
We are down to the final weeks to fully prepare students for entry into the AI-enhanced workplace. Are your students ready?

The urgent task facing those of us who teach and advise students, whether they be degree program or certificate seeking, is to ensure that they are prepared to enter (or re-enter) the workplace with skills and knowledge that are relevant to 2025 and beyond. One of the first skills to cultivate is an understanding of what kinds of services this emerging technology can provide to enhance the worker’s productivity and value to the institution or corporation.

Given that short period of time, coupled with the need to cover the scheduled information in the syllabus, I recommend that we consider merging AI use into authentic assignments and assessments, supplementary modules, and other resources to prepare for AI.


Learning Design in the Era of Agentic AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Aka, how to design online async learning experiences that learners can’t afford to delegate to AI agents

The point I put forward was that the problem is not AI’s ability to complete online async courses, but that online async courses deliver so little value to our learners that they delegate their completion to AI.

The harsh reality is that this is not an AI problem — it is a learning design problem.

However, this realisation presents us with an opportunity which we overall seem keen to embrace. Rather than seeking out ways to block AI agents, we seem largely to agree that we should use this as a moment to reimagine online async learning itself.



8 Schools Innovating With Google AI — Here’s What They’re Doing — from forbes.com by Dan Fitzpatrick

While fears of AI replacing educators swirl in the public consciousness, a cohort of pioneering institutions is demonstrating a far more nuanced reality. These eight universities and schools aren’t just experimenting with AI, they’re fundamentally reshaping their educational ecosystems. From personalized learning in K-12 to advanced research in higher education, these institutions are leveraging Google’s AI to empower students, enhance teaching, and streamline operations.



 

AI Can’t Fix Bad Learning — from nafez.substack.com by Nafez Dakkak
Why pedagogy and good learning design still come first, and why faster isn’t always better.

I’ve followed Dr. Philippa Hardman’s work for years, and every time I engage with her work, I find it both refreshing and deeply grounded.

As one of the leading voices in learning design, Philippa has been able to cut through the noise and focus on what truly matters: designing learning experiences that actually work.

In an era where AI promises speed and scale, Philippa is making a different argument: faster isn’t always better. As the creator of Epiphany AI (a Figma for learning designers), Philippa is focused on closing the gap between what great learning design should look like and what’s actually being delivered.

While many AI tools optimize for the average, she believes the future belongs to those who can leverage AI without compromising on expertise or quality. Philippa wants learning designers to be more ambitious, using AI to achieve what wasn’t possible before.

In this conversation, we explore why pedagogy must lead technology, how the return on expertise is only increasing in an AI-driven world, and why building faster doesn’t always mean building better.





Pearson, AWS Collaborate to Enhance AI-Powered Learning Functionality — from cloudwars.com

Pearson, the global educational publisher, and AWS have expanded their existing partnership to enhance AI-driven learning. AWS will help Pearson to deliver AI-powered lesson generation and more for educators, support workforce skilling initiatives, and continue an ongoing collaboration with Pearson VUE for AWS certification.


 

From DSC:
Look out Google, Amazon, and others! Nvidia is putting the pedal to the metal in terms of being innovative and visionary! They are leaving the likes of Apple in the dust.

The top talent out there is likely to go to Nvidia for a while. Engineers, programmers/software architects, network architects, product designers, data specialists, AI researchers, developers of robotics and autonomous vehicles, R&D specialists, computer vision specialists, natural language processing experts, and many more types of positions will be flocking to Nvidia to work for a company that has already changed the world and will likely continue to do so for years to come. 



NVIDIA’s AI Superbowl — from theneurondaily.com by Noah and Grant
PLUS: Prompt tips to make AI writing more natural

That’s despite a flood of new announcements (here’s a 16 min video recap), which included:

  1. A new architecture for massive AI data centers (now called “AI factories”).
  2. A physics engine for robot training built with Disney and DeepMind.
  3. A partnership with GM to develop next-gen vehicles, factories, and robots.
  4. A new Blackwell chip with “Dynamo” software that makes AI reasoning 40x faster than previous generations.
  5. A new “Rubin” chip slated for 2026 and a “Feynman” chip set for 2028.

For enterprises, NVIDIA unveiled DGX Spark and DGX Station—Jensen’s vision of AI-era computing, bringing NVIDIA’s powerful Blackwell chip directly to your desk.


Nvidia Bets Big on Synthetic Data — from wired.com by Lauren Goode
Nvidia has acquired synthetic data startup Gretel to bolster the AI training data used by the chip maker’s customers and developers.


Nvidia, xAI to Join BlackRock and Microsoft’s $30 Billion AI Infrastructure Fund — from investopedia.com by Aaron McDade
Nvidia and xAI are joining BlackRock and Microsoft in an AI infrastructure group seeking $30 billion in funding. The group was first announced in September as BlackRock and Microsoft sought to fund new data centers to power AI products.



Nvidia CEO Jensen Huang says we’ll soon see 1 million GPU data centers visible from space — from finance.yahoo.com by Daniel Howley
Nvidia CEO Jensen Huang says the company is preparing for 1 million GPU data centers.


Nvidia stock stems losses as GTC leaves Wall Street analysts ‘comfortable with long term AI demand’ — from finance.yahoo.com by Laura Bratton
Nvidia stock reversed direction after a two-day slide that saw shares lose 5% as the AI chipmaker’s annual GTC event failed to excite investors amid a broader market downturn.


Microsoft, Google, and Oracle Deepen Nvidia Partnerships. This Stock Got the Biggest GTC Boost. — from barrons.com by Adam Clark and Elsa Ohlen


The 4 Big Surprises from Nvidia’s ‘Super Bowl of AI’ GTC Keynote — from barrons.com by Tae Kim; behind a paywall

AI Super Bowl. Hi everyone. This week, 20,000 engineers, scientists, industry executives, and yours truly descended upon San Jose, Calif. for Nvidia’s annual GTC developers’ conference, which has been dubbed the “Super Bowl of AI.”


 

Drive Continuous Learning: AI Integrates Work & Training — from learningguild.com by George Hanshaw

Imagine with me for a moment: Training is no longer confined to scheduled sessions in a classroom, an online module or even a microlearning you click to activate during your workflow. Imagine training being delivered because the system senses what you are doing and provides instructions and job aids without you having to take an action.

The rapid evolution of artificial intelligence (AI) and wearable technology has made it easier than ever to seamlessly integrate learning directly into the workflow. Smart glasses, earpieces, and other advanced devices are redefining how employees gain knowledge and skills by delivering microlearning moments precisely when and where they are needed.

AI plays a crucial role in this transformation by sensing the optimal moment to deliver the training through augmented reality (AR).



These Schools Are Banding Together to Make Better Use of AI in Education — from edsurge.com by Emily Tate Sullivan

Kennelly and Geraffo are part of a small team at their school in Denver, DSST: College View High School, that is participating in the School Teams AI Collaborative, a year-long pilot initiative in which more than 80 educators from 19 traditional public and charter schools across the country are experimenting with and evaluating AI-enabled instruction to improve teaching and learning.

The goal is for some of AI’s earliest adopters in education to band together, share ideas and eventually help lead the way on what they and their colleagues around the U.S. could do with the emerging technology.

“Pretty early on we thought it was going to be a massive failure,” says Kennelly of last semester’s project. “But it became a huge hit. Students loved it. They were like, ‘I ran to second period to build this thing.’”



Transactional vs. Conversational Visions of Generative AI in Teaching — from elmartinsen.substack.com by Eric Lars Martinsen
AI as a Printer, or AI as a Thought Partner

As writing instructors, we have a choice in how we frame AI for our students. I invite you to:

  1. Experiment with AI as a conversation partner yourself before introducing it to students
  2. Design assignments that leverage AI’s strengths as a thought partner rather than trying to “AI-proof” your existing assignments
  3. Explicitly teach students how to engage in productive dialogue with AI—how to ask good questions, challenge AI’s assumptions, and use it to refine rather than replace their thinking
  4. Share your experiences, both positive and negative, with colleagues to build our collective understanding of effective AI integration

 

You can now use Deep Research without $200 — from flexos.work


Accelerating scientific breakthroughs with an AI co-scientist — from research.google by Juraj Gottweis and Vivek Natarajan

We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.


Now decides next: Generating a new future — from Deloitte.com
Deloitte’s State of Generative AI in the Enterprise Quarter four report

There is a speed limit. GenAI technology continues to advance at incredible speed. However, most organizations are moving at the speed of organizations, not at the speed of technology. No matter how quickly the technology advances—or how hard the companies producing GenAI technology push—organizational change in an enterprise can only happen so fast.

Barriers are evolving. Significant barriers to scaling and value creation are still widespread across key areas. And, over the past year, regulatory uncertainty and risk management have risen on organizations’ lists of concerns to address. Also, levels of trust in GenAI are still moderate for the majority of organizations. Even so, with increased customization and accuracy of models, combined with a focus on better governance, adoption of GenAI is becoming more established.

Some uses are outpacing others. Application of GenAI is further along in some business areas than in others in terms of integration, return on investment (ROI) and expectations. The IT function is most mature; cybersecurity, operations, marketing and customer service are also showing strong adoption and results. Organizations reporting higher ROI for their most scaled initiatives are broadly further along in their GenAI journeys.

 

Nvidia helps launch AI platform for teaching American Sign Language — from venturebeat.com by Dean Takahashi; via Claire Zau

Nvidia has unveiled a new AI platform for teaching people how to use American Sign Language to help bridge communication gaps.

The Signs platform is creating a validated dataset for sign language learners and developers of ASL-based AI applications.

Nvidia, the American Society for Deaf Children and creative agency Hello Monday are helping close this gap with Signs, an interactive web platform built to support ASL learning and the development of accessible AI applications.


Using Gen AI to Design, Implement, and Assess PBL — from gettingsmart.com by David Ross

Key Points

  • Generative AI can significantly reduce the time and effort required in designing PBL by providing tools for research, brainstorming, and organization.
  • AI tools can assist educators in managing project implementation and assessment, providing formative feedback and organizing resources efficiently.

I usually conclude blogs with some pithy words, but this time I’ll turn the microphone over to Rachel Harcrow, a high school English/Language Arts teacher at Young Women’s College Prep Charter School of Rochester, NY: “After years of struggling to call myself a PBL practitioner, I finally feel comfortable saying I am, thanks to the power of Gen AI,” Harcrow told me. “Initial ideas now turn into fully fledged high-quality project plans in minutes that I can refine, giving me the space and energy to focus on what truly matters: My students.”


AI Resources for District Leaders — from techlearning.com by Steve Baule
Educational leaders aiming to effectively integrate generative AI into their schools should consider several key resources

To truly harness the transformative power of generative AI in education, district leaders must navigate a landscape rich with resources and opportunities. By delving into state and national guidelines, exploring successful case studies, utilizing innovative planning tools, and engaging in professional development, educational leaders can craft robust implementation plans. These plans can then assist in integrating AI seamlessly into their schools and elevate the learning experience to new heights.


Anthropic brings ‘extended thinking’ to Claude, which can solve complex physics problems with 96.5% accuracy — from rdworldonline.com by Brian Buntz

Anthropic, a favorite frontier AI lab among many coders and genAI power users, has unveiled Claude 3.7 Sonnet, its first “hybrid reasoning” AI model. It is capable of both near-instant answers and in-depth, step-by-step reasoning within a single system.

Users can toggle an extended thinking mode in which the model self-reflects before answering, considerably improving performance on complex tasks like math, physics, and coding. In early testing by the author, the model largely succeeded in generating Python programs (related to unsupervised learning) close to 1,000 lines long that ran without error on the first or second try.
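To give a flavor of the kind of unsupervised-learning Python described above (the author’s generated program is not reproduced here), a minimal, hand-written k-means sketch in plain Python:

```python
# A minimal k-means clustering sketch -- illustrative of the kind of
# unsupervised-learning code the article describes, not the generated program.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2-D points into k groups by iterative centroid refinement."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k), key=lambda c: (x - centroids[c][0]) ** 2
                                            + (y - centroids[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centroids, clusters = kmeans(points, k=2)
```

Even this two-step loop (assign, then update) captures the structure a model must get right across hundreds of lines when scaled up with data loading, distance metrics, and evaluation.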


New Tools. Old Complaints. Why AI Won’t Kill Education or Fix It — from coolcatteacher.com by Vicki Davis; via Stephen Downes

AI won’t kill education. But will it kill learning? The challenge isn’t AI itself—it’s whether students can still think for themselves when the answers are always one click away.

Wait. Before you go, let me ask you one thing.
AI has opportunities to help learning. But it also won’t fix it. The real question isn’t whether students can use AI—but whether they’re still learning without it.

Whether the learning is happening between the ears.

And so much of what we teach in schools isn’t the answers on a test. It answers questions like “What is my purpose in life?” “How do I make friends?” and “How can I help my team be stronger?” Questions that aren’t asked on a test but are essential to living a good life. These questions aren’t answered between the ears but within the heart.

That, my friends, is what teaching has always been about.

The heart.

And the heart of the matter is that we have new challenges, but these are old complaints. Complaints since the beginning of time and teaching. And in those days, you didn’t just need kids to be able to talk about how to build a fire; they had to make one themselves. Their lives depended on it.

And these days, we need to build another kind of fire. A fire that sparks the joy of learning. The joy of the opportunities that await us, sparked by some of the most powerful tools ever invented. Kids need to do more than just talk about making a difference; they need to know how to build a better world tomorrow. Our lives depend on it.


How Debating Skills Can Help Us In The Fight Against AI — from adigaskell.org by Adi Gaskell

Debating skills have a range of benefits in the workplace, from helping to improve our communication to bolstering our critical thinking skills. Research from the University of Mississippi suggests it might also help us in the battle with AI in the workplace.

We can often assume that debate teaches us nothing more than how to argue our point, but in order to do this, we have to understand both our own take on a subject and that of our opponent. This allows us to see both sides of any issue we happen to be debating.

“Even though AI has offered a shortcut through the writing process, it actually still is important to be able to write and speak and think on your own,” the researchers explain. “That’s what the focus of this research is: how debate engenders those aspects of being able to write and speak and study and research on your own.”

 

The Learning & Development Global Sentiment Survey 2025 — from donaldhtaylor.co.uk by Don Taylor

The L&D Global Sentiment Survey, now in its 12th year, once again asked two key questions of L&D professionals worldwide:

  • What will be hot in workplace learning in 2025?
  • What are your L&D challenges in 2025?

For the obligatory question on what they considered ‘hot’ topics, respondents voted for one to three of 15 suggested options, plus a free text ‘Other’ option. Over 3,000 voters participated from nearly 100 countries. 85% shared their challenges for 2025.

The results show more interest in AI, a renewed focus on showing the value of L&D, and some signs of greater maturity around our understanding of AI in L&D.


 
© 2025 | Daniel Christian