Something Big Is Happening — from shumer.dev by Matt Shumer; see the piece below from the BIG Questions Institute, where I first came across this article

I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.


They’ve now done it. And they’re moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.


What “Something Big Is Happening” Means for Schools — from/by the BIG Questions Institute
Matt Shumer’s newsletter post Something Big Is Happening was read more than 80 million times within a week of its publication on February 9.

Still, it’s worth reading Shumer’s post. Given the claims and warnings in Something Big Is Happening (and countless other articles), how would you truly, honestly respond to these questions:

  • What will the purpose of school be in 5 years?
  • What are we doing now that we must leave behind right away?
  • What can we leave behind gradually?
  • What does rigor look like in this AI-powered world?
  • Does our strategy amount to making adjustments at the margins, or are we preparing our students for a fundamental shift?
  • What is our definition of success? How do the implications of AI and jobs (and other important forces, from geopolitical shifts and climate change to mental health needs and shifting generational values) impact the outcomes we prioritize? What is the story of success we want to pass on to our students and wider community?
 

6 Ed Tech Tools to Try in 2026 — from cultofpedagogy.com by Jennifer Gonzalez

It’s that time again ~ the annual round-up of tech tools we think are worth a look this year. This year I really feel like there’s something for everyone: history teachers, math and science teachers, people who run makerspaces, teachers interested in music or podcasting, writing teachers, special ed teachers, and anyone whose course content could be made clearer through graphic organizers.


Also somewhat relevant here, see:


 

Why Parents Aren’t Reading to Kids, and What It Means for Young Students — from the74million.org by Jessika Harkay
A recent study found less than half of children are read to daily. The consequences are serious for early learners who enter school unprepared.

For children not getting the benefits of being read to at home, the opportunity gap has widened, with those young students entering school unprepared compared to those who have been read to.

“The gap really begins very, very early on. I think we underestimate how large a gap we’re already seeing in kindergarten,” said Susan Neuman, professor of childhood and literacy education at New York University, adding she recently visited a New York City kindergarten classroom and saw some children who only knew two letters compared to others who were prepared to read phrases.

A 2019 Ohio State University study found a 5-year-old child who is read to daily would be exposed to nearly 300,000 more words than one who isn’t read to regularly.

 

News deserts hit new high and 50 million have limited access to local news, study finds — from medill.northwestern.edu
Federal funding cuts to public broadcasting may accelerate local news crisis

EVANSTON, ILL. — The number of local news deserts in the U.S. jumped to record levels this year as newspaper closures continued unabated, and funding cuts to public radio could worsen the problem in coming months, according to the Medill State of Local News Report 2025 released today.

While the local news crisis deepened overall, Medill researchers found cause for optimism — more than 300 local news startups have launched over the past five years, 80% of which were digital-only outlets.

For the fourth consecutive year, the Medill Local News Initiative at Northwestern University’s Medill School of Journalism, Media, Integrated Marketing Communications conducted a months-long, county-by-county survey of local news organizations to identify trends in the rapidly morphing local media landscape. Researchers looked at local newspapers, digital-only sites, ethnic media and public broadcasters.


               


How Local Newsrooms Are Rethinking Political Coverage — from adigaskell.org

For decades, election reporting in the U.S. has leaned heavily on the “horse race”—who’s up, who’s down, and who’s raising the most money. But new research from the University of Kansas suggests that this approach is starting to shift, thanks to a national training program aimed at helping journalists better engage with their communities.

The program, called Democracy SOS, encourages reporters to move beyond headline polls and campaign drama. Instead, it asks them to focus on the issues people care about and explain how those issues are being tackled. In other words: less spectacle, more substance.


Addendum on 11/13/25:

Why Losing Local Newspapers Costs More Than We Think — from adigaskell.org

So why can’t digital journalism fill the gap?
The researchers argue that online media isn’t a true replacement for local reporting. “If you’re in New York writing about San Francisco, you just don’t know the area,” they say. “You don’t have the context. You’re not there.”

Even local online reporters face pressure to chase clicks. “Every journalist now has a global audience,” they explain. “That means the stories that matter most—ones that require digging, patience, and a deep knowledge of the community—often get ignored.”

The takeaway: local newspapers may seem like an old-fashioned idea, but they play a key role in how communities function. And when they vanish, the costs go beyond the news.

 

There is no God Tier video model — from downes.ca by Stephen Downes

From DSC:
Stephen has some solid reflections and asks some excellent questions in this posting, including:

The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?


A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido
Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.

The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator. This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.


Claude for Life Sciences — from anthropic.com

Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.

We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.

Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.

To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.


AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero
Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers

Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:

  • Drafting and tone calibration:
  • Language scaffolding:
  • Structure and templates:
  • Summarise and prioritise:
  • Accessibility by default:
  • Idea generation for pedagogy:
  • Translation and cultural mediation:

Beyond learning design: supporting pedagogical innovation in response to AI — from timeshighereducation.com by Charlotte von Essen
To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning


The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
A keynote at the AI and the Liberal Arts Symposium Conference

This past weekend, I had the honor of being the keynote speaker at a really fantastic conference, the AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before in my interview with Lori Looney. It was an incredible conference, thoughtfully composed, with a lot of things to chew on and think about.

It was also a brand-new talk, in a slightly different context from many of my other talks and workshops. It was something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.

It was a real challenge, and one I’ve been working on, off and on, for months, trying to figure out the right balance. It’s a piece I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).

 

“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Highlight

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further, design their own GPTs or fine-tuned models trained on course content and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.
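For professors with those more technical skills, the role reversal described above largely comes down to a single system prompt. Here is a minimal sketch in Python; the prompt wording, function name, and the model shown in the commented-out call are my own illustrative assumptions, not anything specified in the article.

```python
# A minimal sketch of a "Socratic partner" setup: the model is cast as
# the questioner, and the student's draft argument is what gets probed.
def socratic_messages(topic: str, student_answer: str) -> list[dict]:
    """Build a chat history that casts the model as questioner, not answerer."""
    system = (
        "You are a Socratic tutor. Never give the answer outright. "
        f"Question the student about {topic}, simulate cross-examination, "
        "and push back on weak or unsupported arguments."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": student_answer},
    ]

# The messages can then be sent through any chat-completion client, e.g.:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=socratic_messages("contract law", draft_argument),
# )
```

The same message structure works with a custom GPT or a fine-tuned model trained on course content; only the client call changes.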


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of its professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation helps staff to feel not only more satisfied with their current role, but also guides them on the right path to progress in their individual careers.

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning) AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model — TeachLM — built to address this gap.

In this week’s blog post, I take a deep dive into what the research found and share the six key findings — including reflections on how well TeachLM performs at instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try and establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

Stanford Law Unveils liftlab, a Groundbreaking AI Initiative Focused on the Legal Profession’s Future — from law.stanford.edu

September 15, 2025 — Stanford, CA — Stanford Law School today announced the launch of the Legal Innovation through Frontier Technology Lab, or liftlab, to explore how artificial intelligence can reshape legal services—not just to make them faster and cheaper, but better and more widely accessible.

Led by Professor Julian Nyarko and Executive Director Megan Ma, liftlab is among the first academic efforts in legal AI to unite research, prototyping, and real-time collaboration with industry. While much of AI innovation in law has so far focused on streamlining routine tasks, liftlab is taking a broader and more ambitious approach. The goal is to tap AI’s potential to fundamentally change the way legal work serves society.


The divergence of law firms from lawyers — from jordanfurlong.substack.com by Jordan Furlong
LLMs’ absorption of legal task performance will drive law firms towards commoditized service hubs while raising lawyers to unique callings as trustworthy legal guides — so long as we do this right.

Generative AI is going to weaken and potentially dissolve that relationship. Law firms will become capable of generating output that can be sold to clients with no lawyer involvement at all.

Right now, it’s possible for an ordinary person to obtain from an LLM like ChatGPT-5 the performance of a legal task — the provision of legal analysis, the production of a legal instrument, the delivery of legal advice — that previously could only be acquired from a human lawyer.

I’m not saying a person should do that. The LLM’s output might be effective and reliable, or it might prove disastrously off-base. But many people are already using LLMs in this way, and in the absence of other accessible options for legal assistance, they will continue to do so.


Why legal professionals need purpose-built agentic AI — from legal.thomsonreuters.com by Frank Schilder with Thomson Reuters Labs

Highlights

  • Professional-grade agentic AI systems are architecturally distinct from consumer chatbots, utilizing domain-specific data and robust verification mechanisms to deliver the high accuracy and reliability essential for legal work, whereas consumer tools prioritize conversational flow using unvetted web data.
  • True agentic AI for legal professionals offers transparent, multi-agent workflows, integrates with authoritative legal databases for verification, and applies domain-specific reasoning to understand legal nuances, unlike traditional chatbots that lack this complexity and autonomy.
  • When evaluating legal AI, professionals should avoid solutions that lack workflow transparency, offer no human checkpoints for oversight, and cannot integrate with professional databases, ensuring the chosen tool enhances, rather than replaces, expert judgment.

How I Left Corporate Law to Become a Legal Tech Entrepreneur — from news.bloomberglaw.com by Adam Nguyen; behind a paywall

If you’re a lawyer wondering whether to take the leap into entrepreneurship, I understand the apprehension that comes with leaving a predictable path. Embracing the fear, uncertainty, challenges, and constant evolution integral to an entrepreneur’s journey has been worth it for me.


Lawyering In The Age Of AI: Why Artificial Intelligence Might Make Lawyers More Human — from abovethelaw.com by Lisa Lang and Joshua Horenstein
AI could rehumanize the legal profession.

AI is already adept at doing what law school trained us to do — identifying risks, spotting issues, and referencing precedent. What it’s not good at is nuance, trust, or judgment — skills that define great lawyering.

When AI handles some of the drudgery — like contract clause spotting and formatting — it gives us something precious back: time. That time forces lawyers to stop hiding behind legalese and impractical analysis. It allows — and even demands — that we communicate like leaders.

Imagine walking into a business meeting and, instead of delivering a 20-page memo, offering a single slide with a recommendation tied directly to company goals. That’s not just good lawyering; that’s leadership. And AI may be the catalyst that gets us there.

AI changes the game. When generative tools can translate clauses into plain English, the old value proposition of complexity begins to crumble. The playing field shifts — from who can analyze the most thoroughly to who can communicate the most clearly.

That’s not a threat. It’s an opportunity — one for lawyers to become better partners to the business by focusing on what matters most: sound judgment delivered in plain language.


 

AI firm Anthropic reaches landmark $1.5B copyright deal with book authors — from washingtonpost.com by Will Oremus; this is a gifted article
The authors hailed the settlement as a win for human creators after they alleged the company downloaded millions of books without permission.

Anthropic, the artificial intelligence company behind the popular chatbot Claude, will pay $1.5 billion to settle a class-action lawsuit brought by book publishers and authors, according to documents filed in federal court Friday.

The settlement allows Anthropic to avoid going to trial over claims that it violated copyrights by downloading millions of books without permission and storing digital copies of them. The company will not admit wrongdoing.

 

I Teach Creative Writing. This Is What A.I. Is Doing to Students. — from nytimes.com by Meghan O’Rourke; this is a gifted article.

We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for.

From DSC:
I almost feel like Meghan should write the words “this week” or “this month” after the above sentence. Whew! Things are moving fast.

For example, we’re now starting to see more agents hitting the scene — software that can DO things. But that can open up a can of worms too. 

Students know the ground has shifted — and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one.

From DSC:
Universities and colleges have little choice but to integrate AI into their programs and offerings. There’s enough pressure on institutions of traditional higher education to prove their worth/value as it is. Students and their families want a solid ROI. Students know that they are going to need AI-related skills (see the link immediately below for one example), or they will be left out of the competitive job search process.

A relevant resource here:

 

Teach business students to write like executives — from timeshighereducation.com by José Ignacio Sordo Galarza
Many business students struggle to communicate with impact. Teach them to pitch ideas on a single page to build clarity, confidence and work-ready communication skills

Many undergraduate business students transition into the workforce equipped with communication habits that, while effective in academic settings, prove ineffective in professional environments. At university, students are trained to write for professors, not executives. This becomes problematic in the workplace where lengthy reports and academic jargon often obscure rather than clarify intent. Employers seek ideas they can absorb in seconds. This is where the one-pager – a single-page, high-impact document that helps students develop clarity of thought, concise expression and strategic communication – proves effective.


Also from Times Higher Education, see:


Is the dissertation dead? If so, what are the alternatives? — from timeshighereducation.com by Rushana Khusainova, Sarah Sholl, & Patrick Harte
Dissertation alternatives, such as capstone projects and applied group-based projects, could better prepare graduates for their future careers. Discover what these might look like

The traditional dissertation, a longstanding pillar of higher education, is facing increasing scrutiny. Concerns about its relevance to contemporary career paths, limitations in fostering practical skills and the changing nature of knowledge production in the GenAI age have fuelled discussions about its continued efficacy. So, is the dissertation dead?

The dissertation is facing a number of challenges. It can be perceived as having little relevance to career aspirations in increasingly competitive job markets. According to The Future of Jobs Report 2025 by the World Economic Forum, employers demand and indeed prioritise skills such as collaborative problem-solving in diverse and complex contexts, which a dissertation might not demonstrate.

 

 

On blogging (again) — from/by Martin Weller

I also pondered what functions blogging has provided for me over the years.

  • Continuity – as an individual you persist across multiple organisations, roles and jobs. Although I stayed in one institution, I had many roles and the blog wasn’t associated with one specific project. Now I have left it continues.
  • Holistic – you can blog about one topic, but over time I think some personality will creep in. You are not just one thing, you have a personal life, tastes, interests etc which will all feed into what you do. A blog allows this more rounded representation.
  • Experimentation – there is relatively low cost and risk for much of it (this may not be the case for many people online, we need to acknowledge), so you can try things, and if they don’t work, so what? Also you can try formats that conventional outlets might not be appropriate for.
  • Development – the blog has been both an intentional and unintentional vehicle for working up ideas, documenting the process and getting feedback, which have led to more substantial outputs, such as books, project proposals and papers. Most importantly though it has been the means through which I have continually developed writing.
  • Connecting – particularly in those halcyon early days, it was a good way of finding others, working on ideas together, sharing something of yourself. A lot of my career related personal friendships have resulted from blogging.
  • Publicity – I became at one point (the OU crisis of 2018) something of a public voice of the OU, and have often used the blog for projects such as GO-GN.

That’s not a bad return for a lil’ ol’ blog. I couldn’t say the same for academic journals.

 


Tech check: Innovation in motion: How AI is rewiring L&D workflows — from chieflearningofficer.com by Gabrielle Pike
AI isn’t here to replace us. It’s here to level us up.

For today’s chief learning officer, the days of just rolling out compliance training are long gone. In 2025, learning and development leaders are architects of innovation, crafting ecosystems that are agile, automated and AI-infused. This quarter’s Tech Check invites us to pause, assess and get strategic about where tech is taking us. Because the goal isn’t more tools—it’s smarter, more human learning systems that scale with the business.

Sections include:

  • The state of AI in L&D: Hype vs. reality
  • AI in design: From static content to dynamic experiences
  • AI in development: Redefining production workflows
  • Strategic questions CLOs should be asking
  • Future forward: What’s next?
  • Closing thought

American Federation of Teachers (AFT) to Launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic and United Federation of Teachers — from aft.org

NEW YORK – The AFT, alongside the United Federation of Teachers and lead partner Microsoft Corp., founding partner OpenAI, and Anthropic, announced the launch of the National Academy for AI Instruction today. The groundbreaking $23 million education initiative will provide access to free AI training and curriculum for all 1.8 million members of the AFT, starting with K-12 educators. It will be based at a state-of-the-art bricks-and-mortar Manhattan facility designed to transform how artificial intelligence is taught and integrated into classrooms across the United States.

The academy will help address the gap in structured, accessible AI training and provide a national model for AI-integrated curriculum and teaching that puts educators in the driver’s seat.


Students Are Anxious about the Future with A.I. Their Parents Are, Too. — from educationnext.org by Michael B. Horn
The fast-growing technology is pushing families to rethink the value of college

In an era when the college-going rate of high school graduates has dropped from an all-time high of 70 percent in 2016 to roughly 62 percent now, AI seems to be heightening the anxieties about the value of college.

According to the survey, two-thirds of parents say AI is impacting their view of the value of college. Thirty-seven percent of parents indicate they are now scrutinizing college’s “career-placement outcomes”; 36 percent say they are looking at a college’s “AI-skills curriculum,” while 35 percent respond that a “human-skills emphasis” is important to them.

This echoes what I increasingly hear from college leadership: Parents and students demand to see a difference between what they are getting from a college and what they could be “learning from AI.”


This next item, via LinkedIn, comes courtesy of Ray Schroeder:



How to Prepare Students for a Fast-Moving (AI) World — from rdene915.com by Dr. Rachelle Dené Poth

Preparing for a Future-Ready Classroom
Here are the core components I focus on to prepare students:

1. Unleash Creativity and Problem-Solving.
2. Weave in AI and Computational Thinking.
3. Cultivate Resilience and Adaptability.


AI Is Reshaping Learning Roles—Here’s How to Future-Proof Your Team — from onlinelearningconsortium.org by Jennifer Mathes, Ph.D., CEO, Online Learning Consortium; via Robert Gibson on LinkedIn

Culture matters here. Organizations that foster psychological safety—where experimentation is welcomed and mistakes are treated as learning—are making the most progress. When leaders model curiosity, share what they’re trying, and invite open dialogue, teams follow suit. Small tests become shared wins. Shared wins build momentum.

Career development must be part of this equation. As roles evolve, people will need pathways forward. Some will shift into new specialties. Others may leave familiar roles for entirely new ones. Making space for that evolution—through upskilling, mobility, and mentorship—shows your people that you’re not just investing in AI, you’re investing in them.

And above all, people need transparency. Teams don’t expect perfection. But they do need clarity. They need to understand what’s changing, why it matters, and how they’ll be supported through it. That kind of trust-building communication is the foundation for any successful change.

These shifts may play out differently across sectors—but the core leadership questions will likely be similar.

AI marks a turning point—not just for technology, but for how we prepare our people to lead through disruption and shape the future of learning.




 
 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight, but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
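
The pattern is easy to picture in code. Below is a minimal, hypothetical sketch of that “doesn’t move on too quickly” behaviour: a scripted question sequence that rephrases and stays put when an answer is too thin. The actual PlayLab bot is LLM-driven; the word-count gate here is just a stand-in for judging substance.

```python
# Hypothetical sketch of the QuoteWeaver pattern described above.
# The real PlayLab bot uses an LLM; a simple word-count gate stands
# in here for judging whether an answer has enough substance.

PROMPTS = [
    "What made this quote stand out to you?",
    "How would you explain it in your own words?",
    "What assumptions or values does the author seem to hold?",
    "How does this quote deepen your understanding of your topic?",
]

def next_prompt(step: int, answer: str, min_words: int = 8):
    """Return (new_step, prompt). Rephrase and stay on the current
    question if the answer is shallow; otherwise advance."""
    if len(answer.split()) < min_words:
        return step, "Say a bit more: " + PROMPTS[step]
    step += 1
    if step < len(PROMPTS):
        return step, PROMPTS[step]
    return step, "Now weave the quote into your own paragraph."
```

A shallow reply like “it was good” keeps the student on the same question with a rephrased nudge; a fuller reflection moves them to the next one.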


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
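
In spirit, the loop is simple: generate your own training data from new input, then update yourself with it. Here is a toy sketch of that idea only; it assumes nothing about MIT’s actual code. A keyword counter stands in for the LLM and a dictionary merge stands in for a weight update.

```python
# Toy illustration of the SEAL idea described above: the model turns
# new input into its own training data, then updates itself with it.
# Purely hypothetical -- a keyword counter stands in for an LLM and a
# dict merge stands in for a parameter update.

def generate_self_edit(passage: str) -> list[str]:
    """Stand-in for the model writing its own synthetic training
    data: distill the passage into candidate 'facts' (long words)."""
    return [w.lower().strip(".,") for w in passage.split() if len(w) > 3]

def apply_update(model: dict, self_edit: list[str]) -> dict:
    """Stand-in for a weight update: absorb the generated data."""
    updated = dict(model)
    for fact in self_edit:
        updated[fact] = updated.get(fact, 0) + 1
    return updated

def seal_step(model: dict, passage: str) -> dict:
    """One self-adapting step: generate training data, then update."""
    return apply_update(model, generate_self_edit(passage))

model = seal_step({}, "SEAL lets language models generate synthetic training data.")
print("synthetic" in model)  # the new information was absorbed
```

The interesting part in the real system is that the model learns *how* to write useful self-edits; in this sketch that step is hard-coded.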


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 

Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.

Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.

  • AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
  • ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis Netflix’s 100 million (10 years), Instagram (2.5 years) and TikTok (nine months).
  • ChatGPT hit 365 billion annual searches within two years (by 2024), a milestone that took Google 11 years to reach (2009), making ChatGPT 5.5x faster.

Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.

The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.

 

Also see Meeker’s actual report at:

Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey



The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.

The details:

  • Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
  • The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
  • The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house expertise.
  • Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.

Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.

 
© 2025 | Daniel Christian