News deserts hit new high and 50 million have limited access to local news, study finds — from medill.northwestern.edu
Federal funding cuts to public broadcasting may accelerate local news crisis

EVANSTON, ILL. — The number of local news deserts in the U.S. jumped to record levels this year as newspaper closures continued unabated, and funding cuts to public radio could worsen the problem in coming months, according to the Medill State of Local News Report 2025 released today.

While the local news crisis deepened overall, Medill researchers found cause for optimism — more than 300 local news startups have launched over the past five years, 80% of which were digital-only outlets.

For the fourth consecutive year, the Medill Local News Initiative at Northwestern University’s Medill School of Journalism, Media, Integrated Marketing Communications conducted a months-long, county-by-county survey of local news organizations to identify trends in the rapidly morphing local media landscape. Researchers looked at local newspapers, digital-only sites, ethnic media and public broadcasters.


               


How Local Newsrooms Are Rethinking Political Coverage — from adigaskell.org

For decades, election reporting in the U.S. has leaned heavily on the “horse race”—who’s up, who’s down, and who’s raising the most money. But new research from the University of Kansas suggests that this approach is starting to shift, thanks to a national training program aimed at helping journalists better engage with their communities.

The program, called Democracy SOS, encourages reporters to move beyond headline polls and campaign drama. Instead, it asks them to focus on the issues people care about and explain how those issues are being tackled. In other words: less spectacle, more substance.


Addendum on 11/13/25:

Why Losing Local Newspapers Costs More Than We Think — from adigaskell.org

So why can’t digital journalism fill the gap?
The researchers argue that online media isn’t a true replacement for local reporting. “If you’re in New York writing about San Francisco, you just don’t know the area,” they say. “You don’t have the context. You’re not there.”

Even local online reporters face pressure to chase clicks. “Every journalist now has a global audience,” they explain. “That means the stories that matter most—ones that require digging, patience, and a deep knowledge of the community—often get ignored.”

The takeaway: local newspapers may seem like an old-fashioned idea, but they play a key role in how communities function. And when they vanish, the costs go beyond the news.

 

There is no God Tier video model — from downes.ca by Stephen Downes

From DSC:
Stephen has some solid reflections and asks some excellent questions in this posting, including:

The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?


A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido
Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.

The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator. This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.


Claude for Life Sciences — from anthropic.com

Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.

We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.

Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.

To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.


AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero
Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers

Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:

  • Drafting and tone calibration:
  • Language scaffolding:
  • Structure and templates:
  • Summarise and prioritise:
  • Accessibility by default:
  • Idea generation for pedagogy:
  • Translation and cultural mediation:

Beyond learning design: supporting pedagogical innovation in response to AI — from timeshighereducation.com by Charlotte von Essen
To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning


The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
A keynote at the AI and the Liberal Arts Symposium Conference

This past weekend, I had the honor to be the keynote speaker at a really fantastic conference, AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before with my interview with Lori Looney. It was an incredible conference, thoughtfully composed with a lot of things to chew on and think about.

It was also an entirely brand new talk in a slightly different context from many of my other talks and workshops. It was something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.

It was a real challenge and one I’ve been working on and off for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).

 

“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.
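The Delegate / Co-Create / Facilitate triage above can be sketched as a toy classifier. This is a minimal illustration only: the attribute names and decision logic are my own assumptions, not part of Hardman's 90/10 model.

```python
# Toy sketch of the 90/10 triage: classify a "job to be done" into one of
# the three buckets. The two boolean attributes are hypothetical stand-ins
# for whatever task analysis an L&D team would actually run.

def triage_task(automatable: bool, needs_human_judgment: bool) -> str:
    """Classify a job-to-be-done into one of the three 90/10 buckets."""
    if automatable and not needs_human_judgment:
        return "Delegate"      # a "dead skill": offload entirely to AI
    if needs_human_judgment and not automatable:
        return "Facilitate"    # the 10%: high-touch, human-led learning
    return "Co-Create"         # the 90%: AI-augmented performance support
```

In practice the inputs would come from a skills audit rather than two booleans, but the sketch shows the system-not-event framing: every task gets routed somewhere, continuously, rather than queued for a one-off training event.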



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Table of Contents

Highlight

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further, design their own GPTs or fine-tuned models trained on course content and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of its professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation helps staff to feel not only more satisfied with their current role, but also guides them on the right path to progress in their individual careers.

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning) AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model — TeachLM — built to address this gap.

In this week’s blog post, I dive deep into what the research found and share the six key findings — including reflections on how well TeachLM performs on instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try and establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

Stanford Law Unveils liftlab, a Groundbreaking AI Initiative Focused on the Legal Profession’s Future — from law.stanford.edu

September 15, 2025 — Stanford, CA — Stanford Law School today announced the launch of the Legal Innovation through Frontier Technology Lab, or liftlab, to explore how artificial intelligence can reshape legal services—not just to make them faster and cheaper, but better and more widely accessible.

Led by Professor Julian Nyarko and Executive Director Megan Ma, liftlab is among the first academic efforts in legal AI to unite research, prototyping, and real-time collaboration with industry. While much of AI innovation in law has so far focused on streamlining routine tasks, liftlab is taking a broader and more ambitious approach. The goal is to tap AI’s potential to fundamentally change the way legal work serves society.


The divergence of law firms from lawyers — from jordanfurlong.substack.com by Jordan Furlong
LLMs’ absorption of legal task performance will drive law firms towards commoditized service hubs while raising lawyers to unique callings as trustworthy legal guides — so long as we do this right.

Generative AI is going to weaken and potentially dissolve that relationship. Law firms will become capable of generating output that can be sold to clients with no lawyer involvement at all.

Right now, it’s possible for an ordinary person to obtain from an LLM like ChatGPT-5 the performance of a legal task — the provision of legal analysis, the production of a legal instrument, the delivery of legal advice — that previously could only be acquired from a human lawyer.

I’m not saying a person should do that. The LLM’s output might be effective and reliable, or it might prove disastrously off-base. But many people are already using LLMs in this way, and in the absence of other accessible options for legal assistance, they will continue to do so.


Why legal professionals need purpose-built agentic AI — from legal.thomsonreuters.com by Frank Schilder with Thomson Reuters Labs

Highlights

  • Professional-grade agentic AI systems are architecturally distinct from consumer chatbots, utilizing domain-specific data and robust verification mechanisms to deliver the high accuracy and reliability essential for legal work, whereas consumer tools prioritize conversational flow using unvetted web data.
  • True agentic AI for legal professionals offers transparent, multi-agent workflows, integrates with authoritative legal databases for verification, and applies domain-specific reasoning to understand legal nuances, unlike traditional chatbots that lack this complexity and autonomy.
  • When evaluating legal AI, professionals should avoid solutions that lack workflow transparency, offer no human checkpoints for oversight, and cannot integrate with professional databases, ensuring the chosen tool enhances, rather than replaces, expert judgment.

How I Left Corporate Law to Become a Legal Tech Entrepreneur — from news.bloomberglaw.com by Adam Nguyen; behind a paywall

If you’re a lawyer wondering whether to take the leap into entrepreneurship, I understand the apprehension that comes with leaving a predictable path. Embracing the fear, uncertainty, challenges, and constant evolution integral to an entrepreneur’s journey has been worth it for me.


Lawyering In The Age Of AI: Why Artificial Intelligence Might Make Lawyers More Human — from abovethelaw.com by Lisa Lang and Joshua Horenstein
AI could rehumanize the legal profession.

AI is already adept at doing what law school trained us to do — identifying risks, spotting issues, and referencing precedent. What it’s not good at is nuance, trust, or judgment — skills that define great lawyering.

When AI handles some of the drudgery — like contract clause spotting and formatting — it gives us something precious back: time. That time forces lawyers to stop hiding behind legalese and impractical analysis. It allows — and even demands — that we communicate like leaders.

Imagine walking into a business meeting and, instead of delivering a 20-page memo, offering a single slide with a recommendation tied directly to company goals. That’s not just good lawyering; that’s leadership. And AI may be the catalyst that gets us there.

AI changes the game. When generative tools can translate clauses into plain English, the old value proposition of complexity begins to crumble. The playing field shifts — from who can analyze the most thoroughly to who can communicate the most clearly.

That’s not a threat. It’s an opportunity — one for lawyers to become better partners to the business by focusing on what matters most: sound judgment delivered in plain language.


 

AI firm Anthropic reaches landmark $1.5B copyright deal with book authors — from washingtonpost.com by Will Oremus; this is a gifted article
The authors hailed the settlement as a win for human creators after they alleged the company downloaded millions of books without permission.

Anthropic, the artificial intelligence company behind the popular chatbot Claude, will pay $1.5 billion to settle a class-action lawsuit brought by book publishers and authors, according to documents filed in federal court Friday.

The settlement allows Anthropic to avoid going to trial over claims that it violated copyrights by downloading millions of books without permission and storing digital copies of them. The company will not admit wrongdoing.

 

I Teach Creative Writing. This Is What A.I. Is Doing to Students. — from nytimes.com by Meghan O’Rourke; this is a gifted article.

We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for.

From DSC:
I almost feel like Meghan should write the words “this week” or “this month” after the above sentence. Whew! Things are moving fast.

For example, we’re now starting to see more agents hitting the scene — software that can DO things. But that can open up a can of worms too. 

Students know the ground has shifted — and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one.

From DSC:
Universities and colleges have little choice but to integrate AI into their programs and offerings. There’s enough pressure on institutions of traditional higher education to prove their worth/value. Students and their families want solid ROIs. Students know that they are going to need AI-related skills (see the link immediately below for example), or they are going to be left out of the competitive job search process.

A relevant resource here:

 

Teach business students to write like executives — from timeshighereducation.com by José Ignacio Sordo Galarza
Many business students struggle to communicate with impact. Teach them to pitch ideas on a single page to build clarity, confidence and work-ready communication skills

Many undergraduate business students transition into the workforce equipped with communication habits that, while effective in academic settings, prove ineffective in professional environments. At university, students are trained to write for professors, not executives. This becomes problematic in the workplace where lengthy reports and academic jargon often obscure rather than clarify intent. Employers seek ideas they can absorb in seconds. This is where the one-pager – a single-page, high-impact document that helps students develop clarity of thought, concise expression and strategic communication – proves effective.


Also from Times Higher Education, see:


Is the dissertation dead? If so, what are the alternatives? — from timeshighereducation.com by Rushana Khusainova, Sarah Sholl, & Patrick Harte
Dissertation alternatives, such as capstone projects and applied group-based projects, could better prepare graduates for their future careers. Discover what these might look like

The traditional dissertation, a longstanding pillar of higher education, is facing increasing scrutiny. Concerns about its relevance to contemporary career paths, limitations in fostering practical skills and the changing nature of knowledge production in the GenAI age have fuelled discussions about its continued efficacy. So, is the dissertation dead?

The dissertation is facing a number of challenges. It can be perceived as having little relevance to career aspirations in increasingly competitive job markets. According to The Future of Jobs Report 2025 by the World Economic Forum, employers demand and indeed prioritise skills such as collaborative problem-solving in diverse and complex contexts, which a dissertation might not demonstrate.

 

 

On blogging (again) — by Martin Weller

I also pondered what functions blogging has provided for me over the years.

  • Continuity – as an individual you persist across multiple organisations, roles and jobs. Although I stayed in one institution, I had many roles and the blog wasn’t associated with one specific project. Now I have left it continues.
  • Holistic – you can blog about one topic, but over time I think some personality will creep in. You are not just one thing, you have a personal life, tastes, interests etc which will all feed into what you do. A blog allows this more rounded representation.
  • Experimentation – there is relatively low cost and risk for much of it (this may not be the case for many people online, we need to acknowledge), so you can try things, and if they don’t work, so what? Also you can try formats that conventional outlets might not be appropriate for.
  • Development – the blog has been both an intentional and unintentional vehicle for working up ideas, documenting the process and getting feedback, which have led to more substantial outputs, such as books, project proposals and papers. Most importantly though it has been the means through which I have continually developed writing.
  • Connecting – particularly in those halcyon early days, it was a good way of finding others, working on ideas together, sharing something of yourself. A lot of my career related personal friendships have resulted from blogging.
  • Publicity – I became at one point (the OU crisis of 2018) something of a public voice of the OU, and have often used the blog for projects such as GO-GN

That’s not a bad return for a lil’ ol’ blog. I couldn’t say the same for academic journals.

 


Tech check: Innovation in motion: How AI is rewiring L&D workflows — from chieflearningofficer.com by Gabrielle Pike
AI isn’t here to replace us. It’s here to level us up.

For today’s chief learning officer, the days of just rolling out compliance training are long gone. In 2025, learning and development leaders are architects of innovation, crafting ecosystems that are agile, automated and AI-infused. This quarter’s Tech Check invites us to pause, assess and get strategic about where tech is taking us. Because the goal isn’t more tools—it’s smarter, more human learning systems that scale with the business.

Sections include:

  • The state of AI in L&D: Hype vs. reality
  • AI in design: From static content to dynamic experiences
  • AI in development: Redefining production workflows
  • Strategic questions CLOs should be asking
  • Future forward: What’s next?
  • Closing thought

American Federation of Teachers (AFT) to Launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic and United Federation of Teachers — from aft.org

NEW YORK – The AFT, alongside the United Federation of Teachers and lead partner Microsoft Corp., founding partner OpenAI, and Anthropic, announced the launch of the National Academy for AI Instruction today. The groundbreaking $23 million education initiative will provide access to free AI training and curriculum for all 1.8 million members of the AFT, starting with K-12 educators. It will be based at a state-of-the-art bricks-and-mortar Manhattan facility designed to transform how artificial intelligence is taught and integrated into classrooms across the United States.

The academy will help address the gap in structured, accessible AI training and provide a national model for AI-integrated curriculum and teaching that puts educators in the driver’s seat.


Students Are Anxious about the Future with A.I. Their Parents Are, Too. — from educationnext.org by Michael B. Horn
The fast-growing technology is pushing families to rethink the value of college

In an era when the college-going rate of high school graduates has dropped from an all-time high of 70 percent in 2016 to roughly 62 percent now, AI seems to be heightening the anxieties about the value of college.

According to the survey, two-thirds of parents say AI is impacting their view of the value of college. Thirty-seven percent of parents indicate they are now scrutinizing college’s “career-placement outcomes”; 36 percent say they are looking at a college’s “AI-skills curriculum,” while 35 percent respond that a “human-skills emphasis” is important to them.

This echoes what I increasingly hear from college leadership: Parents and students demand to see a difference between what they are getting from a college and what they could be “learning from AI.”


This next item on LinkedIn is compliments of Ray Schroeder:



How to Prepare Students for a Fast-Moving (AI) World — from rdene915.com by Dr. Rachelle Dené Poth

Preparing for a Future-Ready Classroom
Here are the core components I focus on to prepare students:

1. Unleash Creativity and Problem-Solving.
2. Weave in AI and Computational Thinking.
3. Cultivate Resilience and Adaptability.


AI Is Reshaping Learning Roles—Here’s How to Future-Proof Your Team — from onlinelearningconsortium.org by Jennifer Mathes, Ph.D., CEO, Online Learning Consortium; via Robert Gibson on LinkedIn

Culture matters here. Organizations that foster psychological safety—where experimentation is welcomed and mistakes are treated as learning—are making the most progress. When leaders model curiosity, share what they’re trying, and invite open dialogue, teams follow suit. Small tests become shared wins. Shared wins build momentum.

Career development must be part of this equation. As roles evolve, people will need pathways forward. Some will shift into new specialties. Others may leave familiar roles for entirely new ones. Making space for that evolution—through upskilling, mobility, and mentorship—shows your people that you’re not just investing in AI, you’re investing in them.

And above all, people need transparency. Teams don’t expect perfection. But they do need clarity. They need to understand what’s changing, why it matters, and how they’ll be supported through it. That kind of trust-building communication is the foundation for any successful change.

These shifts may play out differently across sectors—but the core leadership questions will likely be similar.

AI marks a turning point—not just for technology, but for how we prepare our people to lead through disruption and shape the future of learning.




 
 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight, but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

  • What made this quote stand out to you?
  • How would you explain it in your own words?
  • What assumptions or values does the author seem to hold?
  • How does this quote deepen your understanding of your topic?

It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
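For readers curious about the mechanics, the questioning pattern described above can be sketched as a short scripted loop. This is a generic illustration only, not the actual QuoteWeaver/PlayLab implementation; the function names are hypothetical, and only the four prompts come from the article:

```python
# Minimal sketch of a Socratic quote-workshop loop.
# Not the actual PlayLab bot -- just the question sequence described above,
# with a simple "nudge deeper" rule standing in for the bot's judgment.

SOCRATIC_PROMPTS = [
    "What made this quote stand out to you?",
    "How would you explain it in your own words?",
    "What assumptions or values does the author seem to hold?",
    "How does this quote deepen your understanding of your topic?",
]

def workshop(quote, answer_fn):
    """Walk a student through each prompt; re-ask if the answer is too thin."""
    transcript = [("quote", quote)]
    for prompt in SOCRATIC_PROMPTS:
        answer = answer_fn(prompt)
        while len(answer.split()) < 5:  # nudge the student a layer deeper
            answer = answer_fn(f"Say more: {prompt}")
        transcript.append((prompt, answer))
    return transcript

demo = workshop("Example quote.", lambda p: "It connects my topic to a larger debate.")
print(len(demo))  # 5: the quote plus four prompt/answer pairs
```

The point of the design, as the article notes, is pacing: the loop refuses to advance until the student has said something substantive.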


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 

Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.

Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.

  • AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
  • ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis Netflix’s 100 million (10 years), Instagram (2.5 years) and TikTok (nine months).
  • ChatGPT hit 365 billion annual searches within two years (by 2024), a milestone that took Google 11 years to reach (by 2009), making ChatGPT 5.5x faster than Google.

Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.

The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.

 

Also see Meeker’s actual report at:

Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey



The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.

The details:

  • Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
  • The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
  • The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or specialized skills.
  • Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.

Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.

 

Opinions | This Baltimore program shows how to fight generational poverty — from washingtonpost.com by Leana S. Wen; this is a gifted article
How one grassroots organization is teaching young people leadership skills and giving them hope.

She recognized their desperation and felt called to return and use what she had learned to help them realize a different future. So she set up an organization, HeartSmiles, to do just that — one young person at a time.

Holifield’s experience is one that city officials and public health workers can learn from. If they want to disrupt the generational cycle of poverty, trauma and hopelessness that afflicts so many communities, a good place to focus their efforts is children.

How can communities overcome inertia and resignation? Holifield’s organization starts with two core interventions. The first is career and leadership development. Children as young as 8 go to the HeartSmiles center to participate in facilitated sessions on youth entrepreneurship, budgeting and conflict resolution. Those who want to explore certain career paths are matched with professionals in these fields.

The second part of her vision is youth-led mentorship, which involves pairing young people with those not much older than they are. 


Also relevant/see:

Lost boys, trapped men, and the role of lifers in prison education — from college-inside.beehiiv.com by Charlotte West

This week, we’re publishing Part 2 of a Q&A with Erik Maloney, a lifer in Arizona, and Kevin Wright, a criminal justice professor at Arizona State University. They co-authored Imprisoned Minds, a book about trauma and healing published in December 2024, over the course of seven years. Check out Part 1 of the Q&A.

West: The fact that you created your own curriculum to accompany the book makes me think about the role of lifers in creating educational opportunities in prisons. What do you see as the role of lifers in filling some of these gaps?

Maloney: I’ve said for years that lifers are so underutilized in prison. It’s all about punishment for what you’re in for, and [the prison system] overlooks us as a resource. We are people who, if allowed to be educated properly, can teach courses indefinitely while also being a role model for those with shorter sentences. This gives the lifer meaning and purpose to do good again. He serves as a mentor, whether he likes it or not, to [those] people coming into the prisons. When they see him doing well, it inspires others to want to do well.

But if it’s all about punishment, and a person has no meaning and no purpose in life, then all they have is hopelessness. With hopelessness comes despair, and with despair, you have rampant drug and alcohol abuse in prison, and violence stems from that.

 

It’s the end of work as we knew it
and I feel…

powerless to fight the technology that we pioneered
nostalgic for a world that moved on without us
after decades of paying our dues
for a payday that never came
…so yeah
not exactly fine.


The Gen X Career Meltdown — from nytimes.com by Steven Kurutz (DSC: This is a gifted article for you)
Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.

If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.

“I am having conversations every day with people whose careers are sort of over,” said Chris Wilcha, a 53-year-old film and TV director in Los Angeles.

Talk with people in their late 40s and 50s who once imagined they would be able to achieve great heights — or at least a solid career while flexing their creative muscles — and you are likely to hear about the photographer whose work dried up, the designer who can’t get hired or the magazine journalist who isn’t doing much of anything.

In the wake of the influencers comes another threat, artificial intelligence, which seems likely to replace many of the remaining Gen X copywriters, photographers and designers. By 2030, ad agencies in the United States will lose 32,000 jobs, or 7.5 percent of the industry’s work force, to the technology, according to the research firm Forrester.


From DSC:
This article reminds me of how tough it is to navigate change in our lives. For me, it was often due to the fact that I was working with technologies. Being a technologist can be difficult, especially as one gets older and faces age discrimination in a variety of industries. You need to pick the right technologies and the directions that will last (for me it was email, videoconferencing, the Internet, online-based education/training, discovering/implementing instructional technologies, and becoming a futurist).

For you younger folks out there — especially students within K-16 — aim to develop a perspective and a skillset that is all about adapting to change. You will likely need to reinvent yourself and/or pick up new skills over your working years. You are most assuredly required to be a lifelong learner now. That’s why I have been pushing for school systems to be more concerned with providing more choice and control to students — so that students actually like school and enjoy learning about new things.


 

 

Drive Continuous Learning: AI Integrates Work & Training — from learningguild.com by George Hanshaw

Imagine with me for a moment: Training is no longer confined to scheduled sessions in a classroom, an online module, or even a microlearning module you click to activate during your workflow. Imagine training being delivered because the system senses what you are doing and provides instructions and job aids without you having to take an action.

The rapid evolution of artificial intelligence (AI) and wearable technology has made it easier than ever to seamlessly integrate learning directly into the workflow. Smart glasses, earpieces, and other advanced devices are redefining how employees gain knowledge and skills by delivering microlearning moments precisely when and where they are needed.

AI plays a crucial role in this transformation by sensing the optimal moment to deliver the training through augmented reality (AR).



These Schools Are Banding Together to Make Better Use of AI in Education — from edsurge.com by Emily Tate Sullivan

Kennelly and Geraffo are part of a small team at their school in Denver, DSST: College View High School, that is participating in the School Teams AI Collaborative, a year-long pilot initiative in which more than 80 educators from 19 traditional public and charter schools across the country are experimenting with and evaluating AI-enabled instruction to improve teaching and learning.

The goal is for some of AI’s earliest adopters in education to band together, share ideas and eventually help lead the way on what they and their colleagues around the U.S. could do with the emerging technology.

“Pretty early on we thought it was going to be a massive failure,” says Kennelly of last semester’s project. “But it became a huge hit. Students loved it. They were like, ‘I ran to second period to build this thing.’”



Transactional vs. Conversational Visions of Generative AI in Teaching — from elmartinsen.substack.com by Eric Lars Martinsen
AI as a Printer, or AI as a Thought Partner

As writing instructors, we have a choice in how we frame AI for our students. I invite you to:

  1. Experiment with AI as a conversation partner yourself before introducing it to students
  2. Design assignments that leverage AI’s strengths as a thought partner rather than trying to “AI-proof” your existing assignments
  3. Explicitly teach students how to engage in productive dialogue with AI—how to ask good questions, challenge AI’s assumptions, and use it to refine rather than replace their thinking
  4. Share your experiences, both positive and negative, with colleagues to build our collective understanding of effective AI integration

 
© 2025 | Daniel Christian