Why Parents Aren’t Reading to Kids, and What It Means for Young Students — from the74million.org by Jessika Harkay
A recent study found less than half of children are read to daily. The consequences are serious for early learners who enter school unprepared.

For children who aren’t read to at home, the opportunity gap has widened: those young students enter school unprepared compared with peers who have been read to.

“The gap really begins very, very early on. I think we underestimate how large a gap we’re already seeing in kindergarten,” said Susan Neuman, professor of childhood and literacy education at New York University, adding she recently visited a New York City kindergarten classroom and saw some children who only knew two letters compared to others who were prepared to read phrases.

A 2019 Ohio State University study found a 5-year-old child who is read to daily would be exposed to nearly 300,000 more words than one who isn’t read to regularly.
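The scale of that figure is easy to sanity-check with a back-of-envelope calculation. The word counts below are illustrative assumptions, not numbers from the study:

```python
# Back-of-envelope check of the cumulative word-exposure gap.
# Assumed values (illustrative only): a typical picture book runs
# about 160 words, read once per day from birth through age 5.
WORDS_PER_BOOK = 160   # assumption, not from the study
BOOKS_PER_DAY = 1
DAYS = 365 * 5

exposure = WORDS_PER_BOOK * BOOKS_PER_DAY * DAYS
print(f"~{exposure:,} words over five years")  # ~292,000 words over five years
```

Even with conservative assumptions, a single daily book lands in the same ballpark as the study’s figure.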

 

Beyond ChatGPT: Why In-House Counsel Need Purpose Built AI (Cecilia Ziniti, CEO – GC AI) — from tlpodcast.com

This episode features a conversation with Cecilia Ziniti, Co-Founder and CEO of GC AI. Cecilia traces her career from the early days of the internet to founding an AI-driven legal platform for in-house counsel.

Cecilia shares her journey, starting as a paralegal at Yahoo in the early 2000s, working on nascent legal issues related to the internet. She discusses her time at Morrison & Foerster and her role at Amazon, where she was an early member of the Alexa team, gaining deep insight into AI’s potential before the rise of modern large language models (LLMs).

The core discussion centers on the creation of GC AI, a legal AI tool specifically designed for in-house counsel. Cecilia explains why general LLMs like ChatGPT are insufficient for professional legal work—lacking proper citation, context, and security/privilege protections. She highlights the app’s features, including enhanced document analysis (RAG implementation), a Word Add-in, and workflow-based playbooks to deliver accurate, client-forward legal analysis. The episode also touches on the current state of legal tech, the growing trend of bringing legal work in-house, and the potential for AI to shift the dynamics of the billable hour.
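GC AI’s internal design isn’t public, but the retrieval-augmented generation (RAG) pattern the episode mentions can be sketched in miniature. Here, keyword overlap stands in for the embedding search a real system would use, and all clause text and function names are hypothetical:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Real systems embed documents into a vector index; keyword overlap
# stands in here so the example stays self-contained.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a grounded prompt: the model answers only from cited context."""
    context = retrieve(query, documents)
    numbered = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
    return (f"Answer using only the sources below, citing [n]:\n"
            f"{numbered}\n\nQuestion: {query}")

clauses = [
    "The indemnification clause caps liability at twelve months of fees.",
    "The term of this agreement is two years with automatic renewal.",
    "Either party may terminate for material breach with 30 days notice.",
]
print(build_prompt("What is the liability cap in the indemnification clause?",
                   clauses))
```

Grounding answers in retrieved, citable passages is what distinguishes this approach from asking a general LLM to answer from memory — and is why citation quality comes up so often in legal AI discussions.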

 

Agents, robots, and us: Skill partnerships in the age of AI — from mckinsey.com by Lareina Yee, Anu Madgavkar, Sven Smit, Alexis Krivkovich, Michael Chui, María Jesús Ramírez, and Diego Castresana
AI is expanding the productivity frontier. Realizing its benefits requires new skills and rethinking how people work together with intelligent machines.

At a glance

  • Work in the future will be a partnership between people, agents, and robots—all powered by AI. …
  • Most human skills will endure, though they will be applied differently. …
  • Our new Skill Change Index shows which skills will be most and least exposed to automation in the next five years….
  • Demand for AI fluency—the ability to use and manage AI tools—has grown sevenfold in two years…
  • By 2030, about $2.9 trillion of economic value could be unlocked in the United States…




State of AI: December 2025 newsletter — from nathanbenaich.substack.com by Nathan Benaich
What you’ve got to know in AI from the last 4 weeks.

Welcome to the latest issue of the State of AI, an editorialized newsletter that covers the key developments in AI policy, research, industry, and start-ups over the last month.


 

4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design for Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out: along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools does and how it can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: Google is working on a new vision for textbooks that can be easily differentiated, building on the success of NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free in Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically as memory, providing valuable context on relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC :
This should be important as we look at learning-related applications for AI.


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 

Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage
As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.

Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.


DEEP DIVE: The AI user interface of the future = Voice — from theneurondaily.com by Grant Harvey
PLUS: Gemini 3.0 and Microsoft’s new voice features

Here’s the thing: voice is finally good enough to replace typing now. And I mean actually good enough, not “Siri, play Despacito” good enough.

To paraphrase Andrej Karpathy’s famous quote that “the hottest new programming language is English”: in this case, the hottest new user interface is talking.

The Great Convergence: Why Voice Is Having Its Moment
Three massive shifts just collided to make voice interfaces inevitable.

    1. First, speech recognition stopped being terrible. …
    2. Second, our devices got ears everywhere. …
    3. Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …

Introducing group chats in ChatGPT — from openai.com
Collaborate with others, and ChatGPT, in the same conversation.

Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.

Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.

Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.




 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about them, even some of the seasoned folks around GenAI and how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.
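For anyone more comfortable with the API side, the “custom instructions” boxes in these tools behave roughly like a standing system message prepended to every new conversation. A minimal sketch — the instruction text is invented for illustration, not one of the author’s actual instructions:

```python
# Custom instructions behave like a standing system message that is
# prepended to every new conversation. This sketch builds the message
# list an API call would send; the instruction text is invented.

CUSTOM_INSTRUCTIONS = (
    "Be concise. Cite sources when making factual claims. "
    "Ask a clarifying question before giving a long answer."
)

def new_conversation(user_message: str) -> list[dict]:
    """Start a conversation with the standing instructions applied."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_message},
    ]

messages = new_conversation("Summarize the key ideas behind UDL.")
print(messages[0]["role"])  # system
```

The same system-message idea carries across ChatGPT, Gemini, and Claude with minor differences in how each API names and positions it.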

 

I analyzed 180M jobs to see what jobs AI is actually replacing today — from bloomberry.com by Henley Wing Chiu; via Kim Isenberg

I analyzed nearly 180 million global job postings from January 2023 to October 2025, using data from Revealera, a provider of jobs data. While I acknowledge not all job postings result in a hire, and some are ‘ghost jobs’, since I was comparing the relative growth in job titles, this didn’t seem like a big issue to me.

I simply wanted to know which specific job titles declined or grew the most in 2025 compared to 2024, because those were likely to be the ones AI is impacting the most.
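The relative-growth comparison described above is simple to compute. The posting counts below are invented for illustration; they are not from the Revealera dataset:

```python
# Comparing relative growth in job-title posting counts, 2024 -> 2025.
# All counts are invented for illustration.
counts_2024 = {"data entry clerk": 1200, "ml engineer": 800, "copywriter": 950}
counts_2025 = {"data entry clerk": 700,  "ml engineer": 1400, "copywriter": 600}

def relative_growth(before: dict, after: dict) -> dict:
    """Percent change per job title between the two periods."""
    return {title: round(100 * (after[title] - before[title]) / before[title], 1)
            for title in before}

growth = relative_growth(counts_2024, counts_2025)
for title, pct in sorted(growth.items(), key=lambda kv: kv[1]):
    print(f"{title:18s} {pct:+.1f}%")
```

Because the metric is relative change rather than raw counts, ghost postings that appear in both years mostly cancel out — the intuition behind the author’s methodological note.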





 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.



[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from washingtonpost.com by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems there is growing concern among the very people who pushed hardest to release as much AI as possible. NOW they are worried. NOW they step back and see that there are many reasons to worry about how these technologies can be misused.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 

.


From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

“Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.”

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts such as the frameworks like SWOT analysis, the mechanics of comparing alternative viewpoints in a boardroom—those could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 

…the above posting links to:

Higher Ed Is Sleepwalking Toward Obsolescence — And AI Won’t Be the Cause, Just the Accelerant — from substack.com by Steven Mintz
AI Has Exposed Higher Ed’s Hollow Core — The University Must Reinvent Itself or Fade

It begins with a basic reversal of mindset: Stop treating AI as a threat to be policed. Start treating it as the accelerant that finally forces us to build the education we should have created decades ago.

A serious institutional response would demand — at minimum — six structural commitments:

  • Make high-intensity human learning the norm.  …
  • Put active learning at the center, not the margins.  …
  • Replace content transmission with a focus on process.  …
  • Mainstream high-impact practices — stop hoarding them for honors students.  …
  • Redesign assessment to make learning undeniable.  …

And above all: Instructional design can no longer be a private hobby.


Teaching with AI: From Prohibition to Partnership for Critical Thinking — from facultyfocus.com by Michael Kiener, PhD, CRC

How to Integrate AI Developmentally into Your Courses

  • Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
  • Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
  • Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
  • Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.

Now that you have a framework for conceptualizing AI in your courses, here are a few ideas on scaffolding AI so that students can practice using the technology and develop cognitive skills.




80 per cent of young people in the UK are using AI for their schoolwork — from aipioneers.org by Graham Attwell

What was encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them figure out what AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference – the issue of AI in education is fundamentally pedagogical and ethical, not just technological.


 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What:
Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.
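The “spaced, varied retrieval” idea above can be scheduled with a simple Leitner-box rule — a minimal sketch of the scheduling logic, not anything Atlas actually implements (the interval values are assumptions):

```python
# A minimal Leitner-box scheduler for spaced retrieval practice.
# A correct answer promotes a card to a less-frequent box; a miss
# sends it back to box 1. Interval lengths are illustrative.
INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

def review(card: dict, correct: bool) -> dict:
    """Update a card's box and next-review interval after one attempt."""
    box = min(card["box"] + 1, 5) if correct else 1
    return {"front": card["front"], "box": box,
            "next_review_in_days": INTERVAL_DAYS[box]}

card = {"front": "Name the products of the Krebs cycle", "box": 2}
card = review(card, correct=True)
print(card["box"], card["next_review_in_days"])  # 3 7
```

The spacing effect this implements — reviewing just before you would forget — is the same principle behind the medium-to-large effects cited for practice testing.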




From DSC:
A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. That said, someone is going to have to make sure the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform will need that kind of ongoing upkeep to stay useful.


 

At the most recent NVIDIA GTC conference, held in Washington, D.C. in October 2025, CEO Jensen Huang announced major developments emphasizing the use of AI to “reindustrialize America”. This included new partnerships, expansion of the Blackwell architecture, and advancements in AI factories for robotics and science. The spring 2024 GTC conference, meanwhile, was headlined by the launch of the Blackwell GPU and significant updates to the Omniverse and robotics platforms.

During the keynote in D.C., Jensen Huang focused on American AI leadership and announced several key initiatives.

  • Massive Blackwell GPU deployments: The company announced an expansion of its Blackwell GPU architecture, which first launched in March 2024. Reportedly, the company has already shipped 6 million Blackwell chips, with orders for 14 million more by the end of 2025.
  • AI supercomputers for science: In partnership with the Department of Energy and Oracle, NVIDIA is building new AI supercomputers at Argonne National Laboratory. The largest, named “Solstice,” will deploy 100,000 Blackwell GPUs.
  • 6G infrastructure: NVIDIA announced a partnership with Nokia to develop a U.S.-based, AI-native 6G technology stack.
  • AI factories for robotics: A new AI Factory Research Center in Virginia will use NVIDIA’s technology for building massive-scale data centers for AI.
  • Autonomous robotaxis: The company’s self-driving technology, already adopted by several carmakers, will be used by Uber for an autonomous fleet of 100,000 robotaxis starting in 2027.


Nvidia and Uber team up to develop network of self-driving cars — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) and Uber (UBER) on Tuesday revealed that they’re working to put together what they say will be the world’s largest network of Level 4-ready autonomous cars.

The duo will build out 100,000 vehicles beginning in 2027 using Nvidia’s Drive AGX Hyperion 10 platform and Drive AV software.


Nvidia stock hits all-time high, nears $5 trillion market cap after slew of updates at GTC event — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) stock on Tuesday rose 5% to close at a record high after the company announced a slew of product updates, partnerships, and investment initiatives at its GTC event in Washington, D.C., putting it on the doorstep of becoming the first company in history with a market value above $5 trillion.

The AI chip giant is approaching the threshold — settling at a market cap of $4.89 trillion on Tuesday — just months after becoming the first to close above $4 trillion in July.


 

There is no God Tier video model — from downes.ca by Stephen Downes

From DSC:
Stephen has some solid reflections and asks some excellent questions in this posting, including:

The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?


A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido
Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.

The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator. This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.


Claude for Life Sciences — from anthropic.com

Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.

We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.

Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.

To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.


AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero
Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers

Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:

  • Drafting and tone calibration:
  • Language scaffolding:
  • Structure and templates:
  • Summarise and prioritise:
  • Accessibility by default:
  • Idea generation for pedagogy:
  • Translation and cultural mediation:

Beyond learning design: supporting pedagogical innovation in response to AI — from timeshighereducation.com by Charlotte von Essen
To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning


The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
A keynote at the AI and the Liberal Arts Symposium Conference

This past weekend, I had the honor of being the keynote speaker at a really fantastic conference, AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before in my interview with Lori Looney. It was an incredible conference, thoughtfully composed with a lot of things to chew on and think about.

It was also a brand-new talk, in a slightly different context from many of my other talks and workshops, and something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.

It was a real challenge, one I had been working on, off and on, for months, trying to find the right balance. It’s work I feel proud of because of the balancing act it tries to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with a CC license).

 
© 2025 | Daniel Christian