I need to be honest with you. I’ve been running experiments this week with Claude Code and Opus 4.6, and we have reached the precipice in the collapse of time required to produce high-quality text-based ID outputs.
This includes performance consulting reports, learning needs analyses, action mapping, scripts, storyboards, facilitator guides, rubrics, and technical specs.
I just mapped the entire performance consulting process into a multimodal AI integration architecture (diagram image). Every phase. Entry and contracting. Performance analysis. Cause analysis. Solution design. Implementation. Evaluation. Thirty files. System specifications for each. The next step is to vet out each “skill” with an expert performance consultant.
Then I attempted a learning output: an 8-module course built with a cognitive scaffold that moves beyond content delivery to facilitate deliberate practice, meaning-making, and guided reflection within the learner’s own context.
AI adaptive learning can adapt learning in real-time. These tools have the potential to provide a more personalized learning experience, but only if used properly.
The California State University system uses ChatGPT Edu (OpenAI, 2025). Students use it for AI-assisted tutoring, study aids, and writing support. These resources provide 24/7 availability of subject-matter expertise tailored to students’ learning needs. It is not a replacement for professors. Rather, it extends the reach of mentorship by reducing access barriers.
However, we must proceed with intellectual humility and ethical responsibility. Even though AI can customize messages, it cannot replace the encouragement of a teacher or professor, or the social and emotional aspects of learning. It’s at the intersection of humanistic values and knowledge development that education must find its balance.
At CES 2026, Everything Is AI. What Matters Is How You Use It — from wired.com by Boone Ashworth Integrated chatbots and built-in machine intelligence are no longer standout features in consumer tech. If companies want to win in the AI era, they’ve got to hone the user experience.
Beyond Wearables
Right now, AI is on your face and arms—smart glasses and smart watches—but this year will see it proliferate further into products like earbuds, headphones, and smart clothing.
Health tech will see an influx of AI features too, as companies aim to use AI to monitor biometric data from wearables like rings and wristbands. Heath sensors will also continue to show up in newer places like toilets, bath mats, and brassieres.
The smart home will continue to be bolstered by machine intelligence, with more products that can listen, see, and understand what’s happening in your living space. Familiar candidates for AI-powered upgrades like smart vacuums and security cameras will be joined by surprising AI bedfellows like refrigerators and garage door openers.
After a year of bot battles, one thing stands out: There is no single best AI. The smartest way to use chatbots today is to pick different tools for different jobs — and not assume one bot can do it all.
Some enterprise platforms now support cross-agent communication and integration with ecosystems maintained by companies like Microsoft, NVIDIA, Google, and Oracle. These cross-platform data fabrics break down silos and turn isolated AI pilots into enterprise-wide services. The result is an IT backbone that not only automates but also collaborates for continuous learning, diagnostics, and system optimization in real time.
It’s difficult to think of any single company that had a bigger impact on Wall Street and the AI trade in 2025 than Nvidia (NVDA).
…
Nvidia’s revenue soared in 2025, bringing in $187.1 billion, and its market capitalization continued to climb, briefly eclipsing the $5 trillion mark before settling back in the $4 trillion range.
There were plenty of major highs and deep lows throughout the year, but these 15 were among the biggest moments of Nvidia’s 2025.
You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand.
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD On Custom Instructions with GenAI Tools….
I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.
I’ve been in a few conversations lately that remind me that not everyone knows about them, even some of the seasoned folks around GenAI and how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!
I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.
The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.
Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.
A new source of legal intelligence has entered the legal sector.
…
Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.
The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.
Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”
…
The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.
The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.
Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.
As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.
The above item was mentioned by Grant Harvey out at The Neuron in the following posting:
Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.
The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.
The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.
Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.
TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.
Nvidia has officially become the first company in history to cross the $5 trillion market cap, cementing its position as the undisputed leader of the AI era. Just three months ago, the chipmaker hit $4 trillion; it’s already added another trillion since.
Nvidia market cap milestones:
Jan 2020: $144 billion
May 2023: $1 trillion
Feb 2024: $2 trillion
Jun 2024: $3 trillion
Jul 2025: $4 trillion
Oct 2025: $5 trillion
The above posting linked to:
Nvidia becomes first public company worth $5 trillion — from techcrunch.com by Ivan Mehta The biggest beneficiary of the ongoing AI boom, Nvidia has become the first public company to pass the $5 trillion market cap milestone.
My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.
Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.
With this in mind, I put together 10 use cases for Atlas for you to try for yourself.
…
6. Retrieval Practice
What: Pulling information from memory drives retention better than re-reading. Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017). Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.” Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.
From DSC: A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.
The Bull and Bear Case For the AI Bubble, Explained — from theneuron.ai by Grant Harvey AI is both a genuine technological revolution and a massive financial bubble, and the defining question is whether miraculous progress can outrun the catastrophic, multi-trillion-dollar cost required to achieve it.
This sets the stage for the defining conflict of our technological era. The narrative has split into two irreconcilable realities. In one, championed by bulls like venture capitalist Marc Andreessen and NVIDIA CEO Jensen Huang, we are at the dawn of “computer industry V2”—a platform shift so profound it will unlock unprecedented productivity and reshape civilization.
In the other, detailed by macro investors like Julien Garran and forensic bears like writer Ed Zitron, AI is a historically massive, circular, debt-fueled mania built on hype, propped up by a handful of insiders, and destined for a collapse that will make past busts look quaint.
This is a multi-layered conflict playing out across public stock markets, the private venture ecosystem, and the fundamental unit economics of the technology itself. To understand the future, and whether it holds a revolution, a ruinous crash, or a complex mixture of both, we must dissect every layer of the argument, from the historical parallels to the hard financial data and the technological critiques that question the very foundation of the boom.
From DSC:
I second what Grant said at the beginning of his analysis:
**The following is shared for educational purposes and is not intended to be financial advice; do your own research!
But I post this because Grant provides both sides of the argument very well.
In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.
Sam Altman kicks off DevDay 2025 with a keynote to explore ideas that will challenge how you think about building. Join us for announcements, live demos, and a vision of how developers are reshaping the future with AI.
Commentary from The Rundown AI:
Why it matters: OpenAI is turning ChatGPT into a do-it-all platform that might eventually act like a browser in itself, with users simply calling on the website/app they need and interacting directly within a conversation instead of navigating manually. The AgentKit will also compete and disrupt competitors like Zapier, n8n, Lindy, and others.
The 4 Rs framework Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
Reskilling should focus on learning future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.
… Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.
OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from heneuron.ai by Grant Harvey OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.
What Sora 2 can do
Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
Follow intricate multi-shot instructions while maintaining world state across scenes.
Create realistic background soundscapes, dialogue, and sound effects automatically.
Insert YOU into any video after a quick one-time recording (they call this “cameos”).
The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, that depicts the behind the scenes of Sora 2 launch day…
Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips
Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.
Also along the lines of creating digital video, see:
What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.
The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.
For creators, this means the skill ceiling is no longer defined by technical know-how, it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.
Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product.With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.
A growing number of U.S. law schools are now requiring students to train in artificial intelligence, marking a shift from optional electives to essential curriculum components. What was once treated as a “nice-to-have” skill is fast becoming integral as the legal profession adapts to the realities of AI tools.
From Experimentation to Obligation
Until recently, most law schools relegated AI instruction to upper-level electives or let individual professors decide whether to incorporate generative AI into their teaching. Now, however, at least eight law schools require incoming students—especially in their first year—to undergo training in AI, either during orientation, in legal research and writing classes, or via mandatory standalone courses.
Some of the institutions pioneering the shift include Fordham University, Arizona State University, Stetson University, Suffolk University, Washington University in St. Louis, Case Western, and the University of San Francisco.
There’s a vision that’s been teased Learning & Development for decades: a vision of closing the gap between learning and doing—of moving beyond stopping work to take a course, and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored, discussed for decades —but never realised. Until now…?
This week, an article published Harvard Business Review provided some some compelling evidence that a long-awaited shift from “courses to coaches” might not just be possible, but also powerful.
…
The two settings were a) traditional in-classroom workshops, led by an expert facilitator and b) AI-coaching, delivered in the flow of work.The results were compelling….
TLDR: The evidence suggests that “learning in the flow of work” is not only feasible as a result of gen AI—it also show potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.
The 10 Most Popular AI Chatbots For Educators — from techlearning.com by Erik Ofgang Educators don’t need to use each of these chatbots, but it pays to be generally aware of the most popular AI tools
I’ve spent time testing many of these AI chatbots for potential uses and abuses in my own classes, so here’s a quick look at each of the top 10 most popular AI chatbots, and what educators should know about each. If you’re looking for more detail on a specific chatbot, click the link, as either I or other Tech & Learning writers have done deeper dives on all these tools.
Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.
…
As experts in educational technology, digital literacy, and organizational change, we argue that higher education must seize this moment to rethink not just how we use AI, but how we structure and deliver learning altogether.
Over the past decade, microschools — experimental small schools that often have mixed-age classrooms — have expanded.
…
Some superintendents have touted the promise of microschools as a means for public schools to better serve their communities’ needs while still keeping children enrolled in the district. But under a federal administration that’s trying to dismantle public education and boost homeschool options, others have critiqued poor oversight and a lack of information for assessing these models.
Microschools offer a potential avenue to bring innovative, modern experiences to rural areas, argues Keith Parker, superintendent of Elizabeth City-Pasquotank Public Schools.
Imagining Teaching with AI Agents… — from michellekassorla.substack.com by Michelle Kassorla Teaching with AI is only one step toward educational change, what’s next?
More than two years ago I started teaching with AI in my classes. At first I taught against AI, then I taught with AI, and now I am moving into unknown territory: agents. I played with Manus and n8n and some other agents, but I really never got excited about them. They seemed more trouble than they were worth. It seemed they were no more than an AI taskbot overseeing some other AI bots, and that they weren’t truly collaborating. Now, I’m looking at Perplexity’s Comet browser and their AI agent and I’m starting to get ideas for what the future of education might hold.
I have written several times about the dangers of AI agents and how they fundamentally challenge our systems, especially online education. I know there is no way that we can effectively stop them–maybe slow them a little, but definitely not stop them. I am already seeing calls to block and ban agents–just like I saw (and still see) calls to block and ban AI–but the truth is they are the future of work and, therefore, the future of education.
So, yes! This is my next challenge: teaching with AI agents. I want to explore this idea, and as I started thinking about it, I got more and more excited. But let me back up a bit. What is an agent and how is it different than Generative AI or a bot?
Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.
Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask a question to chatGPT, and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr am I right? Though to be fair, this kinda deal is as old as the AI industry itself.
This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”
… Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…
AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…
SKYLAND | AI Short Film Fantasy
Skyland is an AI-powered fantasy short film that takes you on a breathtaking journey with Aeryn Solveth and his loyal flying fish. From soaring above the futuristic city of Cybryne to returning to his homeland of Eryndor, Aeryn’s adventure is… https://t.co/Lz6UUxQvExpic.twitter.com/cYXs9nwTX3
What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, MidJourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.
The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.
Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.
The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
A new way to get things done with your AI browsing assistant Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome1 — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.
Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.
We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.
Well now, as the corporate learning market shifts to AI, (read the details in our study “The Revolution in Corporate Learning” ), Workday can jump ahead. This is because the $400 billion corporate training market is moving quickly to an AI-Native dynamic content approach (witness OpenAI’s launch of in-line learning in its chatbot). We’re just finishing a year-long study of this space and our detailed report and maturity model will be out in Q4. .
.
With Sana, and a few other AI-native vendors (Uplimit, Arist, Disperz, Docebo), companies can upload audios, videos, documents, and even interviews with experts and the system build learning programs in minutes. We use Sana for Galileo Learn (our AI-powered learning academy for Leadership and HR), and we now have 750+ courses and can build new programs in days instead of months.
And there’s more; this type of system gives every employee a personalized, chat-based experience to learn.