Agents, robots, and us: Skill partnerships in the age of AI — from mckinsey.com by Lareina Yee, Anu Madgavkar, Sven Smit, Alexis Krivkovich, Michael Chui, María Jesús Ramírez, and Diego Castresana AI is expanding the productivity frontier. Realizing its benefits requires new skills and rethinking how people work together with intelligent machines.
At a glance
Work in the future will be a partnership between people, agents, and robots—all powered by AI. …
Most human skills will endure, though they will be applied differently. …
Our new Skill Change Index shows which skills will be most and least exposed to automation in the next five years….
Demand for AI fluency—the ability to use and manage AI tools—has grown sevenfold in two years…
By 2030, about $2.9 trillion of economic value could be unlocked in the United States…
Also related/see:
The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com Almost all survey respondents say their organizations are using AI, and many have begun to use AI agents. But most are still in the early stages of scaling AI and capturing enterprise-level value.
Welcome to the latest issue of the State of AI, an editorialized newsletter that covers the key developments in AI policy, research, industry, and start-ups over the last month.
So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.
As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.
So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):
What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.
For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out, as along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools do and how they can help.
Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.
An aside: Google is working on a new vision for textbooks that can be easily differentiated based on the beautiful success for NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.
… Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free on Google’s AI Studio), is doing something that everyone has been waiting a long time for: it adds correct text to images.
The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.
Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically like memory, for valuable context on relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.
From DSC : This should be important as we look at learning-related applications for AI.
For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good. ?
Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.
Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.
Here’s the thing: voice is finally good enough to replace typing now. And I mean actually good enough, not “Siri, play Despacito” good enough.
To Paraphrase Andrej Karpathy’s famous quote, “the hottest new programming language is English”, in this case, the hottest new user interface is talking.
The Great Convergence: Why Voice Is Having Its Moment Three massive shifts just collided to make voice interfaces inevitable.
First, speech recognition stopped being terrible. …
Second, our devices got ears everywhere. …
Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …
Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.
Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.
Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD On Custom Instructions with GenAI Tools….
I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.
I’ve been in a few conversations lately that remind me that not everyone knows about them, even some of the seasoned folks around GenAI and how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!
I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.
I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East meeting with hundred of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived: it’s real and the use-cases are growing.
A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.
What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity.
.
From DSC: Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”
While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living.
And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it:
Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.
Those stories are just beginning…they’re not close to being over.
So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?
…
Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.
The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.
But the technical parts such as the frameworks like SWOT analysis, the mechanics of comparing alternative viewpoints in a boardroom—those could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.
The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.
Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.
A new source of legal intelligence has entered the legal sector.
…
Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.
The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.
Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”
…
The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.
The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.
Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.
As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.
The above item was mentioned by Grant Harvey out at The Neuron in the following posting:
Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.
The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.
The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.
Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.
TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.
The Bull and Bear Case For the AI Bubble, Explained — from theneuron.ai by Grant Harvey AI is both a genuine technological revolution and a massive financial bubble, and the defining question is whether miraculous progress can outrun the catastrophic, multi-trillion-dollar cost required to achieve it.
This sets the stage for the defining conflict of our technological era. The narrative has split into two irreconcilable realities. In one, championed by bulls like venture capitalist Marc Andreessen and NVIDIA CEO Jensen Huang, we are at the dawn of “computer industry V2”—a platform shift so profound it will unlock unprecedented productivity and reshape civilization.
In the other, detailed by macro investors like Julien Garran and forensic bears like writer Ed Zitron, AI is a historically massive, circular, debt-fueled mania built on hype, propped up by a handful of insiders, and destined for a collapse that will make past busts look quaint.
This is a multi-layered conflict playing out across public stock markets, the private venture ecosystem, and the fundamental unit economics of the technology itself. To understand the future, and whether it holds a revolution, a ruinous crash, or a complex mixture of both, we must dissect every layer of the argument, from the historical parallels to the hard financial data and the technological critiques that question the very foundation of the boom.
From DSC:
I second what Grant said at the beginning of his analysis:
**The following is shared for educational purposes and is not intended to be financial advice; do your own research!
But I post this because Grant provides both sides of the argument very well.
Today, as part of our research collaboration with Yale University, we’re releasing Cell2Sentence-Scale 27B (C2S-Scale), a new 27 billion parameter foundation model designed to understand the language of individual cells. Built on the Gemma family of open models, C2S-Scale represents a new frontier in single-cell analysis.
This announcement marks a milestone for AI in science. C2S-Scale generated a novel hypothesis about cancer cellular behavior and we have since confirmed its prediction with experimental validation in living cells. This discovery reveals a promising new pathway for developing therapies to fight cancer.
In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.
As the healthcare world progresses from one focused on diagnostics to prognostics, the rise of agentic artificial intelligence (AI) is transforming medical technology into learning systems, a Google Cloud executive has said.
…
In a blog post, Shweta Maniar, Google Cloud’s global director of healthcare & life sciences, stated that the advancement of AI technology and healthcare ecosystems is drawing down on operational complexity for device companies and helping specialised expertise to reach more patients.
By embedding technology into medical devices, they are becoming more like pre-emptive learning systems, Shweta said.
“Looking forward, implants with monitoring capabilities will be able to track how your body reacts, how you heal, and when it’s safe to return to activities like running or surfing,” she explained.
“More importantly, they will gather data that improves the next version of that device for every future patient.”
The 4 Rs framework Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
Reskilling should focus on learning future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.
… Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.
OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from heneuron.ai by Grant Harvey OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.
What Sora 2 can do
Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
Follow intricate multi-shot instructions while maintaining world state across scenes.
Create realistic background soundscapes, dialogue, and sound effects automatically.
Insert YOU into any video after a quick one-time recording (they call this “cameos”).
The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, that depicts the behind the scenes of Sora 2 launch day…
Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips
Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.
Also along the lines of creating digital video, see:
What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.
The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.
For creators, this means the skill ceiling is no longer defined by technical know-how, it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.
Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product.With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.
A growing number of U.S. law schools are now requiring students to train in artificial intelligence, marking a shift from optional electives to essential curriculum components. What was once treated as a “nice-to-have” skill is fast becoming integral as the legal profession adapts to the realities of AI tools.
From Experimentation to Obligation
Until recently, most law schools relegated AI instruction to upper-level electives or let individual professors decide whether to incorporate generative AI into their teaching. Now, however, at least eight law schools require incoming students—especially in their first year—to undergo training in AI, either during orientation, in legal research and writing classes, or via mandatory standalone courses.
Some of the institutions pioneering the shift include Fordham University, Arizona State University, Stetson University, Suffolk University, Washington University in St. Louis, Case Western, and the University of San Francisco.
There’s a vision that’s been teased Learning & Development for decades: a vision of closing the gap between learning and doing—of moving beyond stopping work to take a course, and instead bringing support directly into the workflow. This concept of “learning in the flow of work” has been imagined, explored, discussed for decades —but never realised. Until now…?
This week, an article published Harvard Business Review provided some some compelling evidence that a long-awaited shift from “courses to coaches” might not just be possible, but also powerful.
…
The two settings were a) traditional in-classroom workshops, led by an expert facilitator and b) AI-coaching, delivered in the flow of work.The results were compelling….
TLDR: The evidence suggests that “learning in the flow of work” is not only feasible as a result of gen AI—it also show potential to be more scalable, more equitable and more efficient than traditional classroom/LMS-centred models.
The 10 Most Popular AI Chatbots For Educators — from techlearning.com by Erik Ofgang Educators don’t need to use each of these chatbots, but it pays to be generally aware of the most popular AI tools
I’ve spent time testing many of these AI chatbots for potential uses and abuses in my own classes, so here’s a quick look at each of the top 10 most popular AI chatbots, and what educators should know about each. If you’re looking for more detail on a specific chatbot, click the link, as either I or other Tech & Learning writers have done deeper dives on all these tools.
Generative artificial intelligence isn’t just a new tool—it’s a catalyst forcing the higher education profession to reimagine its purpose, values, and future.
…
As experts in educational technology, digital literacy, and organizational change, we argue that higher education must seize this moment to rethink not just how we use AI, but how we structure and deliver learning altogether.
Over the past decade, microschools — experimental small schools that often have mixed-age classrooms — have expanded.
…
Some superintendents have touted the promise of microschools as a means for public schools to better serve their communities’ needs while still keeping children enrolled in the district. But under a federal administration that’s trying to dismantle public education and boost homeschool options, others have critiqued poor oversight and a lack of information for assessing these models.
Microschools offer a potential avenue to bring innovative, modern experiences to rural areas, argues Keith Parker, superintendent of Elizabeth City-Pasquotank Public Schools.
Imagining Teaching with AI Agents… — from michellekassorla.substack.com by Michelle Kassorla Teaching with AI is only one step toward educational change, what’s next?
More than two years ago I started teaching with AI in my classes. At first I taught against AI, then I taught with AI, and now I am moving into unknown territory: agents. I played with Manus and n8n and some other agents, but I really never got excited about them. They seemed more trouble than they were worth. It seemed they were no more than an AI taskbot overseeing some other AI bots, and that they weren’t truly collaborating. Now, I’m looking at Perplexity’s Comet browser and their AI agent and I’m starting to get ideas for what the future of education might hold.
I have written several times about the dangers of AI agents and how they fundamentally challenge our systems, especially online education. I know there is no way that we can effectively stop them–maybe slow them a little, but definitely not stop them. I am already seeing calls to block and ban agents–just like I saw (and still see) calls to block and ban AI–but the truth is they are the future of work and, therefore, the future of education.
So, yes! This is my next challenge: teaching with AI agents. I want to explore this idea, and as I started thinking about it, I got more and more excited. But let me back up a bit. What is an agent and how is it different than Generative AI or a bot?
Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.
Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask a question to chatGPT, and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr am I right? Though to be fair, this kinda deal is as old as the AI industry itself.
This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”
… Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…
AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…
SKYLAND | AI Short Film Fantasy
Skyland is an AI-powered fantasy short film that takes you on a breathtaking journey with Aeryn Solveth and his loyal flying fish. From soaring above the futuristic city of Cybryne to returning to his homeland of Eryndor, Aeryn’s adventure is… https://t.co/Lz6UUxQvExpic.twitter.com/cYXs9nwTX3
What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, MidJourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.
The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.
Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.
The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
A new way to get things done with your AI browsing assistant Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome1 — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.
Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.
We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.
Well now, as the corporate learning market shifts to AI, (read the details in our study “The Revolution in Corporate Learning” ), Workday can jump ahead. This is because the $400 billion corporate training market is moving quickly to an AI-Native dynamic content approach (witness OpenAI’s launch of in-line learning in its chatbot). We’re just finishing a year-long study of this space and our detailed report and maturity model will be out in Q4. .
.
With Sana, and a few other AI-native vendors (Uplimit, Arist, Disperz, Docebo), companies can upload audios, videos, documents, and even interviews with experts and the system build learning programs in minutes. We use Sana for Galileo Learn (our AI-powered learning academy for Leadership and HR), and we now have 750+ courses and can build new programs in days instead of months.
And there’s more; this type of system gives every employee a personalized, chat-based experience to learn.