Coursera to Combine with Udemy to Empower the Global Workforce with Skills for the AI Era — from investor.coursera.com

Highly Complementary Capabilities Will Create a Leading Technology Platform, Redefining Skills Discovery, Development, and Mastery for Learners and Organizations at Scale

Unites Udemy’s Dynamic AI-Powered Skills Development Marketplace with World-Class University and Industry Brands Under the Coursera Ecosystem, Expanding Value, Impact, and Choice Globally

Strengthens Combined Company’s Financial Profile with Pro Forma Annual Revenue of More Than $1.5 Billion and Anticipated Annual Run-Rate Cost Synergies of $115 Million Within 24 Months

“We’re at a pivotal moment in which AI is rapidly redefining the skills required for every job across every industry. Organizations and individuals around the world need a platform that is as agile as the new and emerging skills learners must master,” said Greg Hart, CEO of Coursera. “By combining the highly complementary strengths of Coursera and Udemy, we will be in an even stronger position to address the global talent transformation opportunity, unlock a faster pace of innovation, and deliver valuable experiences and outcomes for our learners and customers. Together, we will ensure our millions of learners, thousands of enterprise, university, and government customers, and expert instructors have a platform to keep pace with technology acceleration.”

 
 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem in fragmenting, and the concept of “user sovereignty” over by AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.

In this post, we discuss how to introduce and integrate Cogniti agents into the learning environment so students understand their context, interact effectively, and see their value as customised learning companions.

In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.

Example/excerpt:



 

7 Legal Tech Trends That Will Reshape Every Business In 2026 — from forbes.com by Bernard Marr

Here are the trends that will matter most.

  1. AI Agents As Legal Assistants
  2. AI As A Driver Of Business Strategy
  3. Automation In Judicial Administration
  4. Always-On Compliance Monitoring
  5. Cybersecurity As An Essential Survival Tool
  6. Predictive Litigation
  7. Compliance As Part Of The Everyday Automation Fabric

According to the Thomson Reuters Future Of Professionals report, most experts already expect AI to transform their work within five years, with many viewing it as a positive force. The challenge now is clear: legal and compliance leaders must understand the tools reshaping their field and prepare their teams for a very different way of working in 2026.


Addendum on 12/17/25:

 

Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage
As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.

Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.


DEEP DIVE: The AI user interface of the future = Voice — from theneurondaily.com by Grant Harvey
PLUS: Gemini 3.0 and Microsoft’s new voice features

Here’s the thing: voice is finally good enough to replace typing now. And I mean actually good enough, not “Siri, play Despacito” good enough.

To Paraphrase Andrej Karpathy’s famous quote, “the hottest new programming language is English”, in this case, the hottest new user interface is talking.

The Great Convergence: Why Voice Is Having Its Moment
Three massive shifts just collided to make voice interfaces inevitable.

    1. First, speech recognition stopped being terrible. …
    2. Second, our devices got ears everywhere. …
    3. Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …

Introducing group chats in ChatGPT — from openai.com
Collaborate with others, and ChatGPT, in the same conversation.

Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.

Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.

Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.




 

New Study: Business As Usual Could Doom Dozens Of New England Colleges — from forbes.com by Michael B. Horn

The cause of the challenges isn’t one single factor, but a series of pressures from demographic changes, shifts in the public’s perception of higher education’s value, rising operating costs, emerging alternatives to traditional colleges, and, of late, changes in federal policies and programs. The net effect is that many institutions are much closer to the brink of closure than ever before.

What’s daunting is that flat enrollment is almost certainly an overly optimistic scenario.

If enrollment at the 44 schools falls by 15 percent over the next four years and business proceeds as usual, then 28 of the schools will have less than 10 years of cash and unrestricted quasi-endowments before they would become insolvent—assuming no major cuts, additional philanthropy, new debt, or asset sales. Fourteen would have less than five years before insolvency.

Also see:

From DSC:
The cultures at many institutions of traditional higher education will make some of the necessary changes and strategies (that Michael and Steven discuss) very hard to make. For example, to merge with another institution or institutions. Such a strategy could be very challenging to implement, even as alternatives continue to emerge.

 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about them, even some of the seasoned folks around GenAI and how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.

 

A New AI Career Ladder — from ssir.org (Stanford Social Innovation Review) by Bruno V. Manno; via Matt Tower
The changing nature of jobs means workers need new education and training infrastructure to match.

AI has cannibalized the routine, low-risk work tasks that used to teach newcomers how to operate in complex organizations. Without those task rungs, the climb up the opportunity ladder into better employment options becomes steeper—and for many, impossible. This is not a temporary glitch. AI is reorganizing work, reshaping what knowledge and skills matter, and redefining how people are expected to acquire them.

The consequences ripple from individual career starts to the broader American promise of economic and social mobility, which includes both financial wealth and social wealth that comes from the networks and relationships we build. Yet the same technology that complicates the first job can help us reinvent how experience is earned, validated, and scaled. If we use AI to widen—not narrow—access to education, training, and proof of knowledge and skill, we can build a stronger career ladder to the middle class and beyond. A key part of doing this is a redesign of education, training, and hiring infrastructure.

What’s needed is a redesigned model that treats work as a primary venue for learning, validates capability with evidence, and helps people keep climbing after their first job. Here are ten design principles for a reinvented education and training infrastructure for the AI era.

  1. Create hybrid institutions that erase boundaries. …
  2. Make work-based learning the default, not the exception. …
  3. Create skill adjacencies to speed transitions. …
  4. Place performance-based hiring at the core. 
  5. Ongoing supports and post-placement mobility. 
  6. Portable, machine-readable credentials with proof attached. 
  7. …plus several more…
 

Six Transformative Technology Trends Impacting the Legal Profession — from americanbar.org

Summary

  • Law firm leaders should evaluate their legal technology and decide if they are truly helping legal work or causing a disconnect between human and AI contributions.
  • 75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
  • The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.

Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.

To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, Lexis Nexis, Thomson Reuters, and NetDocuments.

When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.

  1. Generative AI and AI-Assisted Drafting …
  2. Cloud-Based Practice Management…
  3. Cybersecurity and Data Privacy…
  4. Flat Fee and Alternative Billing Models…
  5. Legal Analytics and Data-Driven Decision Making…
  6. Virtual Law Firms and Remote Work…
 

KPMG wants junior consultants to ditch the grunt work and hand it over to teams of AI agents — from businessinsider.com by Polly Thompson

The Big Four consulting and accounting firm is training its junior consultants to manage teams of AI agents — digital assistants capable of completing tasks without human input.

“We want juniors to become managers of agents,” Niale Cleobury, KPMG’s global AI workforce lead, told Business Insider in an interview.

KPMG plans to give new consulting recruits access to a catalog of AI agents capable of creating presentation slides, analyzing data, and conducting in-depth research, Cleobury said.

The goal is for these agents to perform much of the analytical and administrative work once assigned to junior consultants, allowing them to become more involved in strategic decisions.


From DSC:
For a junior staff member to provide quality assurance in working with agents, an employee must know what they’re talking about in the first place. They must have expertise and relevant knowledge. Otherwise, how will they spot the hallucinations?

So the question is, how can businesses build such expertise in junior staff members while they are delegating things to an army of agents? This question applies to the next posting below as well. Having agents report to you is all well and good — IF you know when the agents are producing helpful/accurate information and when they got things all wrong.


This Is the Next Vital Job Skill in the AI Economy — from builtin.com by Saurabh Sharma
The future of tech work belongs to AI managers.

Summary: A fundamental shift is making knowledge workers “AI managers.” The most valuable employees will direct intelligent AI agents, which requires new competencies: delegation, quality assurance and workflow orchestration across multiple agents. Companies must bridge the training gap to enable this move from simple software use to strategic collaboration with intelligent, yet imperfect, systems.

The shift is happening subtly, but it’s happening. Workers are learning to prompt agents, navigate AI capabilities, understand failure modes and hand off complex tasks to AI. And if they haven’t started yet, they probably will: A new study from IDC and Salesforce found that 72 percent of CEOs think most employees will have an AI agent reporting to them within five years. This isn’t about using a new kind of software tool — it’s about directing intelligent systems that can reason, search, analyze and create.

Soon, the most valuable employees won’t just know how to use AI; they’ll know how to manage it. And that requires a fundamentally different skill set than anything we’ve taught in the workplace before.


AI agents failed 97% of freelance tasks; here’s why… — from theneurondaily.com by Grant Harvey

AI Agents Can’t Actually Do Your Job (Yet)—New Benchmark Reveals The Gap

DEEP DIVE: AI can make you faster at your job, but can only do 2-3% of jobs by itself.

The hype: AI agents will automate entire workflows! Replace freelancers! Handle complex tasks end-to-end!

The reality: a measly 2-3% completion rate.

See, Scale AI and CAIS just released the Remote Labor Index (paper), a benchmark where AI agents attempted real freelance tasks. The best-performing model earned just $1,810 out of $143,991 in available work, and yes, finishing only 2-3% of jobs.



 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

At the most recent NVIDIA GTC conference, held in Washington, D.C. in October 2025, CEO Jensen Huang announced major developments emphasizing the use of AI to “reindustrialize America”. This included new partnerships, expansion of the Blackwell architecture, and advancements in AI factories for robotics and science. The spring 2024 GTC conference, meanwhile, was headlined by the launch of the Blackwell GPU and significant updates to the Omniverse and robotics platforms.

During the keynote in D.C., Jensen Huang focused on American AI leadership and announced several key initiatives.

  • Massive Blackwell GPU deployments: The company announced an expansion of its Blackwell GPU architecture, which first launched in March 2024. Reportedly, the company has already shipped 6 million Blackwell chips, with orders for 14 million more by the end of 2025.
  • AI supercomputers for science: In partnership with the Department of Energy and Oracle, NVIDIA is building new AI supercomputers at Argonne National Laboratory. The largest, named “Solstice,” will deploy 100,000 Blackwell GPUs.
  • 6G infrastructure: NVIDIA announced a partnership with Nokia to develop a U.S.-based, AI-native 6G technology stack.
  • AI factories for robotics: A new AI Factory Research Center in Virginia will use NVIDIA’s technology for building massive-scale data centers for AI.
  • Autonomous robotaxis: The company’s self-driving technology, already adopted by several carmakers, will be used by Uber for an autonomous fleet of 100,000 robotaxis starting in 2027.


Nvidia and Uber team up to develop network of self-driving cars — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) and Uber (UBER) on Tuesday revealed that they’re working to put together what they say will be the world’s largest network of Level 4-ready autonomous cars.

The duo will build out 100,000 vehicles beginning in 2027 using Nvidia’s Drive AGX Hyperion 10 platform and Drive AV software.


Nvidia stock hits all-time high, nears $5 trillion market cap after slew of updates at GTC event — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) stock on Tuesday rose 5% to close at a record high after the company announced a slew of product updates, partnerships, and investment initiatives at its GTC event in Washington, D.C., putting it on the doorstep of becoming the first company in history with a market value above $5 trillion.

The AI chip giant is approaching the threshold — settling at a market cap of $4.89 trillion on Tuesday — just months after becoming the first to close above $4 trillion in July.


 

Resilient by Design: The Future of America’s Community Colleges — from aacc.nche.edu

This report highlights several truths:

  • Leadership capacity must expand. Presidents and leaders are now expected to be fundraisers, policy navigators, cultural change agents, and data-informed strategists. Leadership can no longer be about a single individual—it must be a team sport. AACC is charged with helping you and your teams build these capacities through leadership academies, peer learning communities, and practical toolkits.
  • The strength of our network is our greatest asset. No college faces its challenges alone, because within our membership there are leaders who have already innovated, stumbled, and succeeded. Resilient by Design urges AACC to serve as the connector and amplifier of this collective wisdom, developing playbooks and scaling proven practices in areas from guided pathways to artificial intelligence to workforce partnerships.
  • Innovation in models and tools is urgent. Budgets must be strategic, business models must be reimagined, and ROI must be proven—not only to funders and policymakers, but to the students and communities we serve. Community colleges must claim their role as engines of economic vitality and social mobility, advancing both immediate workforce needs and long-term wealth-building for students.
  • Policy engagement must be deepened. Federal advocacy remains essential, but the daily realities of our institutions are shaped by state and regional policy. AACC will increasingly support members with state-level resources, legislative templates, and partnerships that equip you to advocate effectively in your unique contexts.
  • Employer engagement must become transformational. Students deserve not just degrees, but careers. The report challenges us to create career-connected colleges where employers co-design curricula, offer meaningful work-based learning, and help ensure graduates are not just prepared for today’s jobs but resilient for tomorrow’s.
 

Ground-level Impacts of the Changing Landscape of Higher Education — from onedtech.philhillaa.com by Glenda Morgan; emphasis DSC
Evidence from the Virginia Community College System

In that spirit, in this post I examine a report from Virginia’s Joint Legislative Audit and Review Commission (JLARC) on Virginia’s Community Colleges and the changing higher-education landscape. The report offers a rich view of how several major issues are evolving at the institutional level over time, an instructive case study in big changes and their implications.

Its empirical depth also prompts broader questions we should ask across higher education.

  • What does the shift toward career education and short-term training mean for institutional costs and funding?
  • How do we deliver effective student supports as enrollment moves online?
  • As demand shifts away from on-campus learning, do physical campuses need to get smaller?
  • Are we seeing a generalizable movement from academic programs to CTE to short-term options? If so, what does that imply for how community colleges are staffed and funded?
  • As online learning becomes a larger, permanent share of enrollment, do student services need a true bimodal redesign, built to serve both online and on-campus students effectively? Evidence suggests this urgent question is not being addressed, especially in cash-strapped community colleges.
  • As online learning grows, what happens to physical campuses? Improving space utilization likely means downsizing, which carries other implications. Campuses are community anchors, even for online students—so finding the right balance deserves serious debate.
 
© 2025 | Daniel Christian