A New AI Career Ladder — from ssir.org (Stanford Social Innovation Review) by Bruno V. Manno; via Matt Tower
The changing nature of jobs means workers need new education and training infrastructure to match.

AI has cannibalized the routine, low-risk work tasks that used to teach newcomers how to operate in complex organizations. Without those task rungs, the climb up the opportunity ladder into better employment options becomes steeper—and for many, impossible. This is not a temporary glitch. AI is reorganizing work, reshaping what knowledge and skills matter, and redefining how people are expected to acquire them.

The consequences ripple from individual career starts to the broader American promise of economic and social mobility, which includes both financial wealth and social wealth that comes from the networks and relationships we build. Yet the same technology that complicates the first job can help us reinvent how experience is earned, validated, and scaled. If we use AI to widen—not narrow—access to education, training, and proof of knowledge and skill, we can build a stronger career ladder to the middle class and beyond. A key part of doing this is a redesign of education, training, and hiring infrastructure.

What’s needed is a redesigned model that treats work as a primary venue for learning, validates capability with evidence, and helps people keep climbing after their first job. Here are ten design principles for a reinvented education and training infrastructure for the AI era.

  1. Create hybrid institutions that erase boundaries. …
  2. Make work-based learning the default, not the exception. …
  3. Create skill adjacencies to speed transitions. …
  4. Place performance-based hiring at the core. 
  5. Ongoing supports and post-placement mobility. 
  6. Portable, machine-readable credentials with proof attached. 
  7. …plus several more…
 

Net tuition rises at colleges, but costs are far below their peaks — from highereddive.com by Ben Unglesbee
The prices students and their families paid after aid at four-year public colleges and private nonprofits ticked up in 2025-26, per College Board estimates.

Dive Brief:

  • The average tuition and fees paid by students and their families after aid rose slightly for the 2025-26 academic year but remain well below historic peaks, according to the latest higher education pricing study from the College Board.
  • At public four-year colleges, net tuition and fees for first-time, full-time students increased just 1.3% to $2,300 from last year, when adjusted for inflation, according to the College Board’s estimates. That figure is down 48.3% from the peak in 2012-2013.
  • At private nonprofits, net tuition and fees for first-time, full-time students rose 3.7% annually to $16,910 in the 2025-26 year, when adjusted for inflation. By comparison, that’s down 14.6% from the peak for private colleges in 2006-07.

Class of 2025 says they see the effects of a tough job market — from hrdive.com by Kathryn Moody
Young workers have been particularly exposed to the changes brought by artificial intelligence tools, some research has indicated.

The Class of 2025 faced a particularly tough job market, searching for jobs earlier, submitting more applications — averaging 10 applications to the Class of 2024’s six — and receiving fewer offers on average, a National Association of Colleges and Employers study said in a recent report, in partnership with Indeed.

Graduates were more likely to accept those offers, however, even amid uncertainty; 86.7% of those offered a job had accepted in 2025, compared to 81.2% of 2024 graduates.

“Compared to earlier classes, they were more likely to say they were unsure about their plans, and more were planning to enter the military, suggesting they were unsure about private-sector employment,” NACE said in an Oct. 30 announcement regarding the report.


An addendum from DSC:
While we’re talking the workplace, careers, jobs, and such involving higher education, also see:

Careers in Educational Development with Leslie Cramblet Alvarez and Chris Hakala — from intentionalteaching.buzzsprout.com by Derek Bruff

On the show today I talk with Leslie Cramblet Alvarez and Chris Hakala, authors of the new book Understanding Educational Developers: Tales from the Center from Routledge Press. The book blends scholarship and personal narratives to explore the career trajectories of the professionals who work at CTLs (Centers for Teaching & Learning). How do academics move into these careers? And what can these careers look like over time?

Leslie Cramblet Alvarez is assistant vice provost and director of the Office of Teaching and Learning at the University of Denver. Chris Hakala is director for the Center for Excellence in Teaching, Learning, and Scholarship and professor of psychology at Springfield College.

I wanted to talk with Chris and Leslie about what they discovered while writing their book. I also wanted to know what advice they had for navigating educational development careers here in the U.S. in 2025, with higher education under attack from the federal government, a looming demographic cliff affecting enrollment and tuition, and a budget situation that for more institutions is not rosy. Leslie and Chris offer advice for faculty considering a move into a faculty development role, as well as for those of us current working at CTLs trying to plan our careers.

 

News deserts hit new high and 50 million have limited access to local news, study finds — from medill.northwestern.edu
Federal funding cuts to public broadcasting may accelerate local news crisis

EVANSTON, ILL. — The number of local news deserts in the U.S. jumped to record levels this year as newspaper closures continued unabated, and funding cuts to public radio could worsen the problem in coming months, according to the Medill State of Local News Report 2025 released today.

While the local news crisis deepened overall, Medill researchers found cause for optimism — more than 300 local news startups have launched over the past five years, 80% of which were digital-only outlets.

For the fourth consecutive year, the Medill Local News Initiative at Northwestern University’s Medill School of Journalism, Media, Integrated Marketing Communications conducted a months-long, county-by-county survey of local news organizations to identify trends in the rapidly morphing local media landscape. Researchers looked at local newspapers, digital-only sites, ethnic media and public broadcasters.


               


How Local Newsrooms Are Rethinking Political Coverage — from adigaskell.org

For decades, election reporting in the U.S. has leaned heavily on the “horse race”—who’s up, who’s down, and who’s raising the most money. But new research from the University of Kansas suggests that this approach is starting to shift, thanks to a national training program aimed at helping journalists better engage with their communities.

The program, called Democracy SOS, encourages reporters to move beyond headline polls and campaign drama. Instead, it asks them to focus on the issues people care about and explain how those issues are being tackled. In other words: less spectacle, more substance.

 

The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including the use of audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, the conversation has been understood only as a buzzword, delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.

 

Six Transformative Technology Trends Impacting the Legal Profession — from americanbar.org

Summary

  • Law firm leaders should evaluate their legal technology and decide if they are truly helping legal work or causing a disconnect between human and AI contributions.
  • 75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
  • The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.

Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.

To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, Lexis Nexis, Thomson Reuters, and NetDocuments.

When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.

  1. Generative AI and AI-Assisted Drafting …
  2. Cloud-Based Practice Management…
  3. Cybersecurity and Data Privacy…
  4. Flat Fee and Alternative Billing Models…
  5. Legal Analytics and Data-Driven Decision Making…
  6. Virtual Law Firms and Remote Work…
 

KPMG wants junior consultants to ditch the grunt work and hand it over to teams of AI agents — from businessinsider.com by Polly Thompson

The Big Four consulting and accounting firm is training its junior consultants to manage teams of AI agents — digital assistants capable of completing tasks without human input.

“We want juniors to become managers of agents,” Niale Cleobury, KPMG’s global AI workforce lead, told Business Insider in an interview.

KPMG plans to give new consulting recruits access to a catalog of AI agents capable of creating presentation slides, analyzing data, and conducting in-depth research, Cleobury said.

The goal is for these agents to perform much of the analytical and administrative work once assigned to junior consultants, allowing them to become more involved in strategic decisions.


From DSC:
For a junior staff member to provide quality assurance in working with agents, an employee must know what they’re talking about in the first place. They must have expertise and relevant knowledge. Otherwise, how will they spot the hallucinations?

So the question is, how can businesses build such expertise in junior staff members while they are delegating things to an army of agents? This question applies to the next posting below as well. Having agents report to you is all well and good — IF you know when the agents are producing helpful/accurate information and when they got things all wrong.


This Is the Next Vital Job Skill in the AI Economy — from builtin.com by Saurabh Sharma
The future of tech work belongs to AI managers.

Summary: A fundamental shift is making knowledge workers “AI managers.” The most valuable employees will direct intelligent AI agents, which requires new competencies: delegation, quality assurance and workflow orchestration across multiple agents. Companies must bridge the training gap to enable this move from simple software use to strategic collaboration with intelligent, yet imperfect, systems.

The shift is happening subtly, but it’s happening. Workers are learning to prompt agents, navigate AI capabilities, understand failure modes and hand off complex tasks to AI. And if they haven’t started yet, they probably will: A new study from IDC and Salesforce found that 72 percent of CEOs think most employees will have an AI agent reporting to them within five years. This isn’t about using a new kind of software tool — it’s about directing intelligent systems that can reason, search, analyze and create.

Soon, the most valuable employees won’t just know how to use AI; they’ll know how to manage it. And that requires a fundamentally different skill set than anything we’ve taught in the workplace before.


AI agents failed 97% of freelance tasks; here’s why… — from theneurondaily.com by Grant Harvey

AI Agents Can’t Actually Do Your Job (Yet)—New Benchmark Reveals The Gap

DEEP DIVE: AI can make you faster at your job, but can only do 2-3% of jobs by itself.

The hype: AI agents will automate entire workflows! Replace freelancers! Handle complex tasks end-to-end!

The reality: a measly 2-3% completion rate.

See, Scale AI and CAIS just released the Remote Labor Index (paper), a benchmark where AI agents attempted real freelance tasks. The best-performing model earned just $1,810 out of $143,991 in available work, and yes, finishing only 2-3% of jobs.



 

…the above posting links to:

Higher Ed Is Sleepwalking Toward Obsolescence— And AI Won’t Be the Cause, Just the Accelerant — from substack.com by Steven Mintz
AI Has Exposed Higher Ed’s Hollow Core — The University Must Reinvent Itself or Fade

It begins with a basic reversal of mindset: Stop treating AI as a threat to be policed. Start treating it as the accelerant that finally forces us to build the education we should have created decades ago.

A serious institutional response would demand — at minimum — six structural commitments:

  • Make high-intensity human learning the norm.  …
  • Put active learning at the center, not the margins.  …
  • Replace content transmission with a focus on process.  …
  • Mainstream high-impact practices — stop hoarding them for honors students.  …
  • Redesign assessment to make learning undeniable.  …

And above all: Instructional design can no longer be a private hobby.


Teaching with AI: From Prohibition to Partnership for Critical Thinking — from facultyfocus.com by Michael Kiener, PhD, CRC

How to Integrate AI Developmentally into Your Courses

  • Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
  • Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
  • Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
  • Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.

Now that you have a framework for how to conceptualize including AI into your courses here are a few ideas on scaffolding AI to allow students to practice using technology and develop cognitive skills.




80 per cent of young people in the UK are using AI for their schoolwork — from aipioneers.org by Graham Attwell

What was encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them figure out what AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference – the issue of AI in education is fundamentally pedagogical and ethical, not just technological.


 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

Nvidia becomes first $5 trillion company — from theaivallye.com by Barsee
PLUS: OpenAI IPO at $1 trillion valuation by late 2026 / early 2027

Nvidia has officially become the first company in history to cross the $5 trillion market cap, cementing its position as the undisputed leader of the AI era. Just three months ago, the chipmaker hit $4 trillion; it’s already added another trillion since.

Nvidia market cap milestones:

  • Jan 2020: $144 billion
  • May 2023: $1 trillion
  • Feb 2024: $2 trillion
  • Jun 2024: $3 trillion
  • Jul 2025: $4 trillion
  • Oct 2025: $5 trillion

The above posting linked to:

 

 

Adobe Reinvents its Entire Creative Suite with AI Co-Pilots, Custom Models, and a New Open Platform — from theneuron.ai by Grant Harvey
Adobe just put an AI co-pilot in every one of its apps, letting you chat with Photoshop, train models on your own style, and generate entire videos with a single subscription that now includes top models from Google, Runway, and Pika.

Adobe came to play, y’all.

At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, all these new features aren’t about replacing creators; it’s about empowering them with superpowers they can actually control.

Adobe’s new plan is to put an AI co-pilot in every single app.

  • For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
  • For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
  • The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.

Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.

On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.

From DSC:
As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.


Adobe Max 2025: all the latest creative tools and AI announcements — from theverge.com by Jess Weatherbed

The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.


Also see Adobe Delivers New AI Innovations, Assistants and Models Across Creative Cloud to Empower Creative Professionals plus other items from the News section from Adobe


 

 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What:
Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.




From DSC:
A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.


 

Is An Internship In College More Important Than The Degree Itself? — from forbes.com by Brandon Busteed

While confidence in higher education has eroded and more Americans are questioning the importance of a degree, the demand for internships among college students is skyrocketing and the odds of getting an internship at a major company are now lower than getting into the Ivy League. This begs the question: are we at a point where an internship is as valuable – or perhaps more so – than a degree itself?

While concerns about degree ROI were on the rise, the value of internships and other work-integrated learning opportunities was becoming increasingly apparent. New research and analysis have shown us how valuable it is for a student to have an internship during college: it doubles the odds they have a good job waiting for them upon graduation and doubles their odds of being engaged in their work over their lifetime. Although there are some variations in those outcomes by choice of college or academic major, those variations pale in comparison to the impact of having an internship. In short, a collegiate internship experience is a more important indicator of these outcomes than alma mater or major.

 

Chegg CEO steps down amid major AI-driven restructure — from linkedin.com by Megan McDonough

Edtech firm Chegg confirmed Monday it is reducing its workforce by 45%, or 388 employees globally, and its chief executive officer is stepping down. Current CEO Nathan Schultz will be replaced effective immediately by executive chairman (and former CEO) Dan Rosensweig. The rise of AI-powered tools has dealt a massive blow to the online homework helper and led to “substantial” declines in revenue and traffic. Company shares have slipped over 10% this year. Chegg recently explored a possible sale, but ultimately decided to keep the company intact.

 

At the most recent NVIDIA GTC conference, held in Washington, D.C. in October 2025, CEO Jensen Huang announced major developments emphasizing the use of AI to “reindustrialize America”. This included new partnerships, expansion of the Blackwell architecture, and advancements in AI factories for robotics and science. The spring 2024 GTC conference, meanwhile, was headlined by the launch of the Blackwell GPU and significant updates to the Omniverse and robotics platforms.

During the keynote in D.C., Jensen Huang focused on American AI leadership and announced several key initiatives.

  • Massive Blackwell GPU deployments: The company announced an expansion of its Blackwell GPU architecture, which first launched in March 2024. Reportedly, the company has already shipped 6 million Blackwell chips, with orders for 14 million more by the end of 2025.
  • AI supercomputers for science: In partnership with the Department of Energy and Oracle, NVIDIA is building new AI supercomputers at Argonne National Laboratory. The largest, named “Solstice,” will deploy 100,000 Blackwell GPUs.
  • 6G infrastructure: NVIDIA announced a partnership with Nokia to develop a U.S.-based, AI-native 6G technology stack.
  • AI factories for robotics: A new AI Factory Research Center in Virginia will use NVIDIA’s technology for building massive-scale data centers for AI.
  • Autonomous robotaxis: The company’s self-driving technology, already adopted by several carmakers, will be used by Uber for an autonomous fleet of 100,000 robotaxis starting in 2027.


Nvidia and Uber team up to develop network of self-driving cars — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) and Uber (UBER) on Tuesday revealed that they’re working to put together what they say will be the world’s largest network of Level 4-ready autonomous cars.

The duo will build out 100,000 vehicles beginning in 2027 using Nvidia’s Drive AGX Hyperion 10 platform and Drive AV software.


Nvidia stock hits all-time high, nears $5 trillion market cap after slew of updates at GTC event — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) stock on Tuesday rose 5% to close at a record high after the company announced a slew of product updates, partnerships, and investment initiatives at its GTC event in Washington, D.C., putting it on the doorstep of becoming the first company in history with a market value above $5 trillion.

The AI chip giant is approaching the threshold — settling at a market cap of $4.89 trillion on Tuesday — just months after becoming the first to close above $4 trillion in July.


 
© 2025 | Daniel Christian