A New AI Career Ladder — from ssir.org (Stanford Social Innovation Review) by Bruno V. Manno; via Matt Tower The changing nature of jobs means workers need new education and training infrastructure to match.
AI has cannibalized the routine, low-risk work tasks that used to teach newcomers how to operate in complex organizations. Without those task rungs, the climb up the opportunity ladder into better employment options becomes steeper—and for many, impossible. This is not a temporary glitch. AI is reorganizing work, reshaping what knowledge and skills matter, and redefining how people are expected to acquire them.
The consequences ripple from individual career starts to the broader American promise of economic and social mobility, which includes both financial wealth and social wealth that comes from the networks and relationships we build. Yet the same technology that complicates the first job can help us reinvent how experience is earned, validated, and scaled. If we use AI to widen—not narrow—access to education, training, and proof of knowledge and skill, we can build a stronger career ladder to the middle class and beyond. A key part of doing this is a redesign of education, training, and hiring infrastructure.
…
What’s needed is a redesigned model that treats work as a primary venue for learning, validates capability with evidence, and helps people keep climbing after their first job. Here are ten design principles for a reinvented education and training infrastructure for the AI era.
Create hybrid institutions that erase boundaries. …
Make work-based learning the default, not the exception. …
Create skill adjacencies to speed transitions. …
Place performance-based hiring at the core. …
Ongoing supports and post-placement mobility. …
Portable, machine-readable credentials with proof attached. …
Law firm leaders should evaluate their legal technology and decide if they are truly helping legal work or causing a disconnect between human and AI contributions.
75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.
Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.
To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, Lexis Nexis, Thomson Reuters, and NetDocuments.
When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.
It begins with a basic reversal of mindset: Stop treating AI as a threat to be policed. Start treating it as the accelerant that finally forces us to build the education we should have created decades ago.
A serious institutional response would demand — at minimum — six structural commitments:
Make high-intensity human learning the norm. …
Put active learning at the center, not the margins. …
Replace content transmission with a focus on process. …
Mainstream high-impact practices — stop hoarding them for honors students. …
Redesign assessment to make learning undeniable. …
And above all: Instructional design can no longer be a private hobby.
How to Integrate AI Developmentally into Your Courses
Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.
Now that you have a framework for how to conceptualize including AI into your courses here are a few ideas on scaffolding AI to allow students to practice using technology and develop cognitive skills.
What was encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them figure out what AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference – the issue of AI in education is fundamentally pedagogical and ethical, not just technological.
From DSC: One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.
The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.
Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.
A new source of legal intelligence has entered the legal sector.
…
Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.
The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.
Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”
…
The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.
The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.
Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.
As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.
The above item was mentioned by Grant Harvey out at The Neuron in the following posting:
Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.
The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.
The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.
Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.
TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.
Nvidia has officially become the first company in history to cross the $5 trillion market cap, cementing its position as the undisputed leader of the AI era. Just three months ago, the chipmaker hit $4 trillion; it’s already added another trillion since.
Nvidia market cap milestones:
Jan 2020: $144 billion
May 2023: $1 trillion
Feb 2024: $2 trillion
Jun 2024: $3 trillion
Jul 2025: $4 trillion
Oct 2025: $5 trillion
The above posting linked to:
Nvidia becomes first public company worth $5 trillion — from techcrunch.com by Ivan Mehta The biggest beneficiary of the ongoing AI boom, Nvidia has become the first public company to pass the $5 trillion market cap milestone.
My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.
Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.
With this in mind, I put together 10 use cases for Atlas for you to try for yourself.
…
6. Retrieval Practice
What: Pulling information from memory drives retention better than re-reading. Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017). Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.” Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.
From DSC: A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.
On Wednesday [October 29th, 2025], I’m launching the Beta version of an Education Accountability Website (”EDU Accountability Lab”). It analyzes federal student aid, institutional outcomes, and accountability metrics across 6,000+ colleges and universities in the US.
Our Mission The EDU Accountability Lab delivers independent, data-driven analysis of higher education with a focus on accountability, affordability, and outcomes. Our audience includes policymakers, researchers, and taxpayers who seek greater transparency and effectiveness in postsecondary education. We take no advocacy position on specific institutions, programs, metrics, or policies. Our goal is to provide clear and well-documented methods that support policy discussions, strengthen institutional accountability, and improve public understanding of the value of higher education.
But right now, there’s one area demanding urgent attention.
Starting July 1, 2026, every degree program at every institution receiving federal student aid must prove its graduates earn more than people without that credential—or lose Title IV eligibility.
This isn’t about institutions passing or failing. It’s about programs. Every Bachelor’sin Psychology. Every Master’s in Education. Every Associate in Nursing. Each one assessed separately. Each one facing the same pass-or-fail tests.
Leadership capacity must expand. Presidents and leaders are now expected to be fundraisers, policy navigators, cultural change agents, and data-informed strategists. Leadership can no longer be about a single individual—it must be a team sport. AACC is charged with helping you and your teams build these capacities through leadership academies, peer learning communities, and practical toolkits.
The strength of our network is our greatest asset. No college faces its challenges alone, because within our membership there are leaders who have already innovated, stumbled, and succeeded. Resilient by Design urges AACC to serve as the connector and amplifier of this collective wisdom, developing playbooks and scaling proven practices in areas from guided pathways to artificial intelligence to workforce partnerships.
Innovation in models and tools is urgent. Budgets must be strategic, business models must be reimagined, and ROI must be proven—not only to funders and policymakers, but to the students and communities we serve. Community colleges must claim their role as engines of economic vitality and social mobility, advancing both immediate workforce needs and long-term wealth-building for students.
Policy engagement must be deepened. Federal advocacy remains essential, but the daily realities of our institutions are shaped by state and regional policy. AACC will increasingly support members with state-level resources, legislative templates, and partnerships that equip you to advocate effectively in your unique contexts.
Employer engagement must become transformational. Students deserve not just degrees, but careers. The report challenges us to create career-connected colleges where employers co-design curricula, offer meaningful work-based learning, and help ensure graduates are not just prepared for today’s jobs but resilient for tomorrow’s.
In that spirit, in this post I examine a report from Virginia’s Joint Legislative Audit and Review Commission (JLARC) on Virginia’s Community Colleges and the changing higher-education landscape. The report offers a rich view of how several major issues are evolving at the institutional level over time, an instructive case study in big changes and their implications.
Its empirical depth also prompts broader questions we should ask across higher education.
What does the shift toward career education and short-term training mean for institutional costs and funding?
How do we deliver effective student supports as enrollment moves online?
As demand shifts away from on-campus learning, do physical campuses need to get smaller?
Are we seeing a generalizable movement from academic programs to CTE to short-term options? If so, what does that imply for how community colleges are staffed and funded?
As online learning becomes a larger, permanent share of enrollment, do student services need a true bimodal redesign, built to serve both online and on-campus students effectively? Evidence suggests this urgent question is not being addressed, especially in cash-strapped community colleges.
As online learning grows, what happens to physical campuses? Improving space utilization likely means downsizing, which carries other implications. Campuses are community anchors, even for online students—so finding the right balance deserves serious debate.
From DSC: Stephen has some solid reflections and asks some excellent questions in this posting, including:
The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?
A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.
The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator.Footnote1 This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.
Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.
We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.
Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.
To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.
AI as an access tool for neurodiverse and international staff— from timeshighereducation.com by Vanessa Mar-Molinero Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers
Where AI helps without cutting academic corners When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:
The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D. A keynote at the AI and the Liberal Arts Symposium Conference
This past weekend, I had the honor to be the keynote speaker at a really fantstistic conferece, AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before with my interview with Lori Looney. It was an incredible conference, thoughtfully composed with a lot of things to chew on and think about.
It was also an entirely brand new talk in a slightly different context from many of my other talks and workshops. It was something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.
It was a real challenge and one I’ve been working on and off for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).
A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.
Veo 3.1 brings richer audio and object-level editing to Google Flow
Sora 2 is here with Cameo self-insertion and collaborative Remix features
Ray3 brings world-first reasoning and HDR to video generation
Kling 2.5 Turbo delivers faster, cheaper, more consistent results
WAN 2.5 revolutionizes talking head creation with perfect audio sync
House of David Season 2 Trailer
HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
Image & Video Prompts
From DSC: By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. LikeThe Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions at the time. Both series are an answer to prayer for me and many others — as they are professionally-done. Both series match anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc. Both are very well done. .
[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.
AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.
With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.
ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg Chat GPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed
OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.
Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.
The latest entry in AI browsers is Atlas – A new browser from OpenAI. Atlas would feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything for real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.
One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.
Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.
The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.
Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.
The challenging U.S. labor market is entering a new normal, according to Goldman Sachs economists David Mericle and Pierfrancesco Mei, who tackled the phenomenon of “jobless growth” in an Oct. 13 note. It resonates with what Federal Reserve Chair Jerome Powell memorably described in September as a “low-hire, low-fire” labor market, in which, for some reason, “kids coming out of college and younger people, minorities, are having a hard time finding jobs.”
Some analysts blame the downturn in entry-level hiring on the impact of AI on the economy, others on macroeconomic uncertainty, especially the seesawing tariffs regime from the Trump administration. The takeaway is clear, though, that getting hired is really hard in the mid-2020s.
This shift is clear in data collated by the investment bank. Payroll growth by industry shows almost all sectors outside health care posting weak, zero, or even negative net job creation, despite otherwise solid macroeconomic indicators. Meanwhile, the share of executives who mention both AI and employment in the same context on earnings calls has reached historic highs.?
…
For now, Mericle’s “low-hire, low-fire” diagnosis serves as both warning and guide: Jobless growth may not mean mass layoffs, but it does mean fewer opportunities for job seekers and slower rebounds from economic shocks in the years to come.?
Experience AI: A new architecture of learning
Experience AI represents a new architecture for learning — one that prioritizes continuity, agency and deep personalization. It fuses three dimensions into a new category of co-intelligent systems:
Agentic AI that evolves with the learner, not just serves them
Persona-based AI that adapts to individual goals, identities and motivations
Multimodal AI that engages across text, voice, video, simulation and interaction
Experience AI brings learning into context. It powers personalized, problem-based journeys where students explore ideas, reflect on progress and co-create meaning — with both human and machine collaborators.
In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.