A New AI Career Ladder — from ssir.org (Stanford Social Innovation Review) by Bruno V. Manno; via Matt Tower
The changing nature of jobs means workers need new education and training infrastructure to match.

AI has cannibalized the routine, low-risk work tasks that used to teach newcomers how to operate in complex organizations. Without those task rungs, the climb up the opportunity ladder into better employment options becomes steeper—and for many, impossible. This is not a temporary glitch. AI is reorganizing work, reshaping what knowledge and skills matter, and redefining how people are expected to acquire them.

The consequences ripple from individual career starts to the broader American promise of economic and social mobility, which includes both financial wealth and social wealth that comes from the networks and relationships we build. Yet the same technology that complicates the first job can help us reinvent how experience is earned, validated, and scaled. If we use AI to widen—not narrow—access to education, training, and proof of knowledge and skill, we can build a stronger career ladder to the middle class and beyond. A key part of doing this is a redesign of education, training, and hiring infrastructure.

What’s needed is a redesigned model that treats work as a primary venue for learning, validates capability with evidence, and helps people keep climbing after their first job. Here are ten design principles for a reinvented education and training infrastructure for the AI era.

  1. Create hybrid institutions that erase boundaries. …
  2. Make work-based learning the default, not the exception. …
  3. Create skill adjacencies to speed transitions. …
  4. Place performance-based hiring at the core. 
  5. Ongoing supports and post-placement mobility. 
  6. Portable, machine-readable credentials with proof attached. 
  7. …plus several more…
 

Six Transformative Technology Trends Impacting the Legal Profession — from americanbar.org

Summary

  • Law firm leaders should evaluate their legal technology and decide if they are truly helping legal work or causing a disconnect between human and AI contributions.
  • 75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
  • The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.

Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.

To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, Lexis Nexis, Thomson Reuters, and NetDocuments.

When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.

  1. Generative AI and AI-Assisted Drafting …
  2. Cloud-Based Practice Management…
  3. Cybersecurity and Data Privacy…
  4. Flat Fee and Alternative Billing Models…
  5. Legal Analytics and Data-Driven Decision Making…
  6. Virtual Law Firms and Remote Work…
 

…the above posting links to:

Higher Ed Is Sleepwalking Toward Obsolescence— And AI Won’t Be the Cause, Just the Accelerant — from substack.com by Steven Mintz
AI Has Exposed Higher Ed’s Hollow Core — The University Must Reinvent Itself or Fade

It begins with a basic reversal of mindset: Stop treating AI as a threat to be policed. Start treating it as the accelerant that finally forces us to build the education we should have created decades ago.

A serious institutional response would demand — at minimum — six structural commitments:

  • Make high-intensity human learning the norm.  …
  • Put active learning at the center, not the margins.  …
  • Replace content transmission with a focus on process.  …
  • Mainstream high-impact practices — stop hoarding them for honors students.  …
  • Redesign assessment to make learning undeniable.  …

And above all: Instructional design can no longer be a private hobby.


Teaching with AI: From Prohibition to Partnership for Critical Thinking — from facultyfocus.com by Michael Kiener, PhD, CRC

How to Integrate AI Developmentally into Your Courses

  • Lower-Level Courses: Focus on building foundational skills, which includes guided instruction on how to use AI responsibly. This moves the strategy beyond mere prohibition.
  • Mid-Level Courses: Use AI as a scaffold where faculty provide specific guidelines on when and how to use the tool, preparing students for greater independence.
  • Upper-Level/Graduate Courses: Empower students to evaluate AI’s role in their learning. This enables them to become self-regulated learners who make informed decisions about their tools.
  • Balanced Approach: Make decisions about AI use based on the content being learned and students’ developmental needs.

Now that you have a framework for how to conceptualize including AI into your courses here are a few ideas on scaffolding AI to allow students to practice using technology and develop cognitive skills.




80 per cent of young people in the UK are using AI for their schoolwork — from aipioneers.org by Graham Attwell

What was encouraging, though, is that students aren’t just passively accepting this new reality. They are actively asking for help. Almost half want their teachers to help them figure out what AI-generated content is trustworthy, and over half want clearer guidelines on when it’s appropriate to use AI in their work. This isn’t a story about students trying to cheat the system; it’s a story about a generation grappling with a powerful new technology and looking to their educators for guidance. It echoes a sentiment I heard at the recent AI Pioneers’ Conference – the issue of AI in education is fundamentally pedagogical and ethical, not just technological.


 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

The new legal intelligence — from jordanfurlong.substack.com by Jordan Furlong
We’ve built machines that can reason like lawyers. Artificial legal intelligence is becoming scalable, portable and accessible in ways lawyers are not. We need to think hard about the implications.

Much of the legal tech world is still talking about Clio CEO Jack Newton’s keynote at last week’s ClioCon, where he announced two major new features: the “Intelligent Legal Work Platform,” which combines legal research, drafting and workflow into a single legal workspace; and “Clio for Enterprise,” a suite of legal work offerings aimed at BigLaw.

Both these features build on Clio’s out-of-nowhere $1B acquisition of vLex (and its legally grounded LLM Vincent) back in June.

A new source of legal intelligence has entered the legal sector.

Legal intelligence, once confined uniquely to lawyers, is now available from machines. That’s going to transform the legal sector.


Where the real action is: enterprise AI’s quiet revolution in legal tech and beyond — from canadianlawyermag.com by Tim Wilbur
Harvey, Clio, and Cohere signal that organizational solutions will lead the next wave of change

The public conversation about artificial intelligence is dominated by the spectacular and the controversial: deepfake videos, AI-induced psychosis, and the privacy risks posed by consumer-facing chatbots like ChatGPT. But while these stories grab headlines, a quieter – and arguably more transformative – revolution is underway in enterprise software. In legal technology, in particular, AI is rapidly reshaping how law firms and legal departments operate and compete. This shift is just one example of how enterprise AI, not just consumer AI, is where real action is happening.

Both Harvey and Clio illustrate a crucial point: the future of legal tech is not about disruption for its own sake, but partnership and integration. Harvey’s collaborations with LexisNexis and others are about creating a cohesive experience for law firms, not rendering them obsolete. As Pereira put it, “We don’t see it so much as disruption. Law firms actually already do this… We see it as ‘how do we help you build infrastructure that supercharges this?’”

The rapid evolution in legal tech is just one example of a broader trend: the real action in AI is happening in enterprise software, not just in consumer-facing products. While ChatGPT and Google’s Gemini dominate the headlines, companies like Cohere are quietly transforming how organizations across industries leverage AI.

Also from canadianlawyermag.com, see:

The AI company’s plan to open an office in Toronto isn’t just about expanding territory – it’s a strategic push to tap into top technical talent and capture a market known for legal innovation.


Unseeable prompt injections in screenshots: more vulnerabilities in Comet and other AI browsers — from brave.com by Artem Chaikin and Shivan Kaul Sahib

Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.

As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simplysummarizing a Reddit postcould result in an attacker being able to steal money or your private data.

The above item was mentioned by Grant Harvey out at The Neuron in the following posting:


Robin AI’s Big Bet on Legal Tech Meets Market Reality — from lawfuel.com

Robin’s Legal Tech Backfire
Robin AI, the poster child for the “AI meets law” revolution, is learning the hard way that venture capital fairy dust doesn’t guarantee happily-ever-after. The London-based legal tech firm, once proudly waving its genAI-plus-human-experts flag, is now cutting staff after growth dreams collided with the brick wall of economic reality.

The company confirmed that redundancies are under way following a failed major funding push. Earlier promises of explosive revenue have fizzled. Despite around $50 million in venture cash over the past two years, Robin’s 2025 numbers have fallen short of investor expectations. The team that once ballooned to 200 is now shrinking.

The field is now swarming with contenders: CLM platforms stuffing genAI into every feature, corporate legal teams bypassing vendors entirely by prodding ChatGPT directly, and new entrants like Harvey and Legora guzzling capital to bulldoze into the market. Even Workday is muscling in.

Meanwhile, ALSPs and AI-powered pseudo-law firms like Crosby and Eudia are eating market share like it’s free pizza. The number of inhouse teams actually buying these tools at scale is still frustratingly small. And investors don’t have much patience for slow burns anymore.


Why Being ‘Rude’ to AI Could Win Your Next Case or Deal — from thebrainyacts.beehiiv.com by Josh Kubicki

TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.



 

Custom AI Development: Evolving from Static AI Systems to Dynamic Learning Agents in 2025 — community.nasscom.in

This blog explores how custom AI development accelerates the evolution from static AI to dynamic learning agents and why this transformation is critical for driving innovation, efficiency, and competitive advantage.

Dynamic Learning Agents: The Next Generation
Dynamic learning agents, sometimes referred to as adaptive or agentic AI, represent a leap forward. They combine continuous learningautonomous action, and context-aware adaptability.

Custom AI development plays a crucial role here: it ensures that these agents are designed specifically for an enterprise’s unique needs rather than relying on generic, one-size-fits-all AI platforms. Tailored dynamic agents can:

  • Continuously learn from incoming data streams
  • Make autonomous, goal-directed decisions aligned with business objectives
  • Adapt behavior in real time based on context and feedback
  • Collaborate with other AI agents and human teams to solve complex challenges

The result is an AI ecosystem that evolves with the business, providing sustained competitive advantage.

Also from community.nasscom.in, see:

Building AI Agents with Multimodal Models: From Perception to Action

Perception: The Foundation of Intelligent Agents
Perception is the first step in building AI agents. It involves capturing and interpreting data from multiple modalities, including text, images, audio, and structured inputs. A multimodal AI agent relies on this comprehensive understanding to make informed decisions.

For example, in healthcare, an AI agent may process electronic health records (text), MRI scans (vision), and patient audio consultations (speech) to build a complete understanding of a patient’s condition. Similarly, in retail, AI agents can analyze purchase histories (structured data), product images (vision), and customer reviews (text) to inform recommendations and marketing strategies.

Effective perception ensures that AI agents have contextual awareness, which is essential for accurate reasoning and appropriate action.


From 70-20-10 to 90-10: a new operating system for L&D in the age of AI? — from linkedin.com by Dr. Philippa Hardman

Also from Philippa, see:



Your New ChatGPT Guide — from wondertools.substack.com by Jeremy Caplan and The PyCoach
25 AI Tips & Tricks from a guest expert

  • ChatGPT can make you more productive or dumber. An MIT study found that while AI can significantly boost productivity, it may also weaken your critical thinking. Use it as an assistant, not a substitute for your brain.
  • If you’re a student, use study mode in ChatGPT, Gemini, or Claude. When this feature is enabled, the chatbots will guide you through problems rather than just giving full answers, so you’ll be doing the critical thinking.
  • ChatGPT and other chatbots can confidently make stuff up (aka AI hallucinations). If you suspect something isn’t right, double-check its answers.
  • NotebookLM hallucinates less than most AI tools, but it requires you to upload sources (PDFs, audio, video) and won’t answer questions beyond those materials. That said, it’s great for students and anyone with materials to upload.
  • Probably the most underrated AI feature is deep research. It automates web searching for you and returns a fully cited report with minimal hallucinations in five to 30 minutes. It’s available in ChatGPT, Perplexity, and Gemini, so give it a try.

 


 

 

Adobe Reinvents its Entire Creative Suite with AI Co-Pilots, Custom Models, and a New Open Platform — from theneuron.ai by Grant Harvey
Adobe just put an AI co-pilot in every one of its apps, letting you chat with Photoshop, train models on your own style, and generate entire videos with a single subscription that now includes top models from Google, Runway, and Pika.

Adobe came to play, y’all.

At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, all these new features aren’t about replacing creators; it’s about empowering them with superpowers they can actually control.

Adobe’s new plan is to put an AI co-pilot in every single app.

  • For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
  • For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
  • The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.

Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.

On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.

From DSC:
As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.


Adobe Max 2025: all the latest creative tools and AI announcements — from theverge.com by Jess Weatherbed

The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.


Also see Adobe Delivers New AI Innovations, Assistants and Models Across Creative Cloud to Empower Creative Professionals plus other items from the News section from Adobe


 

 

“OpenAI’s Atlas: the End of Online Learning—or Just the Beginning?” [Hardman] + other items re: AI in our LE’s

OpenAI’s Atlas: the End of Online Learning—or Just the Beginning? — from drphilippahardman.substack.com by Dr. Philippa Hardman

My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.

Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.

With this in mind, I put together 10 use cases for Atlas for you to try for yourself.

6. Retrieval Practice
What:
Pulling information from memory drives retention better than re-reading.
Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017).
Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.”
Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.




From DSC:
A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think that someone is going to want to be sure that the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is going to need to have access to up-to-date information and instructions.


 

At the most recent NVIDIA GTC conference, held in Washington, D.C. in October 2025, CEO Jensen Huang announced major developments emphasizing the use of AI to “reindustrialize America”. This included new partnerships, expansion of the Blackwell architecture, and advancements in AI factories for robotics and science. The spring 2024 GTC conference, meanwhile, was headlined by the launch of the Blackwell GPU and significant updates to the Omniverse and robotics platforms.

During the keynote in D.C., Jensen Huang focused on American AI leadership and announced several key initiatives.

  • Massive Blackwell GPU deployments: The company announced an expansion of its Blackwell GPU architecture, which first launched in March 2024. Reportedly, the company has already shipped 6 million Blackwell chips, with orders for 14 million more by the end of 2025.
  • AI supercomputers for science: In partnership with the Department of Energy and Oracle, NVIDIA is building new AI supercomputers at Argonne National Laboratory. The largest, named “Solstice,” will deploy 100,000 Blackwell GPUs.
  • 6G infrastructure: NVIDIA announced a partnership with Nokia to develop a U.S.-based, AI-native 6G technology stack.
  • AI factories for robotics: A new AI Factory Research Center in Virginia will use NVIDIA’s technology for building massive-scale data centers for AI.
  • Autonomous robotaxis: The company’s self-driving technology, already adopted by several carmakers, will be used by Uber for an autonomous fleet of 100,000 robotaxis starting in 2027.


Nvidia and Uber team up to develop network of self-driving cars — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) and Uber (UBER) on Tuesday revealed that they’re working to put together what they say will be the world’s largest network of Level 4-ready autonomous cars.

The duo will build out 100,000 vehicles beginning in 2027 using Nvidia’s Drive AGX Hyperion 10 platform and Drive AV software.


Nvidia stock hits all-time high, nears $5 trillion market cap after slew of updates at GTC event — from finance.yahoo.com by Daniel Howley

Nvidia (NVDA) stock on Tuesday rose 5% to close at a record high after the company announced a slew of product updates, partnerships, and investment initiatives at its GTC event in Washington, D.C., putting it on the doorstep of becoming the first company in history with a market value above $5 trillion.

The AI chip giant is approaching the threshold — settling at a market cap of $4.89 trillion on Tuesday — just months after becoming the first to close above $4 trillion in July.


 

Resilient by Design: The Future of America’s Community Colleges — from aacc.nche.edu

This report highlights several truths:

  • Leadership capacity must expand. Presidents and leaders are now expected to be fundraisers, policy navigators, cultural change agents, and data-informed strategists. Leadership can no longer be about a single individual—it must be a team sport. AACC is charged with helping you and your teams build these capacities through leadership academies, peer learning communities, and practical toolkits.
  • The strength of our network is our greatest asset. No college faces its challenges alone, because within our membership there are leaders who have already innovated, stumbled, and succeeded. Resilient by Design urges AACC to serve as the connector and amplifier of this collective wisdom, developing playbooks and scaling proven practices in areas from guided pathways to artificial intelligence to workforce partnerships.
  • Innovation in models and tools is urgent. Budgets must be strategic, business models must be reimagined, and ROI must be proven—not only to funders and policymakers, but to the students and communities we serve. Community colleges must claim their role as engines of economic vitality and social mobility, advancing both immediate workforce needs and long-term wealth-building for students.
  • Policy engagement must be deepened. Federal advocacy remains essential, but the daily realities of our institutions are shaped by state and regional policy. AACC will increasingly support members with state-level resources, legislative templates, and partnerships that equip you to advocate effectively in your unique contexts.
  • Employer engagement must become transformational. Students deserve not just degrees, but careers. The report challenges us to create career-connected colleges where employers co-design curricula, offer meaningful work-based learning, and help ensure graduates are not just prepared for today’s jobs but resilient for tomorrow’s.
 

Entrepreneurship: The New Core Curriculum — from gettingsmart.com by Tom Vander Ark

Key Points

  • Entrepreneurship education fosters resilience, creativity, and financial literacy—skills critical for success in an unpredictable, tech-driven world.
  • Programs like NFTE, Junior Achievement, and Uncharted Learning empower students by offering real-world entrepreneurial experiences and mentorship.

“Entrepreneurship is the job of the future.”

— Charles Fadel, Education for the Age of AI

This shift requires a radical re-evaluation of what we teach. Education leaders across the country are realizing that the most valuable skill we can impart is not accounting or marketing, but the entrepreneurial mindset. This mindset—built on resilience, creative problem-solving, comfort with ambiguity, and the ability to pivot—is essential in startups, as an intrapreuer in big organizations, or as a citizen working for the common good.

 

There is no God Tier video model — from downes.ca by Stephen Downes

From DSC:
Stephen has some solid reflections and asks some excellent questions in this posting, including:

The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?


A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido
Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.

The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator.Footnote1 This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.


Claude for Life Sciences — from anthropic.com

Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.

We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.

Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.

To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.


AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero
Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers

Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:

  • Drafting and tone calibration:
  • Language scaffolding:
  • Structure and templates: ..
  • Summarise and prioritise:
  • Accessibility by default:
  • Idea generation for pedagogy:
  • Translation and cultural mediation:

Beyond learning design: supporting pedagogical innovation in response to AI — from timeshighereducation.com by Charlotte von Essen
To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning


The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
A keynote at the AI and the Liberal Arts Symposium Conference

This past weekend, I had the honor to be the keynote speaker at a really fantstistic conferece, AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before with my interview with Lori Looney. It was an incredible conference, thoughtfully composed with a lot of things to chew on and think about.

It was also an entirely brand new talk in a slightly different context from many of my other talks and workshops. It was something I had to build entirely from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.

It was a real challenge and one I’ve been working on and off for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).

 

The Most Innovative Law Schools (2025) — from abovethelaw.com by Staci Zaretsky
Forget dusty casebooks — today’s leaders in legal education are using AI, design thinking, and real-world labs to reinvent how law is taught.

“[F]rom AI labs and interdisciplinary centers to data-driven reform and bold new approaches to design and client service,” according to National Jurist’s preLaw Magazine, these are the law schools that “exemplify innovation in action.”

  1. North Carolina Central University School of Law
  2. Suffolk University Law School
  3. UC Berkeley School of Law
  4. Nova Southeastern University Shepard Broad College of Law
  5. Northeastern University School of Law
  6. Maurice A. Deane School of Law at Hofstra University
  7. Seattle University School of Law
  8. Case Western Reserve University School of Law
  9. University of Miami School of Law
  10. Benjamin N. Cardozo School of Law at Yeshiva University
  11. Vanderbilt University Law School
  12. Southwestern Law School

Click here to read short summaries of why each school made this year’s list of top innovators.


Clio’s Metamorphosis: From Practice Management To A Comprehensive AI And Law Practice Provider — from abovethelaw.com by Stephen Embry
Clio is no longer a practice management company. It’s much more of a comprehensive provider of all needs of its customers big and small.

Newton delivered what may have been the most consequential keynote in the company’s history and one that signals a shift by Clio from a traditional practice management provider to a comprehensive platform that essentially does everything for the business and practice of law.

Clio also earlier this year acquired vLex, the heavy-duty AI legal research player. The acquisition is pending regulatory approval. It is the vLex acquisition that is powering the Clio transformation that Newton described in his keynote.

vLex has a huge amount of legal data in its wheelhouse to power sophisticated legal AI research. On top of this data, vLex developed Vincent, a powerful AI tool to work with this data and enable all sorts of actions and work.

This means a couple of things. First, by acquiring vLex, Clio can now offer its customers AI legal research tools. Clio customers will no longer have to go one place for its practice management needs and a second place for its substantive legal work, like research. It makes what Clio can provide much more comprehensive and all inclusive.


‘Adventures In Legal Tech’: How AI Is Changing Law Firms — from abovethelaw.com
Ernie the Attorney shares his legal tech takes.

Artificial intelligence will give solos and small firms “a huge advantage,” according to one legal tech consultant.

In this episode of “Adventures in Legal Tech,” host Jared Correia speaks with Ernie Svenson — aka “Ernie the Attorney” — about the psychology behind resistance to change, how law firms are positioning their AI use, the power of technology for business development, and more.


Legal software: how to look for and compare AI in legal technology — from legal.thomsonreuters.com by Chris O’Leary

Highlights

  • Legal ops experts can categorize legal AI platforms and software by the ability to streamline key tasks such as legal research, document processing or analysis, and drafting.
  • The trustworthiness and accuracy of AI hinge on the quality of its underlying data; solutions like CoCounsel Legal are grounded in authoritative, expert-verified content from Westlaw and Practical Law, unlike providers that may rely on siloed or less reliable databases.
  • When evaluating legal software, firms should use a framework that assesses critical factors such as integration with existing tech stacks, security, scalability, user adoption, and vendor reputation.

ASU Law appoints a director of AI and Legal Tech Studio, advancing its initiative to reimagine legal education — from law.asu.edu

The Sandra Day O’Connor College of Law at Arizona State University appointed Sean Harrington as director of the newly established AI and Legal Tech Studio, a key milestone in ASU Law’s bold initiative to reimagine legal education for the artificial intelligence era. ASU, ranked No. 1 in innovation for the 11th consecutive year, drives AI solutions that enhance teaching, enrich student training and facilitate digital transformation.


The American Legal Technology Awards Name 2025 Winners — from natlawreview.com by Tom Martin

The sixth annual American Legal Technology Awards were presented on Wednesday, October 15th, at Suffolk University Law School (Boston), recognizing winners across ten categories. There were 211 nominees who were evaluated by 27 judges.

The honorees on the night included:

 

2. Concern and excitement about AI — from pewresearch.org by Jacob Poushter,Moira Faganand Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.

Also relevant here:


AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions at the time. Both series are an answer to prayer for me and many others — as they are professionally-done. Both series match anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc.  Both are very well done.
.


An item re: Sora:


Other items re: Open AI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
Chat GPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas – A new browser from OpenAI. Atlas would feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything for real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden prompt for resumes don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

“Future of Professionals Report” analysis: Why AI will flip law firm economics — from thomsonreuters.com by Ragunath Ramanathan
AI forces a reinvention of law firm billing models, the market will reward those firms that price by outcome, guarantee efficiency, and are transparent. The question then isn’t whether to change — it’s whether firms will stand on the sidelines or lead

Key insights:

  • Efficiency and cost savings are expected  AI is significantly increasing efficiency and reducing costs in the legal industry, with each lawyer expecting to save 190 work-hours per year by leveraging AI, resulting in approximately $20 billion worth of work-savings in the US alone.
  • Challenges to the billable hour model — The traditional billable hour model is being challenged by AI advancements, as lawyers are now able to complete tasks more efficiently and quickly, leading some law firms to explore alternative pricing models that reflect the value delivered rather than the time spent.
  • Opportunities for smaller law firms — AI presents unique opportunities for smaller law firms to differentiate themselves and compete with larger firms, as AI solutions allow smaller firms to access advanced technology without significant investment and deliver innovative pricing models.

The legal industry is undergoing a significant transformation that’s being driven by the rapid adoption of AI — a shift that is poised to redefine traditional practices, particularly the billable hour model, a cornerstone of law firm operations.

Not surprisingly, AI is anticipated to have the biggest impact on the legal industry over the next five years, with 80% of law firm survey respondents to Thomson Reuters recently published 2025 Future of Professionals report saying that they expect AI to fundamentally alter how they conduct business, especially around how law firms price, staff, and deliver legal work to their clients.


 
© 2025 | Daniel Christian