New approaches to legal service delivery are propelling us into the future. Don’t get left behind! AI and automations are making alternative service delivery easier and more efficient than ever. Dennis & Tom welcome Mathew Kerbis to learn more about his expertise in subscription-based legal services.
What a strong business case includes
A credible business case has three core elements: a clear problem statement, a defined solution, and a robust analysis of expected impact. It should also demonstrate that legal has done its homework and thought beyond implementation.
Tech Layoffs 2025: Why AI is Behind the Rising Job Cuts — from finalroundai.com by Kaustubh Saini, Jaya Muvania, and Kaivan Dave; via George Siemens
507 tech workers lose their jobs to AI every day in 2025. Complete breakdown of 94,000 job losses across Microsoft, Tesla, IBM, and Meta – plus which positions are next.
Amid all the talk about the state of our economy, little noticed and even less discussed was June’s employment data. It showed that the unemployment rate for recent college graduates stood at 5.8%, topping the national level for the first and only time in its 45-year historical record.
It’s an alarming number that needs to be considered in the context of a recent warning from Dario Amodei, CEO of AI juggernaut Anthropic, who predicted artificial intelligence could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years.
The upshot: our college graduates’ woes could be just the tip of the spear.
But as I thought about it, it just didn’t feel right. Replying to people sharing real gratitude with a copy-paste message seemed like a terribly inauthentic thing to do. I realized that when you optimize the most human parts of your business, you risk removing the very reason people connect with you in the first place.
How Do You Teach Computer Science in the A.I. Era? — from nytimes.com by Steve Lohr; with thanks to Ryan Craig for this resource
Universities across the country are scrambling to understand the implications of generative A.I.’s transformation of technology.
The future of computer science education, Dr. Maher said, is likely to focus less on coding and more on computational thinking and A.I. literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions.
A.I. literacy is an understanding — at varying depths for students at different levels — of how A.I. works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.
At Carnegie Mellon, as faculty members prepare for their gathering, Dr. Cortina said his own view was that the coursework should include instruction in the traditional basics of computing and A.I. principles, followed by plenty of hands-on experience designing software using the new tools.
“We think that’s where it’s going,” he said. “But do we need a more profound change in the curriculum?”
In a landmark deal that will undoubtedly reshape the legal tech landscape, law practice management company Clio has signed a definitive agreement to acquire the AI and legal research company vLex for $1 billion in cash and stock.
The companies say that the acquisition will “establish a new category of intelligent legal technology at the intersection of the business and practice of law, empowering legal professionals to seamlessly manage, research, and execute legal work within a unified system.”
Yoodli is an AI tool designed to help users improve their public speaking skills. It analyzes your speech in real-time or after a recording and gives you feedback on things like:
Filler words (“um,” “like,” “you know”)
Pacing (Are you sprinting or sedating your audience?)
Word choice and sentence complexity
Eye contact and body language (with video)
And yes, even your “uhhh” to actual word ratio
Yoodli gives you a transcript and a confidence score, plus suggestions that range from helpful to brutally honest. It’s basically Simon Cowell with AI ethics and a smiley face interface.
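Yoodli’s internals aren’t public, so purely as illustration, here is a rough sketch of the simplest piece of that feedback: a filler-to-word ratio computed from a transcript. The filler list, thresholds, and example are my own assumptions, not Yoodli’s actual logic.

```python
# Hypothetical sketch of a filler-word ratio from a speech transcript.
# The filler list below is illustrative only; a real tool would also need
# context to tell filler "like"/"so" apart from legitimate uses.
import re

FILLERS = {"um", "uh", "uhh", "uhhh", "like", "so", "basically", "you know"}

def filler_report(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    text = " ".join(words)
    phrase_hits = sum(text.count(f) for f in FILLERS if " " in f)   # e.g. "you know"
    single_hits = sum(1 for w in words if w in FILLERS)             # e.g. "um", "like"
    fillers = phrase_hits + single_hits
    return {
        "words": len(words),
        "fillers": fillers,
        "ratio": round(fillers / max(len(words), 1), 3),
    }

print(filler_report("So, um, I think, like, the results were, you know, pretty good"))
# -> {'words': 12, 'fillers': 4, 'ratio': 0.333}
```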
[What’s] going on with AI and education? — from theneuron.ai by Grant Harvey
With students and teachers alike using AI, schools are facing an “assessment crisis” where the line between tool and cheating has blurred, forcing a shift away from a broken knowledge economy toward a new focus on building human judgment through strategic struggle.
What to do about it: The future belongs to the “judgment economy,” where knowledge is commoditized but taste, agency, and learning velocity become the new human moats. Use the “Struggle-First” principle: wrestle with problems for 20-30 minutes before turning to AI, then use AI as a sparring partner (not a ghostwriter) to deepen understanding. The goal isn’t to avoid AI, but to strategically choose when to embrace “desirable difficulties” that build genuine expertise versus when to leverage AI for efficiency.
… The Alpha-School Program in brief:
Students complete core academics in just 2 hours using AI tutors, freeing up 4+ hours for life skills, passion projects, and real-world experiences.
The school claims students learn at least 2x faster than their peers in traditional school.
The top 20% of students show 6.5x growth. Classes score in the top 1-2% nationally across the board.
Claims are based on NWEA’s Measures of Academic Progress (MAP) assessments… with data only available to the school. Hmm…
Austen Allred shared a story about the school, which put it on our radar.
In the latest installment of Gallup and the Walton Family Foundation’s research on education, K-12 teachers reveal how AI tools are transforming their workloads, instructional quality and classroom optimism. The report finds that 60% of teachers used an AI tool during the 2024–25 school year. Weekly AI users report reclaiming nearly six hours per week — equivalent to six weeks per year — which they reinvest in more personalized instruction, deeper student feedback and better parent communication.
Despite this emerging “AI dividend,” adoption is uneven: 40% of teachers aren’t using AI at all, and only 19% report their school has a formal AI policy. Teachers with access to policies and support save significantly more time.
Educators also say AI improves their work. Most report higher-quality lesson plans, assessments and student feedback. And teachers who regularly use AI are more optimistic about its benefits for student engagement and accessibility — mirroring themes from the Voices of Gen Z: How American Youth View and Use Artificial Intelligence report, which found students hesitant but curious about AI’s classroom role. As AI tools grow more embedded in education, both teachers and students will need the training and support to use them effectively.
What Is Amira Learning?
Amira Learning’s system is built upon research led by Jack Mostow, a professor at Carnegie Mellon who helped pioneer AI literacy education. Amira uses Claude AI to power its AI features, but these features differ from those of many other AI tools on the market. Instead of focusing on chat and generative response, Amira’s key feature is its advanced speech recognition and natural language processing, which allow the app to “hear” when a student is struggling and tailor suggestions to that student’s particular mistakes.
Though it’s not meant to replace a teacher, Amira provides real-time feedback and also helps teachers pinpoint where a student is struggling. For these reasons, Amira Learning is a favorite of education scientists and advocates for science of reading-based literacy instruction. The tool is currently used by more than 4 million students across the U.S. and worldwide.
Who is leading the pack? Who is setting themselves apart here in the mid-year?
Are they an LMS? LMS/LXP? Talent Development System? Mentoring? Learning Platform?
Something else?
Are they solely customer training/education, mentoring, or coaching? Are they focused only on employees? Are they an amalgamation of all or some?
Well, they cut across the board – hence, they slide under the “Learning Systems” umbrella, which is under the bigger umbrella term – “Learning Technology.”
…
Categories: L&D-specific, Combo (L&D and Training, think internal/external audiences), and Customer Training/Education (i.e., customer education; some vendors use the two terms interchangeably).
Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times.
Due to AI, the traditional hiring process has become overwhelmed with automated noise. It’s the résumé equivalent of AI slop—call it “hiring slop,” perhaps—that currently haunts social media and the web with sensational pictures and misleading information. The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.
The Times illustrates the scale of the problem with the story of an HR consultant named Katie Tanner, who was so inundated with over 1,200 applications for a single remote role that she had to remove the post entirely and was still sorting through the applications three months later.
Job growth is slowing — and for many professionals, that means longer job hunts and more competition. As a result, more job seekers are turning to AI to streamline their search and stand out.
From optimizing resumes to preparing for interviews, AI tools are becoming a key part of today’s job hunt. Recruiters say it’s getting harder to sift through application materials, identify what is AI-generated, and decipher which applicants are actually qualified — but they also say they prefer candidates with AI skills.
The result? Job seekers are growing their familiarity with AI faster than their non-job-seeking counterparts and it’s shifting how they view the workplace. According to LinkedIn’s latest Workforce Confidence survey, over half of active job seekers (52%) believe AI will eventually take on some of the mundane, manual tasks that they’re currently focused on, compared to 46% of others not actively job seeking.
OpenAI cautioned Wednesday that upcoming models will head into a higher level of risk when it comes to the creation of biological weapons — especially by those who don’t really understand what they’re doing.
Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company’s preparedness framework.
As a result, the company said in a blog post, it is stepping up the testing of such models, as well as including fresh precautions designed to keep them from aiding in the creation of biological weapons.
OpenAI didn’t put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios “We are expecting some of the successors of our o3 (reasoning model) to hit that level.”
While GenAI can create documents or answer questions, agentic AI takes intelligence a step further by planning how to get multi-step work done, including tasks such as consuming information, applying logic, crafting arguments, and then completing them. This leaves legal teams more time for nuanced decision-making, creative strategy, and relationship-building with clients—work that machines can’t do.
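To make the GenAI-versus-agentic distinction concrete, here is a minimal, hypothetical sketch of that planning loop: break a matter into ordered sub-tasks, work through each, and return the assembled result. The step names are illustrative placeholders of mine, not any vendor’s actual pipeline.

```python
# Minimal, hypothetical sketch of "agentic" multi-step work, as opposed to a
# single generative call. The steps are illustrative placeholders only.

def plan(task: str) -> list[str]:
    # An agent first breaks the matter into ordered sub-tasks.
    return ["gather relevant documents", "extract key facts",
            "apply the governing rule", "draft the argument", "cite-check"]

def execute(step: str, context: dict) -> dict:
    # Each step would call tools or an LLM; here we just record that it ran.
    context.setdefault("completed", []).append(step)
    return context

def run_agent(task: str) -> dict:
    context = {"task": task}
    for step in plan(task):   # plan first, then work the plan to completion
        context = execute(step, context)
    return context

print(run_agent("Respond to motion to dismiss")["completed"])
```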
What we’re witnessing is a profession in transition where specific tasks are being augmented or automated while new skills and roles emerge.
The data tells an interesting story: approximately 79% of law firms have integrated AI tools into their workflows, yet only a fraction have truly transformed their operations. Most implementations focus on pattern-recognition tasks such as document review, legal research, and contract analysis. These implementations aren’t replacing lawyers; they’re redirecting attention to higher-value work.
This technological shift doesn’t happen in isolation. It’s occurring amid client pressure for efficiency, competition from alternative providers, and the expectations of a new generation of lawyers who have never known a world without AI assistance.
Lawyers using the Harvey artificial intelligence platform will soon be able to tap into LexisNexis’ vast legal research capabilities.
Thanks to a new partnership announced Wednesday, Harvey users will be able to ask legal questions and receive fast, citation-backed answers powered by LexisNexis case law, statutes and Shepard’s Citations, streamlining everything from basic research to complex motions. According to a press release, generated responses to user queries will be grounded in LexisNexis’ proprietary knowledge graphs and citation tools—making them more trustworthy for use in court or client work.
10 Legal Tech Companies to Know — from builtin.com
These companies are using AI, automation and analytics to transform how legal work gets done.
Harvey AI, a startup that provides automation for legal work, has raised $300 million in Series E funding at a $5 billion valuation, the company told Fortune. The round was co-led by Kleiner Perkins and Coatue, with participation from existing investors, including Conviction, Elad Gil, OpenAI Startup Fund, and Sequoia.
The billable time revolution — from jordanfurlong.substack.com by Jordan Furlong
Gen AI will bring an end to the era when lawyers’ value hinged on performing billable work. Grab the coming opportunity to re-prioritize your daily activities and redefine your professional purpose.
Because of Generative AI, lawyers will perform fewer “billable” tasks in future; but why is that a bad thing? Why not devote that newly freed-up time to running, upgrading, and growing your law practice? Because this is what you do now: You run a legal business. You deliver good outcomes, good experiences, and good relationships to clients. Humans do some of the work and machines do some of the work, and the distinction that matters is not billable/non-billable; it’s which type of work is best suited to which type of performer.
Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight, but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself.
Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.
First, the easy stuff.
Which AI to Use For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.
This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.
One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.
It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.
…
The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:
What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
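QuoteWeaver itself runs on PlayLab and isn’t open source, so the following is only a rough sketch, under my own assumptions, of the interaction pattern described above: one quote in, the Socratic prompts asked one at a time, with a crude length check standing in for the bot’s judgment about when to rephrase and nudge a layer deeper.

```python
# Rough, hypothetical sketch of a QuoteWeaver-style Socratic flow.
# The "too short, go deeper" rule is a crude stand-in for the real bot's
# judgment about when to rephrase and push for more reflection.

PROMPTS = [
    "What made this quote stand out to you?",
    "How would you explain it in your own words?",
    "What assumptions or values does the author seem to hold?",
    "How does this quote deepen your understanding of your topic?",
]

def quote_workshop(quote: str) -> list[tuple[str, str]]:
    print(f'Let\'s slow down with your quote:\n  "{quote}"\n')
    conversation = []
    for prompt in PROMPTS:
        answer = input(prompt + "\n> ").strip()
        while len(answer.split()) < 15:   # thin reply: nudge a layer deeper
            answer += " " + input("Say more: what specifically leads you there?\n> ").strip()
        conversation.append((prompt, answer))
    return conversation
```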
On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.
Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.
Of course, that would be a mistake.
We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.
By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.
Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.
The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.
The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
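The MIT code isn’t reproduced here, but the loop the researchers describe can be sketched roughly as follows. The class and function names are my own placeholders, not SEAL’s actual API, and the real system fine-tunes an LLM’s weights (using reinforcement learning to keep only the self-edits that improve downstream performance) rather than appending notes to a list.

```python
# A minimal, hypothetical sketch of a SEAL-style self-adaptation step.
# The toy "model" below is a placeholder, not MIT's implementation.
from dataclasses import dataclass, field

@dataclass
class ToyModel:
    notes: list = field(default_factory=list)   # stand-in for model weights

    def generate_self_edits(self, new_info: str) -> list[str]:
        # Real SEAL: the LLM writes its own synthetic training examples
        # (restatements, implications, QA pairs) about the new information.
        return [f"fact: {new_info}", f"implication of: {new_info}"]

    def finetune(self, synthetic_data: list[str]) -> None:
        # Real SEAL: a lightweight update to the model's own parameters.
        self.notes.extend(synthetic_data)

    def score(self, probe: str) -> float:
        # Real SEAL: downstream evaluation decides whether the self-edit
        # helped (this is the reward signal for reinforcement learning).
        return float(any(probe.lower() in n.lower() for n in self.notes))

def seal_step(model: ToyModel, new_info: str, probe: str) -> float:
    edits = model.generate_self_edits(new_info)  # 1. write your own training data
    model.finetune(edits)                        # 2. update yourself on it
    return model.score(probe)                    # 3. check it actually helped

model = ToyModel()
print(seal_step(model, "the user prefers concise answers", "concise"))  # 1.0
```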
Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing
Highlights:
Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
A pioneer in legal technology has predicted the billable hour model cannot survive the transition into the use of artificial intelligence.
Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.
Newton said the adoption of AI should be welcomed and embraced by the legal profession but that lawyers will need an entrepreneurial mindset to make the most of its benefits.
Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.
‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’
The generative AI legal startup Harvey has entered into a strategic alliance with LexisNexis Legal & Professional by which it will integrate LexisNexis’ gen AI technology, primary law content, and Shepard’s Citations within the Harvey platform and jointly develop advanced legal workflows.
As a result of the partnership, Harvey’s customers working within its platform will be able to ask questions of LexisNexis Protégé, the AI legal assistant released in January, and receive AI-generated answers grounded in the LexisNexis collection of U.S. case law and statutes and validated through Shepard’s Citations, the companies said.
It’s not just about redesigning public education—it’s about rethinking how, where and with whom learning happens. Communities across the United States are shaping learner-centered ecosystems and gathering insights along the way.
What does it take to build a learner-centered ecosystem? A shared vision. Distributed leadership. Place-based experiences. Repurposed resources. And more. This piece unpacks 10 real-world insights from pilots in action.
We believe the path forward is through the cultivation of learner-centered ecosystems — adaptive, networked structures that offer a transformed way of organizing, supporting, and credentialing community-wide learning. These ecosystems break down barriers between schools, communities, and industries, creating flexible, real-world learning experiences that tap into the full range of opportunities a community has to offer.
Last year, we announced our Learner-Centered Ecosystem Lab, a collaborative effort to create a community of practice consisting of twelve diverse sites across the country — from the streets of Brooklyn to the mountains of Ojai — that are demonstrating or piloting ecosystemic approaches. Since then, we’ve been gathering together, learning from one another, and facing the challenges and opportunities of trying to transform public education. And while there is still much more work to be done, we’ve begun to observe a deeper pattern language — one that aligns with our ten-point Ecosystem Readiness Framework, and one that, we hope, can help all communities start to think more practically and creatively about how to transform their own systems of learning.
So while it’s still early, we suspect that the way to establish a healthy learner-centered ecosystem is by paying close attention to the following ten conditions:
Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.
AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
ChatGPT reached 100 million users in only two months and 800 million within 17 months, vis-à-vis the 10 years Netflix took to hit 100 million, Instagram’s 2.5 years, and TikTok’s nine months.
ChatGPT hit 365 billion annual searches within two years (by 2024), a milestone Google took 11 years to reach (2009), making ChatGPT 5.5x faster to get there.
Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.
The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.
The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.
The details:
Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house marketing expertise.
Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.
Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.