Beyond ChatGPT: Why In-House Counsel Need Purpose-Built AI (Cecilia Ziniti, CEO – GC AI) — from tlpodcast.com

This episode features a conversation with Cecilia Ziniti, Co-Founder and CEO of GC AI. Cecilia traces her career from the early days of the internet to founding an AI-driven legal platform for in-house counsel.

Cecilia shares her journey, starting as a paralegal at Yahoo in the early 2000s, working on nascent legal issues related to the internet. She discusses her time at Morrison & Foerster and her role at Amazon, where she was an early member of the Alexa team, gaining deep insight into AI’s potential before the rise of modern large language models (LLMs).

The core discussion centers on the creation of GC AI, a legal AI tool specifically designed for in-house counsel. Cecilia explains why general LLMs like ChatGPT are insufficient for professional legal work—lacking proper citation, context, and security/privilege protections. She highlights the app’s features, including enhanced document analysis (RAG implementation), a Word Add-in, and workflow-based playbooks to deliver accurate, client-forward legal analysis. The episode also touches on the current state of legal tech, the growing trend of bringing legal work in-house, and the potential for AI to shift the dynamics of the billable hour.
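The "RAG implementation" mentioned above follows a well-known pattern: documents are chunked and embedded, the query retrieves the most similar chunks, and only those chunks are passed to the model as citable context. The sketch below is a generic illustration of that pattern, not GC AI's actual code; the bag-of-words similarity is a toy stand-in for a real embedding model.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: bag-of-words counts. A real system would call a
    # neural embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank document chunks by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    # Only the retrieved excerpts reach the model, numbered for citation.
    context = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return f"Answer using only these excerpts, with citations:\n{context}\n\nQ: {query}"

chunks = [
    "The indemnification clause survives termination of this agreement.",
    "Payment is due within thirty days of invoice receipt.",
    "Either party may terminate with ninety days written notice.",
]
top = retrieve("What happens to indemnification after termination?", chunks)
print(build_prompt("What happens to indemnification after termination?", top))
```

Grounding answers in retrieved excerpts is what enables the proper citations the episode says general chatbots lack.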

 

Agents, robots, and us: Skill partnerships in the age of AI — from mckinsey.com by Lareina Yee, Anu Madgavkar, Sven Smit, Alexis Krivkovich, Michael Chui, María Jesús Ramírez, and Diego Castresana
AI is expanding the productivity frontier. Realizing its benefits requires new skills and rethinking how people work together with intelligent machines.

At a glance

  • Work in the future will be a partnership between people, agents, and robots—all powered by AI. …
  • Most human skills will endure, though they will be applied differently. …
  • Our new Skill Change Index shows which skills will be most and least exposed to automation in the next five years….
  • Demand for AI fluency—the ability to use and manage AI tools—has grown sevenfold in two years…
  • By 2030, about $2.9 trillion of economic value could be unlocked in the United States…



State of AI: December 2025 newsletter — from nathanbenaich.substack.com by Nathan Benaich
What you’ve got to know in AI from the last 4 weeks.

Welcome to the latest issue of the State of AI, an editorialized newsletter that covers the key developments in AI policy, research, industry, and start-ups over the last month.


 

4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design for Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out: along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools does and how it can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: Google is working on a new vision for textbooks that can be easily differentiated, building on the beautiful success of NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free on Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features that remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically, like memory, to provide valuable context for relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC:
This should be important as we look at learning-related applications for AI.
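Mechanically, assistant "memory" of this kind boils down to extracting durable facts from past conversations and prepending them as context to future prompts. The toy sketch below shows only that shape: the regex-based extraction is a crude stand-in for the LLM-based synthesis Perplexity describes, and none of this reflects their actual implementation.

```python
import re

class MemoryStore:
    """Toy illustration of assistant 'memory': harvest simple preference
    statements from past messages and surface them as context later."""

    def __init__(self):
        self.facts = []

    def observe(self, message):
        # Naive pattern match standing in for LLM-based fact synthesis.
        m = re.search(r"\bI (?:prefer|like|work in) (.+?)[.!]", message)
        if m:
            fact = m.group(0).rstrip(".!")
            if fact not in self.facts:
                self.facts.append(fact)

    def contextualize(self, prompt):
        # Prepend remembered facts so later answers can use them.
        if not self.facts:
            return prompt
        memory = "; ".join(self.facts)
        return f"User context: {memory}.\n\n{prompt}"

mem = MemoryStore()
mem.observe("I prefer short bullet-point answers. Thanks!")
mem.observe("By the way, I work in higher education.")
print(mem.contextualize("Summarize this week's AI news."))
```

For learning applications, the same loop could carry a student's level, goals, and past misconceptions across sessions.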


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 

Law Firm 2.0: A Trillion-Dollar Market Begins To Move — from abovethelaw.com by Ken Crutchfield
The test cases for Law Firm 2.0 are arriving faster than many expected.

A move to separate legal advice from other legal services that don’t require advice is a big shift that would ripple through established firms and also test regulatory boundaries.

The LegalTech Fund (TLTF) sees a $1 trillion opportunity to reinvent legal services through the convergence of technology, regulatory changes, and innovation. TLTF calls this movement Law Firm 2.0, and the fund believes a reinvention will pave the way for entirely new, tech-enabled models of legal service delivery.


From Paper to Platform: How LegalTech Is Revolutionizing the Practice of Law — from markets.financialcontent.com by AB Newswire

For decades, practicing law has been a business about paper — contracts, case files, court documents, and floor-to-ceiling piles of precedent. But as technology transforms all aspects of modern-day business, law firms and in-house legal teams are transforming along with it. The development of LegalTech has revolutionized what was previously a paper-driven, manpower-intensive profession into a data-driven digital web of collaboration and automation.

Conclusion: Building the Future of Law
The practice of law has always been about accuracy, precedent, and human beings. Technology doesn’t alter that — it magnifies it. The shift from paper to platform is about liberating lawyers from back-office tasks so they can concentrate on strategy, advocacy, and creativity.

By coupling intelligent automation with moral obligation, today’s firms are positioning the legal profession to become a more intelligent, responsive industry. LegalTech isn’t about automation; it’s about empowering attorneys to practice at the speed of today’s business.


What Legal Can Learn from Other Industries’ AI Transformations — from jdsupra.com

Artificial intelligence has already redefined how industries like finance, healthcare, and supply chain operate — transforming once-manual processes into predictive, data-driven engines of efficiency.

Yet the legal industry, while increasingly open to innovation, still lags behind its peers in adopting automation at scale. As corporate legal departments face mounting pressure to do more with less, they have an opportunity to learn from how other sectors successfully integrated AI into their operations.

The message is clear: AI transformation doesn’t just change workflows — it changes what’s possible.


7 Legal Tech Trends To Watch In 2026 — from lexology.com


Small Language Models Are Changing Legal Tech: What That Means for Lawyers and Law Firms — from community.nasscom.in

The legal profession is at a turning point. Artificial intelligence tools are moving from novelty to everyday utility, and small language models, or SLMs, are a major reason why. For law firms and in-house legal teams that are balancing client confidentiality, tight budgets, and the need to move faster, SLMs offer a practical, high impact way to bring legal AI into routine practice. This article explains what SLMs are, why they matter to lawyers, where they fit in legal workflows, and how to adopt them responsibly.
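One reason SLMs fit legal workflows is the routing pattern they enable: confidential material is triaged on infrastructure the firm controls, and only low-sensitivity work escalates to a larger hosted model. The sketch below shows only the shape of that routing; the keyword check is a stand-in for a real small language model, and the marker list is illustrative, not a recommended policy.

```python
# Illustrative sensitivity markers -- a real deployment would use an
# actual on-premises SLM (and firm policy), not a keyword list.
SENSITIVE_MARKERS = ("privileged", "confidential", "attorney-client")

def is_sensitive(text):
    lowered = text.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)

def route(document):
    # Confidential work stays on local infrastructure; everything else
    # may go to a larger hosted model with broader capability.
    if is_sensitive(document):
        return "local-slm"
    return "hosted-llm"

docs = [
    "PRIVILEGED AND CONFIDENTIAL: draft settlement memo",
    "Public press release announcing the new office",
]
for doc in docs:
    print(doc[:40], "->", route(doc))
```

The point is architectural: an SLM small enough to run locally lets the confidentiality boundary and the AI boundary coincide.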


Legal AI startup draws new $50 million Blackstone investment, opens law firm — from reuters.com by Sara Merken

NEW YORK, Nov 20 (Reuters) – Asset manager Blackstone (BX.N) has invested $50 million in Norm Ai, a legal and compliance technology startup that also said on Thursday that it is launching an independent law firm that will offer “AI-native legal services.”

Lawyers at the new New York-based firm, Norm Law LLP, will use Norm Ai’s artificial intelligence technology to do legal work for Blackstone and other financial services clients, said Norm Ai founder and CEO John Nay.


Law School Toolbox Podcast Episode 531: What Law Students Should Know About New Legal Tech (w/Gabe Teninbaum) — from jdsupra.com

Today, Alison and Gabe Teninbaum — law professor and creator of SpacedRepetition.com — discuss how technology is rapidly transforming the legal profession, emphasizing the importance for law students and lawyers to develop technological competence and adapt to new tools and roles in the legal profession.  


New York is the San Francisco of legal tech — from businessinsider.com by Melia Russell

  • Legal tech ❤️ NYC.
  • To win the market, startups say they need to be where the law firms and corporate legal chiefs are.
  • Legora and Harvey are expanding their footprints in New York, as Clio hunts for office space.

Legal Tech Startups Expand in New York to Access Law Firms — from indexbox.io

Several legal technology startups are expanding their physical presence in New York City, according to a report from Legal tech NYC. The companies state that to win market share, they need to be located where major law firms and corporate legal departments are based.


Linklaters unveils 20-strong ‘AI lawyer’ team — from legalcheek.com by Legal Cheek

Magic Circle giant Linklaters has launched a team of 20 ‘AI Lawyers’ (yes, that is their actual job title) as it ramps up its commitment to artificial intelligence across its global offices.

The new cohort is a mix of external tech specialists and Linklaters lawyers who have decided to boost their legal expertise with advanced AI know-how. They will be placed into practice groups around the world to help build prompts, workflows and other tech-driven processes that the firm hopes will sharpen client delivery.


I went to a closed-door retreat for top lawyers. The message was clear: Don’t fear AI — use it. — from businessinsider.com by Melia Russell

  • AI is making its mark on law firms and corporate legal teams.
  • Clients expect measurable savings, and firms are spending real money to deliver them.
  • At TLTF Summit, Big Law leaders and legal-tech builders explored the future of the industry.

From Cost Center to Command Center: The Future of Litigation is Being Built In-House — from law.stanford.edu by Adam Rouse,  Tamra Moore, Renee Meisel, Kassi Burns, & Olga Mack

Litigation isn’t going away, but who leads, drafts, and drives it is rapidly changing. Empirical research shows corporate legal departments have steadily expanded litigation management functions over the past decade. (Annual Litigation Trends Survey, Norton Rose Fulbright (2025)).

For decades, litigation lived squarely in the law firm domain. (Wald, Eli, Getting in and Out of the House: Career Trajectories of In-House Lawyers, Fordham Law Review, Vol. 88, No. 1765, 2020 (June 22, 2020)). Corporate legal departments played a responsive role: approving strategies, reviewing documents, and paying hourly rates. But through dozens of recent conversations with in-house legal leaders, legal operations professionals, and litigation specialists, a new reality is emerging. One in which in-house counsel increasingly owns the first draft, systematizes their litigation approach, and reshapes how outside counsel fits into the picture.

AI, analytics, exemplar libraries, playbooks, and modular document builders are not simply tools. They are catalysts for a structural shift. Litigation is becoming modular, data-informed, and orchestrated by in-house teams who increasingly want more than cost control. They want consistency, clarity, and leverage. This piece outlines five major trends from our qualitative research, predictions about their impact on the practice of law, and research questions worth considering to further understand these trends. A model is then introduced for understanding how litigation workflows and outside counsel relationships will evolve in the coming years.

 

Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage
As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.

Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.


DEEP DIVE: The AI user interface of the future = Voice — from theneurondaily.com by Grant Harvey
PLUS: Gemini 3.0 and Microsoft’s new voice features

Here’s the thing: voice is finally good enough to replace typing now. And I mean actually good enough, not “Siri, play Despacito” good enough.

To paraphrase Andrej Karpathy’s famous quote that “the hottest new programming language is English”: in this case, the hottest new user interface is talking.

The Great Convergence: Why Voice Is Having Its Moment
Three massive shifts just collided to make voice interfaces inevitable.

    1. First, speech recognition stopped being terrible. …
    2. Second, our devices got ears everywhere. …
    3. Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …
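Under the hood, the voice interfaces described here share a three-stage loop: speech-to-text, a language-model turn, then text-to-speech. A minimal sketch of that pipeline, with stubs standing in for the real ASR, LLM, and TTS services:

```python
def transcribe(audio: bytes) -> str:
    # Stub: a real system would run a speech-recognition model here.
    return audio.decode("utf-8")

def generate_reply(text: str) -> str:
    # Stub: a real system would call an LLM here.
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    # Stub: a real system would run a text-to-speech engine here.
    return text.encode("utf-8")

def voice_turn(audio: bytes) -> bytes:
    # One conversational turn: audio in, audio out.
    return synthesize(generate_reply(transcribe(audio)))

reply_audio = voice_turn(b"what's on my calendar today?")
print(reply_audio.decode("utf-8"))
```

The three shifts the article lists map onto the three stages: better ASR improves `transcribe`, ubiquitous microphones feed it, and LLMs make `generate_reply` worth talking to.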

Introducing group chats in ChatGPT — from openai.com
Collaborate with others, and ChatGPT, in the same conversation.

Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.

Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.

Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.




 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

Today I’m sharing my custom instructions and how I use them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing and how it’s going, and to invite readers to share in the comments some of the custom instructions they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about custom instructions (even some of the seasoned GenAI folks), or how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.

 

Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, who wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, and vulnerability scanning at a more elevated level. Companies that don’t adapt will be sitting ducks, overwhelmed by similar tricks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…
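One concrete defensive signal follows directly from the report's detail about "physically impossible request rates": flag sessions whose sustained pace no human operator could produce. A minimal sketch, with an illustrative (not industry-standard) threshold:

```python
# Illustrative threshold: assume a human cannot sustain more than one
# request every half second over a long window. Real detectors would
# tune this per workload and combine it with other signals.
HUMAN_MIN_INTERVAL = 0.5  # seconds

def looks_automated(timestamps, window=20):
    """Flag a session if any `window` consecutive requests average a
    faster pace than HUMAN_MIN_INTERVAL between requests."""
    if len(timestamps) < window:
        return False
    ts = sorted(timestamps)
    for i in range(len(ts) - window + 1):
        span = ts[i + window - 1] - ts[i]
        if span / (window - 1) < HUMAN_MIN_INTERVAL:
            return True
    return False

human = [i * 2.0 for i in range(30)]  # one request every 2 seconds
agent = [i * 0.1 for i in range(30)]  # ten requests per second
print(looks_automated(human), looks_automated(agent))
```

A rate heuristic alone is easy to evade by throttling, which is why the articles above argue for AI-driven defense rather than static rules.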

 

Free Music Discovery Tools — from wondertools.substack.com by Jeremy Caplan and Chris Dalla Riva
Travel through time and around the world with sound

I love apps like Metronaut and Tomplay, which let me carry a collection of classical (sheet) music on my phone. They also provide piano or orchestral accompaniment for any violin piece I want to play.

Today’s post shares 10 other recommended tools for music lovers from my fellow writer and friend, Chris Dalla Riva, who writes Can’t Get Much Higher, a popular Substack focused on the intersection of music and data. I invited Chris to share with you his favorite resources for discovering, learning, and creating music.

Sections include:

  • Learn about Music
  • Discover New Music
  • Learn an Instrument
  • Tools for Artists
 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from washingtonpost.com by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived; it’s real and the use-cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


BIG unveils Suzhou Museum of Contemporary Art topped with ribbon-like roof — from dezeen.com by Christina Yao

Also from Dezeen:

MVRDV designs giant sphere for sports arena in Tirana — from dezeen.com by Starr Charles



 

The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including the use of audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, the rule has been treated as little more than a buzzword, delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.

 

Six Transformative Technology Trends Impacting the Legal Profession — from americanbar.org

Summary

  • Law firm leaders should evaluate their legal technology and decide whether those tools are truly supporting legal work or creating a disconnect between human and AI contributions.
  • 75% of firms now rely on cloud platforms for everything from document storage to client collaboration.
  • The rise of virtual law firms and remote work is reshaping the profession’s culture. Hybrid and remote-first models, supported by cloud and collaboration tools, are growing.

Are we truly innovating, or just rearranging the furniture? That’s the question every law firm leader should be asking as the legal technology landscape shifts beneath our feet. There are many different thoughts and opinions on how the legal technology landscape will evolve in the coming years, particularly regarding the pace of generative AI-driven changes and the magnitude of these changes.

To try to answer the question posed above, we looked at six recently published technology trends reports from influential entities in the legal technology arena: the American Bar Association, Clio, Wolters Kluwer, LexisNexis, Thomson Reuters, and NetDocuments.

When we compared these reports, we found them to be remarkably consistent. While the level of detail on some topics varied across the reports, they identified six trends that are reshaping the very core of legal practice. These trends are summarized in the following paragraphs.

  1. Generative AI and AI-Assisted Drafting …
  2. Cloud-Based Practice Management…
  3. Cybersecurity and Data Privacy…
  4. Flat Fee and Alternative Billing Models…
  5. Legal Analytics and Data-Driven Decision Making…
  6. Virtual Law Firms and Remote Work…
 

KPMG wants junior consultants to ditch the grunt work and hand it over to teams of AI agents — from businessinsider.com by Polly Thompson

The Big Four consulting and accounting firm is training its junior consultants to manage teams of AI agents — digital assistants capable of completing tasks without human input.

“We want juniors to become managers of agents,” Niale Cleobury, KPMG’s global AI workforce lead, told Business Insider in an interview.

KPMG plans to give new consulting recruits access to a catalog of AI agents capable of creating presentation slides, analyzing data, and conducting in-depth research, Cleobury said.

The goal is for these agents to perform much of the analytical and administrative work once assigned to junior consultants, freeing those consultants to become more involved in strategic decisions.


From DSC:
For junior staff members to provide quality assurance when working with agents, they must know what they’re talking about in the first place. They must have expertise and relevant knowledge. Otherwise, how will they spot the hallucinations?

So the question is, how can businesses build such expertise in junior staff members while they are delegating things to an army of agents? This question applies to the next posting below as well. Having agents report to you is all well and good — IF you know when the agents are producing helpful/accurate information and when they got things all wrong.


This Is the Next Vital Job Skill in the AI Economy — from builtin.com by Saurabh Sharma
The future of tech work belongs to AI managers.

Summary: A fundamental shift is making knowledge workers “AI managers.” The most valuable employees will direct intelligent AI agents, which requires new competencies: delegation, quality assurance and workflow orchestration across multiple agents. Companies must bridge the training gap to enable this move from simple software use to strategic collaboration with intelligent, yet imperfect, systems.

The shift is happening subtly, but it’s happening. Workers are learning to prompt agents, navigate AI capabilities, understand failure modes and hand off complex tasks to AI. And if they haven’t started yet, they probably will: A new study from IDC and Salesforce found that 72 percent of CEOs think most employees will have an AI agent reporting to them within five years. This isn’t about using a new kind of software tool — it’s about directing intelligent systems that can reason, search, analyze and create.

Soon, the most valuable employees won’t just know how to use AI; they’ll know how to manage it. And that requires a fundamentally different skill set than anything we’ve taught in the workplace before.


AI agents failed 97% of freelance tasks; here’s why… — from theneurondaily.com by Grant Harvey

AI Agents Can’t Actually Do Your Job (Yet)—New Benchmark Reveals The Gap

DEEP DIVE: AI can make you faster at your job, but can only do 2-3% of jobs by itself.

The hype: AI agents will automate entire workflows! Replace freelancers! Handle complex tasks end-to-end!

The reality: a measly 2-3% completion rate.

See, Scale AI and CAIS just released the Remote Labor Index (paper), a benchmark where AI agents attempted real freelance tasks. The best-performing model earned just $1,810 out of $143,991 in available work, finishing only 2–3% of jobs.



 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 
© 2025 | Daniel Christian