2025: The Year the Frontier Firm Is Born — from Microsoft

We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.

As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.

To help leaders understand how knowledge work will evolve, Microsoft analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor market trends, and trillions of Microsoft 365 productivity signals. We also spoke with AI-native startups, academics, economists, scientists, and thought leaders to explore what work could become. The data and insights point to the emergence of an entirely new organization, a Frontier Firm that looks markedly different from those we know today. Structured around on-demand intelligence and powered by “hybrid” teams of humans + agents, these companies scale rapidly, operate with agility, and generate value faster.

Frontier Firms are already taking shape, and within the next 2–5 years we expect that every organization will be on its journey to becoming one. 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. Adoption is accelerating: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.

The time to act is now. The question for every leader and employee is: how will you adapt?


On a somewhat related note, also see:

Exclusive: Anthropic warns fully AI employees are a year away — from axios.com by Sam Sabin

Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.

Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.

The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.

 

Thomson Reuters Survey: Over 95% of Legal Professionals Expect Gen AI to Become Central to Workflow Within Five Years — from lawnext.com by Bob Ambrogi

Thomson Reuters today released its 2025 Generative AI in Professional Services Report, and it reveals that legal professionals have become increasingly optimistic about generative AI, with adoption rates nearly doubling over the past year and a growing belief that the technology should be incorporated into legal work.

According to the report, 26% of legal organizations are now actively using gen AI, up from 14% in 2024. While only 15% of law firm respondents say gen AI is currently central to their workflow, a striking 78% believe it will become central within the next five years.


AI-Powered Legal Work Redefined: Libra Launches Major Update for Legal Professionals — from lawnext.com by Bob Ambrogi

Berlin, April 14, 2025 – Berlin-based Legal Tech startup Libra is launching its most comprehensive update to date, leveraging AI to relieve law firms and legal departments of routine tasks, accelerate research, and improve team collaboration. “Libra v2” combines highly developed AI, a modern user interface, and practical tools to set a new standard for efficient and precise work in all legal areas.

“We listened intently to feedback from law firms and in-house teams,” said Viktor von Essen, founder of Libra. “The result is Libra v2: an AI solution that intelligently supports every step of daily legal work – from initial research to final contract review. We want legal experts to be able to fully concentrate on what is essential: excellent legal advice.”


The Three Cs of Teaching Technology to Law Students — from lawnext.com by Bob Ambrogi

In law practice today, technology is no longer optional — it’s essential. As practicing attorneys increasingly rely on technology tools to serve clients, conduct research, manage documents and streamline workflows, the question is often debated: Are law schools adequately preparing students for this reality?

Unfortunately, for the majority of law schools, the answer is no. But that raises the question: What should they be doing?

A coincidence of events last week had me thinking about law schools and legal tech, chief among them my attendance at LIT Con, Suffolk Law School’s annual conference to showcase legal innovation and technology — with a portion of it devoted to access-to-justice projects developed by Suffolk Law students themselves.


While not from Bob, I’m also going to include this one here:

Your AI Options: 7 Considerations Before You Buy — from artificiallawyer.com by Liza Pestillos-Ocat

But here’s the problem: not all AI is useful and not all of it is built for the way your legal team works.

Most firms aren’t asking whether they should use AI because they already are. The real question now is what comes next? How do you expand the value of AI across more teams, more matters, and more workflows without introducing unnecessary risk, complexity, or cost?

To get this right, legal professionals need to understand which tools will solve real problems and deliver the most value to their team. That starts with asking better questions, including the ones that follow, before making your next investment in AI for lawyers.

 

Organizing Teams for Continuous Learning: A Complete Guide — from intelligenthq.com

In today’s fast-paced business world, continuous learning has become a vital element for both individual and organizational growth. Teams that foster a culture of learning remain adaptable, innovative, and competitive. However, simply encouraging learning isn’t enough; the way teams are structured and supported plays a huge role in achieving long-term success. In this guide, we’ll explore how to effectively organize teams for continuous learning, leveraging tools, strategies, and best practices.

 

Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; guided by human professors as they sort through the material, it could learn the structure of the discipline and then develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, if a student is going on a trip and wishes to take an exam on the plane, they will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include, as this solid article offered so many more worth thinking about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him, and as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education. Scott asserts that some significant and upcoming impacts will be experienced by faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there. He also brings in presidents, deans, and other members of leadership teams.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned its 5-year, $100M ed push last year, and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would be able to do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

Also see:

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair—because it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.


Addendum on 4/16:

 

Is collaboration the key to digital accessibility? — from timeshighereducation.com by Sal Jarvis and George Rhodes
Digital accessibility is ethically important, and a legal requirement, but it’s also a lot of work. Here’s how universities can collaborate and pool their expertise to make higher education accessible for all

How easy do you find it to navigate your way around your university’s virtual estate – its websites, virtual learning environment and other digital aspects? If the answer is “not very”, we suspect you may not be alone. And for those of us who might access it differently – without a mouse, for example, or through a screen reader or keyboard emulator – the challenge is multiplied. Digital accessibility is the wide-ranging work to make these challenges a thing of the past for everyone. It is a legal requirement and a moral imperative.

Make Things Accessible is the outcome of a collaboration, initially between the University of Westminster and UCL, but now incorporating many other universities. It is a community of practice, a website and an archive of resources. It aims to make things accessible for all.

 

Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan
My favorite tactics for making the most of AI — a podcast conversation

AI tools I consistently rely on (areas covered mentioned below)

  • Research and analysis
  • Communication efficiency
  • Multimedia creation

AI tactics that work surprisingly well 

1. Reverse interviews
Instead of just querying AI, have it interview you: “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”

This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.


OpenAI’s Deep Research Agent Is Coming for White-Collar Work — from wired.com by Will Knight
The research-focused agent shows how a new generation of more capable AI models could automate some office tasks.

Isla Fulford, a researcher at OpenAI, had a hunch that Deep Research would be a hit even before it was released.

Fulford had helped build the artificial intelligence agent, which autonomously explores the web, deciding for itself what links to click, what to read, and what to collate into an in-depth report. OpenAI first made Deep Research available internally; whenever it went down, Fulford says, she was inundated with queries from colleagues eager to have it back. “The number of people who were DMing me made us pretty excited,” says Fulford.

Since going live to the public on February 2, Deep Research has proven to be a hit with many users outside the company too.


Nvidia to open quantum computing research center in Boston — from seekingalpha.com by Ravikash Bakolia

Nvidia (NASDAQ:NVDA) will open a quantum computing research lab in Boston which is expected to start operations later this year.

The Nvidia Accelerated Quantum Research Center, or NVAQC, will integrate leading quantum hardware with AI supercomputers, enabling what is known as accelerated quantum supercomputing, said the company in a March 18 press release.

Nvidia’s CEO Jensen Huang also made this announcement on Thursday at the company’s first-ever Quantum Day at its annual GTC event.


French quantum computer firm Pasqal links up with NVIDIA — from reuters.com

PARIS, March 21 (Reuters) – Pasqal, a fast-growing French quantum computer start-up company, announced on Friday a partnership with chip giant Nvidia (NVDA.O), whereby Pasqal’s customers would gain access to more tools to develop quantum applications.

Pasqal said it would connect its quantum computing units and cloud platform onto NVIDIA’s open-source platform called CUDA-Q.


Introducing next-generation audio models in the API — from openai.com
A new suite of audio models to power voice agents, now available to developers worldwide.

Today, we’re launching new speech-to-text and text-to-speech audio models in the API—making it possible to build more powerful, customizable, and intelligent voice agents that offer real value. Our latest speech-to-text models set a new state-of-the-art benchmark, outperforming existing solutions in accuracy and reliability—especially in challenging scenarios involving accents, noisy environments, and varying speech speeds. These improvements increase transcription reliability, making the models especially well-suited for use cases like customer call centers, meeting note transcription, and more.
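To make the announcement above concrete, here is a minimal sketch of calling the new audio models. It assumes the `openai` Python SDK (v1.x) and the model names from the launch (`gpt-4o-transcribe` for speech-to-text, `gpt-4o-mini-tts` for text-to-speech); the helper functions, the `meeting.wav` file, and the voice name are illustrative assumptions, not part of OpenAI’s documentation.

```python
import os

def transcription_request(path: str) -> dict:
    """Arguments for a speech-to-text call (model name per the launch post)."""
    return {"model": "gpt-4o-transcribe", "file": path}

def speech_request(text: str, voice: str = "alloy") -> dict:
    """Arguments for a text-to-speech call (voice name is an assumption)."""
    return {"model": "gpt-4o-mini-tts", "voice": voice, "input": text}

# Only attempt a live call when credentials are available.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    # Speech-to-text: transcribe a hypothetical local recording.
    with open("meeting.wav", "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="gpt-4o-transcribe", file=audio_file
        )
    print(transcript.text)
```

The guard around the API key keeps the sketch runnable even without an account; in practice the same pattern would feed call-center recordings or meeting audio straight into the transcription endpoint.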


 

AI Can’t Fix Bad Learning — from nafez.substack.com by Nafez Dakkak
Why pedagogy and good learning design still come first, and why faster isn’t always better.

I’ve followed Dr. Philippa Hardman’s work for years, and every time I engage with her work, I find it both refreshing and deeply grounded.

As one of the leading voices in learning design, Philippa has been able to cut through the noise and focus on what truly matters: designing learning experiences that actually work.

In an era where AI promises speed and scale, Philippa is making a different argument: faster isn’t always better. As the creator of Epiphany AI—“Figma for learning designers”—Philippa is focused on closing the gap between what great learning design should look like and what’s actually being delivered.

While many AI tools optimize for the average, she believes the future belongs to those who can leverage AI without compromising on expertise or quality. Philippa wants learning designers to be more ambitious using AI to achieve what wasn’t possible before.

In this conversation, we explore why pedagogy must lead technology, how the return on expertise is only increasing in an AI-driven world, and why building faster doesn’t always mean building better.

An excerpted graphic:




Pearson, AWS Collaborate to Enhance AI-Powered Learning Functionality — from cloudwars.com

Pearson, the global educational publisher, and AWS have expanded their existing partnership to enhance AI-driven learning. AWS will help Pearson to deliver AI-powered lesson generation and more for educators, support workforce skilling initiatives, and continue an ongoing collaboration with Pearson VUE for AWS certification.


 

Introducing NextGenAI: A consortium to advance research and education with AI — from openai.com; via Claire Zau
OpenAI commits $50M in funding and tools to leading institutions.

Today, we’re launching NextGenAI, a first-of-its-kind consortium with 15 leading research institutions dedicated to using AI to accelerate research breakthroughs and transform education.

AI has the power to drive progress in research and education—but only when people have the right tools to harness it. That’s why OpenAI is committing $50M in research grants, compute funding, and API access to support students, educators, and researchers advancing the frontiers of knowledge.

Uniting institutions across the U.S. and abroad, NextGenAI aims to catalyze progress at a rate faster than any one institution would alone. This initiative is built not only to fuel the next generation of discoveries, but also to prepare the next generation to shape AI’s future.


 ‘I want him to be prepared’: why parents are teaching their gen Alpha kids to use AI — from theguardian.com by Aaron Mok; via Claire Zau
As AI grows increasingly prevalent, some are showing their children tools from ChatGPT to Dall-E to learn and bond

“My goal isn’t to make him a generative AI wizard,” White said. “It’s to give him a foundation for using AI to be creative, build, explore perspectives and enrich his learning.”

White is part of a growing number of parents teaching their young children how to use AI chatbots so they are prepared to deploy the tools responsibly as personal assistants for school, work and daily life when they’re older.

 

ABA Tech Survey Finds Growing Adoption of AI in Legal Practice, with Efficiency Gains as Primary Driver — from lawnext.com by Bob Ambrogi

There has been a significant increase in the adoption of artificial intelligence-based tools among law firms, with 30% of respondents now using AI technology compared to just 11% in 2023, according to the just-released 2024 edition of the American Bar Association’s Legal Technology Survey Report.

It finds that time savings and increased efficiency remain the dominant perceived benefits of AI implementation in legal practice.

The report, published Wednesday by the ABA’s Legal Technology Resource Center, is based on a survey that gathered responses from 512 attorneys in private practice across various firm sizes.


Also re: the legal world:

Why The GC-Legal Ops Partnership Is More Critical Than Ever — from abovethelaw.com by Stephanie Corey
The future of in-house legal isn’t about working in silos — it’s about working together.

AI is rapidly changing the way legal work gets done. Legal Ops can help GCs navigate this shift — identifying the right tools, ensuring responsible AI adoption, and optimizing processes so legal teams can focus on high-value work.
 

All of the articles listed below are from edutopia.org


4 Ways to Boost Students’ Self-Efficacy — by Tyler Rablin
These strategies help students see what they have learned so they believe they can be successful in school in the future.

Self-efficacy, on the other hand, focuses on outcomes to drive beliefs. In essence, self-efficacy involves intentionally providing students with evidence of early success to help them build the belief that they can be successful in the future.

Self-efficacy has been a powerful focus for me because it helps me to be more intentional as a teacher. It requires me to be mindful of how I structure assessments, feedback, etc., to provide students with evidence of their successes early on to help them see potential future successes.

Maintaining Students’ Focus in the Spring — by Miriam Plotinsky
Teachers can use these small ‘upgrades’ or tweaks to their regular practices to help keep students focused and involved.

As a recent MindShift article notes, “The second semester brings a lot of potential challenges to teachers’ regularly scheduled programming because of standardized testing, graduation events, and student burnout.” These challenges may be complex, but they are not insurmountable. With subtle shifts in practice, classrooms can remain safe and productive spaces—even in the spring.

She recommends a “coping jar” to help students identify moments of angst and manage their feelings. Fagell explains that when students find an effective coping mechanism, they write their idea on one side of a Popsicle stick and explain how it works on the other side. As students add ideas to the jar, they have an increased awareness of coping strategies and the importance of helping one another.

10 Picture Books That Showcase Collaboration — by Kristin Rydholm
These entertaining stories feature collaboration and social-emotional skills to highlight the benefit of working together to accomplish a goal.

Are you an early childhood teacher in search of relationship-building resources to help unify your classroom? Have I got a book list for you! In the verbiage of 1970s infomercials: “This collection has everything!” The main characters are early childhood students, the setting is the school, and each plot requires the class to work collaboratively. The characters are united in curiosity, determination, and mission to work on accomplishing projects together that they couldn’t possibly do alone.

What a 30-Day Break From AI Taught Me About My Teaching — by James Bedford
Using AI became second nature for this educator. A month without the tools gave him an opportunity to pause, reflect, and recalibrate.

My goal is to try to preserve the messy, creative core of what it means to be an educator: curiosity, critical thinking, and the grit and energy to solve problems without always resorting to shortcuts—technological or otherwise. My hope is that my students will do the same.

Unplug and reconnect: Challenge yourself and your students to embark on an AI detox. Step away from AI tools where possible, and rediscover the power of human creativity and independent thought. Start a journal to document the journey, exploring questions like “How does it feel to rely solely on my own intellect?” or “What challenges arise when AI isn’t there to assist?” This activity can foster resilience, raise self-awareness, and cultivate a deeper appreciation for the human capacity to think, create, problem-solve, and innovate.

 

Drive Continuous Learning: AI Integrates Work & Training — from learningguild.com by George Hanshaw

Imagine with me for a moment: Training is no longer confined to scheduled sessions in a classroom, an online module or even a microlearning you click to activate during your workflow. Imagine training being delivered because the system senses what you are doing and provides instructions and job aids without you having to take an action.

The rapid evolution of artificial intelligence (AI) and wearable technology has made it easier than ever to seamlessly integrate learning directly into the workflow. Smart glasses, earpieces, and other advanced devices are redefining how employees gain knowledge and skills by delivering microlearning moments precisely when and where they are needed.

AI plays a crucial role in this transformation by sensing the optimal moment to deliver the training through augmented reality (AR).



These Schools Are Banding Together to Make Better Use of AI in Education — from edsurge.com by Emily Tate Sullivan

Kennelly and Geraffo are part of a small team at their school in Denver, DSST: College View High School, that is participating in the School Teams AI Collaborative, a year-long pilot initiative in which more than 80 educators from 19 traditional public and charter schools across the country are experimenting with and evaluating AI-enabled instruction to improve teaching and learning.

The goal is for some of AI’s earliest adopters in education to band together, share ideas and eventually help lead the way on what they and their colleagues around the U.S. could do with the emerging technology.

“Pretty early on we thought it was going to be a massive failure,” says Kennelly of last semester’s project. “But it became a huge hit. Students loved it. They were like, ‘I ran to second period to build this thing.’”



Transactional vs. Conversational Visions of Generative AI in Teaching — from elmartinsen.substack.com by Eric Lars Martinsen
AI as a Printer, or AI as a Thought Partner

As writing instructors, we have a choice in how we frame AI for our students. I invite you to:

  1. Experiment with AI as a conversation partner yourself before introducing it to students
  2. Design assignments that leverage AI’s strengths as a thought partner rather than trying to “AI-proof” your existing assignments
  3. Explicitly teach students how to engage in productive dialogue with AI—how to ask good questions, challenge AI’s assumptions, and use it to refine rather than replace their thinking
  4. Share your experiences, both positive and negative, with colleagues to build our collective understanding of effective AI integration

 

2025 Survey of College and University Presidents
Learn about presidents’ takes on topics such as financial confidence, the 2024 election’s impact on higher ed & more.

Inside Higher Ed’s 2025 Survey of College and University Presidents was conducted by Hanover Research. The survey asked presidents from 298 public and private, largely nonprofit two- and four-year institutions timely questions on the following issues:

  • General financial and economic confidence, plus mergers and acquisitions
  • Politics, policy and the 2024 election’s impact on higher education
  • Public perceptions of higher ed and the value of a degree
  • Campus speech
  • Race on campus
  • Artificial intelligence
  • Environmental sustainability goals
  • Campus health and wellness, including student mental health
  • Management, governance and the hardest part about being a president
 

You can now use Deep Research without $200 — from flexos.work


Accelerating scientific breakthroughs with an AI co-scientist — from research.google by Juraj Gottweis and Vivek Natarajan

We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.


Now decides next: Generating a new future — from Deloitte.com
Deloitte’s State of Generative AI in the Enterprise Quarter four report

There is a speed limit. GenAI technology continues to advance at incredible speed. However, most organizations are moving at the speed of organizations, not at the speed of technology. No matter how quickly the technology advances—or how hard the companies producing GenAI technology push—organizational change in an enterprise can only happen so fast.

Barriers are evolving. Significant barriers to scaling and value creation are still widespread across key areas. And, over the past year, regulatory uncertainty and risk management have risen on organizations’ lists of concerns to address. Also, levels of trust in GenAI are still moderate for the majority of organizations. Even so, with increased customization and accuracy of models—combined with a focus on better governance—adoption of GenAI is becoming more established.

Some uses are outpacing others. Application of GenAI is further along in some business areas than in others in terms of integration, return on investment (ROI) and expectations. The IT function is most mature; cybersecurity, operations, marketing and customer service are also showing strong adoption and results. Organizations reporting higher ROI for their most scaled initiatives are broadly further along in their GenAI journeys.

 

5 Legal Tech Trends Set to Impact Law Firms in 2025 — from programminginsider.com by Marc Berman

The legal industry is experiencing swift changes, with technology becoming an ever more crucial factor in its evolution. As law firms respond to shifting client demands and regulatory changes, the pace of change is accelerating. Embracing legal tech is no longer just an advantage; it’s a necessity.

According to a Forbes report, 66% of legal leaders acknowledge this trend and intend to boost their investments in legal tech moving forward. From artificial intelligence streamlining workflows to cloud computing enabling globalized legal services, the legal landscape is undergoing a digital revolution.

In this article, we’ll explore five key legal tech trends that will define how law firms operate in 2025.


GenAI, Legal Ops, and The Future of Law Firms: A Wake-Up Call? — from techlawcrossroads.com by Stephen Embry

A new study from the Blickstein Group reveals some disturbing trends for law firms that represent businesses, particularly large ones. The Study is entitled Legal Service Delivery in the Age of AI and was done jointly by FTI Technology, a consulting group, and Blickstein. It looks at law department legal operations.

The Findings

GenAI Use by Legal Ops Personnel

The responses reflect a bullish view of what GenAI can do in the legal marketplace but also demonstrate that GenAI still has a ways to go:

  • Almost 80% of the respondents think that GenAI will become an “essential part of the legal profession.”
  • 81% believe GenAI will drive improved efficiencies.
  • Despite this belief, only about 30% have plans to purchase GenAI tools. For 81%, the primary reason for obtaining and using GenAI tools is the efficiencies these tools bring.
  • 52% say their GenAI strategy is either not as sophisticated as they would like or nonexistent.

The biggest barriers to the use of GenAI among legal ops professionals are cost, security concerns, and a lack of skilled personnel.


Voting Is Closed, Results Are In: Here are the 15 Legal Tech Startups Selected for the 2025 Startup Alley at ABA TECHSHOW — from lawnext.com by Bob Ambrogi

Voting has now closed and your votes have been tallied to pick the 15 legal tech startups that will participate as finalists in the ninth annual Startup Alley at ABA TECHSHOW 2025, taking place April 2–5 in Chicago.

These 15 finalists will face off in a pitch competition on TECHSHOW’s opening night, with the conference’s attendees voting at the conclusion of the pitches to pick the top winners.


Balancing innovation and ethics: Applying generative AI in legal work — from legal.thomsonreuters.com

Generative artificial intelligence (GenAI) has brought a new wave of opportunities to the legal profession, opening doors to greater efficiency and innovation. Its rapid development has also raised questions about its integration within the legal industry. As legal professionals are presented with more options for adopting new technologies, they now face the important task of understanding how GenAI can be seamlessly — and ethically — incorporated into their daily operations.


Emerging Trends in Court Reporting for 2025: Legal Technology and Advantages for Law Firms — from jdsupra.com

The court reporting industry is evolving rapidly, propelled by technological advancements and the increasing demand for efficiency in the legal sector. For 2025, trends such as artificial intelligence (AI), real-time transcription technologies, and data-driven tools are reshaping how legal professionals work. Here’s an overview of these emerging trends and five reasons law firms should embrace these advancements.


 
© 2025 | Daniel Christian