Multiple Countries Just Issued Travel Warnings for the U.S. — from mensjournal.com by Rachel Dillin
In a rare reversal, several of America’s closest allies are now warning their citizens about traveling to the U.S., and it could impact your next trip.

For years, the U.S. has issued cautionary travel advisories to citizens heading overseas. But in a surprising twist, the roles have flipped. Several countries, including longtime allies like Australia, Canada, and the U.K., are now warning their citizens about traveling to the United States, according to Yahoo.

Australia updated its advisory in June, flagging gun violence, civil protests, and unpredictable immigration enforcement. While its guidance remains at Level 1 (“exercise normal safety precautions”), Australian officials urged travelers to stay alert in crowded places like malls, transit hubs, and public venues. They also warned about the Visa Waiver Program, noting that U.S. authorities can deny entry without explanation.

From DSC:
I’ve not heard of a travel warning against the U.S. in my lifetime. Thanks Trump. Making America Great Again. Sure thing….

 

 

Get yourself unstuck: overthinking is boring and perfectionism is a trap — from timeshighereducation.com by David Thompson
The work looks flawless, the student seems fine. But underneath, perfectionism is doing damage. David Thompson unpacks what educators can do to help high-performing students navigate the pressure to succeed and move from stuck to started

That’s why I encourage imperfection, messiness and play and build these ideas into how I teach.

These moments don’t come from big breakthroughs. They come from removing pressure and replacing it with permission.

 

The EU’s Legal Tech Tipping Point – AI Regulation, Data Sovereignty, and eDiscovery in 2025 — from jdsupra.com by Melina Efstathiou

The Good, the Braver and the Curious.
As we navigate through 2025, the European legal landscape is undergoing a significant transformation, particularly in the realms of artificial intelligence (AI) regulation and data sovereignty. These changes are reshaping how legal departments and more specifically eDiscovery professionals operate, compelling them to adapt to new compliance requirements and technological advancements.

Following on from our blog post on Navigating eDisclosure in the UK and Practice Direction 57AD, we now explore AI regulation across the wider European landscape, with a contrasting glance at the UK and the US at the close of this post.


LegalTech’s Lingering Hurdles: How AI is Finally Unlocking Efficiency in the Legal Sector — from techbullion.co by Abdul Basit

However, as we stand in mid-2025, a new paradigm is emerging. Artificial Intelligence, once a buzzword, is now demonstrably addressing many of the core issues that have historically plagued LegalTech adoption and effectiveness, ushering in an era of unprecedented efficiency. Legal tech specialists like LegalEase are leading the way with some of these newer solutions, such as AI-powered NDA drafting.

Here’s how AI is driving profound efficiency gains:

    • Automated Document Review and Analysis:
    • Intelligent Contract Lifecycle Management (CLM):
    • Enhanced Legal Research:
    • Predictive Analytics for Litigation and Risk:
    • Streamlined Practice Management and Workflow Automation:
    • Personalized Legal Education and Training:
    • Improved Client Experience:

The AI Strategy Potluck: Law Firms Showing Up Empty-Handed, Hungry, And Weirdly Proud Of It — from abovethelaw.com by Joe Patrice
There’s a $32 billion buffet of time and money on the table, and the legal industry brought napkins.

The Thomson Reuters “Future of Professionals” report just dropped, and one stat stands out among its insights: organizations with a visible AI strategy are not only twice as likely to report growth, they’re also 3.5 times more likely to see actual, tangible benefits from AI adoption.

AI Adoption Strategies


Speaking of legal-related items as well as tech, also see:

  • Landmark AI ruling is a blow to authors and artists — from popular.info by Judd Legum
    This week, a federal judge, William Alsup, rejected Anthropic’s effort to dismiss the case and found that stealing books from the internet is likely a copyright violation. A trial will be scheduled in the future. If Anthropic loses, each violation could come with a fine of $750 or more, potentially exposing the company to billions in damages. Other AI companies that use stolen work to train their models — and most do — could also face significant liability.
 

AI will kill billable hour, says lawtech founder — from lawgazette.co.uk by John Hyde

A pioneer in legal technology has predicted the billable hour model cannot survive the transition into the use of artificial intelligence.

Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.

Newton said the adoption of AI should be welcomed and embraced by the legal profession but that lawyers will need an entrepreneurial mindset to make the most of its benefits.

Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.

‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’


Speaking of legaltech-related items, also see:

Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies — from lawnext.com by Bob Ambrogi

The generative AI legal startup Harvey has entered into a strategic alliance with LexisNexis Legal & Professional by which it will integrate LexisNexis’ gen AI technology, primary law content, and Shepard’s Citations within the Harvey platform and jointly develop advanced legal workflows.

As a result of the partnership, Harvey’s customers working within its platform will be able to ask questions of LexisNexis Protégé, the AI legal assistant released in January, and receive AI-generated answers grounded in the LexisNexis collection of U.S. case law and statutes and validated through Shepard’s Citations, the companies said.

 
 


A Rippling Townhouse Facade by Alex Chinneck Takes a Seat in a London Square — from thisiscolossal.com by Alex Chinneck and Kate Mothes

 

Cultivating a responsible innovation mindset among future tech leaders — from timeshighereducation.com by Andreas Alexiou
The classroom is a perfect place to discuss the messy, real-world consequences of technological discoveries, writes Andreas Alexiou. Beyond ‘How?’, students should be asking ‘Should we…?’ and ‘What if…?’ questions around ethics and responsibility

University educators play a crucial role in guiding students to think about the next big invention and its implications for privacy, the environment and social equity. To truly make a difference, we need to bring ethics and responsibility into the classroom in a way that resonates with students. Here’s how.

Debating with industry pioneers on incorporating ethical frameworks in innovation, product development or technology adoption is eye-opening because it can lead to students confronting assumptions they hadn’t questioned before. For example, students could discuss the roll-out of emotion-recognition software. Many assume it’s neutral, but guest speakers from industry can highlight how cultural and racial biases are baked into design decisions.

Leveraging alumni networks and starting with short virtual Q&A sessions instead of full lectures can work well.

Students need more than just skills; they need a mindset that sticks with them long after graduation. By making ethics and responsibility a key part of the learning process, educators are doing more than preparing students for a career; they’re preparing them to navigate a world shaped by their choices.


Are we overlooking the power of autonomy when it comes to motivating students? — from timeshighereducation.com by Danny Oppenheimer
Educators fear that giving students too much choice in their learning will see them make the wrong decisions. But structuring choice without dictating the answers could be the way forward

So, how can we get students to make good decisions while still allowing them agency to make their own choices, maintaining the associated motivational advantages that agency provides? One possibility is to use choice architecture, more commonly called “nudges”: structuring choices in ways that scaffold better decisions without dictating them.

Higher education rightly emphasises the importance of belonging and mastery, but when it ignores autonomy – the third leg of the motivational tripod – the system wobbles. When we allow students to decide for themselves how they’ll engage with their coursework, they consistently rise to the occasion. They choose to challenge themselves, perform better academically and enjoy their education more.

 

AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice
Minnesota Legal Studies Research Paper No. 25-16; March 02, 2025; from papers.ssrn.com by:

Daniel Schwarcz
University of Minnesota Law School

Sam Manning
Centre for the Governance of AI

Patrick Barry
University of Michigan Law School

David R. Cleveland
University of Minnesota Law School

J.J. Prescott
University of Michigan Law School

Beverly Rich
Ogletree Deakins

Abstract

Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
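For readers newer to the terminology, the short sketch below illustrates the core RAG pattern the paper evaluates: retrieve the most relevant source passages first, then require the model to answer from them. It is a minimal sketch with an assumed toy corpus and a TF-IDF retriever, not the implementation behind Vincent AI or any other product named in the study.

```python
# A minimal RAG sketch: retrieve the passages most relevant to a query,
# then ground the model's answer in them. The toy corpus, TF-IDF retriever,
# and prompt format are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # hypothetical stand-in for an indexed case-law/statute collection
    "A valid contract requires offer, acceptance, and consideration.",
    "Copyright protects original works fixed in a tangible medium.",
    "Negligence requires duty, breach, causation, and damages.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)
    return [corpus[i] for i in ranked[:k]]

def grounded_prompt(query: str) -> str:
    """Build the prompt a RAG system would send to its language model:
    retrieved sources are prepended so the answer is grounded in them
    rather than in the model's parametric memory."""
    sources = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

print(grounded_prompt("What are the elements of negligence?"))
```

That grounding step is plausibly why the study's RAG condition hallucinated no more than the no-AI group: the model is pushed to answer from retrieved, citable text instead of from memory.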


Guest post: How technological innovation can boost growth — from legaltechnology.com by Caroline Hill

One key change is the growing adoption of technology within legal service providers, and this is transforming the way firms operate and deliver value to clients.

The legal services sector’s digital transformation is gaining momentum, driven both by client expectations as well as the potential for operational efficiency. With the right support, legal firms can innovate through tech adoption and remain competitive to deliver strong client outcomes and long-term growth.


AI Can Do Many Tasks for Lawyers – But Be Careful — from nysba.org by Rebecca Melnitsky

Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.

The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.

For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.


Fresh Voices on Legal Tech with Nikki Shaver — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Nikki Shaver

Determining which legal technology is best for your law firm can seem like a daunting task, so Legaltech Hub does the hard work for you! In another edition of Fresh Voices, Dennis and Tom talk with Nikki Shaver, CEO at Legaltech Hub, about her in-depth knowledge of technology and AI trends. Nikki shares what effective tech strategies should look like for attorneys and recommends innovative tools for maintaining best practices in modern law firms. Learn more at legaltechnologyhub.com.


AI for in-house legal: 2025 predictions — from deloitte.com
Our expectations for AI engagement and adoption in the legal market over the coming year.

AI will continue to transform in-house legal departments in 2025
As we enter 2025, over two-thirds of organisations plan to increase their Generative AI (GenAI) investments, providing legal teams with significant executive support and resources to further develop these capabilities. This presents a substantial opportunity for legal departments, particularly as GenAI technology continues to advance at an impressive pace. We make five predictions for AI engagement and adoption in the legal market over the coming year and beyond.


Navigating The Fine Line: Redefining Legal Advice In The Age Of Tech With Erin Levine And Quinten Steenhuis — from abovethelaw.com by Olga V. Mack
The definition of ‘practicing law’ is outdated and increasingly irrelevant in a tech-driven world. Should the line between legal advice and legal information even exist?

Practical Takeaways for Legal Leaders

  • Use Aggregated Data: Providing consumers with benchmarks (e.g., “90% of users in your position accepted similar settlements”) empowers them without giving direct legal advice.
  • Train and Supervise AI Tools: AI works best when it’s trained on reliable, localized data and supervised by legal professionals.
  • Partner with Courts: As Quinten pointed out, tools built in collaboration with courts often avoid UPL pitfalls. They’re also more likely to gain the trust of both regulators and consumers.
  • Embrace Transparency: Clear disclaimers like “This is not legal advice” go a long way in building consumer trust and meeting ethical standards.

 

 

Google I/O 2025: From research to reality — from blog.google
Here’s how we’re making AI more helpful with Gemini.


Google I/O 2025 LIVE — all the details about Android XR smart glasses, AI Mode, Veo 3, Gemini, Google Beam and more — from tomsguide.com by Philip Michaels
Google’s annual conference goes all in on AI

With a running time of 2 hours, Google I/O 2025 leaned heavily into Gemini and new models that make the assistant work in more places than ever before. Despite focusing the majority of the keynote on Gemini, Google saved its most ambitious and anticipated announcement for the end: its big Android XR smart glasses reveal.

Shockingly, very little time was spent on Android 16. Most of the Android 16-related news, like the redesigned Material 3 Expressive interface, was announced during the Android Show livestream last week — which explains why Google I/O 2025 was such an AI-heavy showcase.

That’s because Google carved out most of the keynote to dive deeper into Gemini, its new models, and integrations with other Google services. There’s clearly a lot to unpack, so here’s all the biggest Google I/O 2025 announcements.


Our vision for building a universal AI assistant — from blog.google
We’re extending Gemini to become a world model that can make plans and imagine new experiences by simulating aspects of the world.

Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that’s intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device.

By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action.


Learn in newer, deeper ways with Gemini — from blog.google.com by Ben Gomes
We’re infusing LearnLM directly into Gemini 2.5 — plus more learning news from I/O.

At I/O 2025, we announced that we’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles. Educators and pedagogy experts preferred Gemini 2.5 Pro over other offerings across a range of learning scenarios, both for supporting a user’s learning goals and on key principles of good pedagogy.


Gemini gets more personal, proactive and powerful — from blog.google.com by Josh Woodward
It’s your turn to create, learn and explore with an AI assistant that’s starting to understand your world and anticipate your needs.

Here’s what we announced at Google I/O:

  • Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.
  • Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.
  • Veo 3, our new, state-of-the-art video generation model, comes built in and is the first in the world to have native support for sound effects, background noises and dialogue between characters.
  • Deep Research and Canvas are getting their biggest updates yet, unlocking new ways to analyze information, create podcasts and vibe code websites and apps.
  • Gemini is coming to Chrome, so you can ask questions while browsing the web.
  • Students around the world can easily make interactive quizzes, and college students in the U.S., Brazil, Indonesia, Japan and the UK are eligible for a free school year of the Google AI Pro plan.
  • Google AI Ultra, a new premium plan, is for the pioneers who want the highest rate limits and early access to new features in the Gemini app.
  • 2.5 Flash has become our new default model, and it blends incredible quality with lightning-fast response times.

Fuel your creativity with new generative media models and tools — by Eli Collins
Introducing Veo 3 and Imagen 4, and a new tool for filmmaking called Flow.


AI in Search: Going beyond information to intelligence
We’re introducing new AI features to make it easier to ask any question in Search.

AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving an increase of more than 10% in usage of Google for the types of queries that show AI Overviews.

This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.

In this story:

  • AI Mode in Search
  • Deep Search
  • Live capabilities
  • Agentic capabilities
  • Shopping
  • Personal context
  • Custom charts

 

 

‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg
Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator

Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years honing their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.

This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.


In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.


Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman
A special edition practical guide to selecting & building AI agents for instructional design and L&D

Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”

In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.

Here’s a sneak peek….
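In the meantime, here is a minimal sketch of what a first, single-task agent can look like: it drafts learning objectives and then checks its own draft against a simple quality checklist before a human reviews it. The OpenAI-style client, the model name, and the checklist are illustrative assumptions on my part; Hardman’s post covers the actual selection and build process.

```python
# A minimal sketch of a single-task instructional-design agent, assuming
# an OpenAI-style API; the model name "gpt-4o" and the two-pass critique
# loop are illustrative choices, not the approach from the linked guide.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to the chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

topic = "onboarding new support staff to a ticketing system"

# Pass 1: delegate the routine drafting task to the agent.
draft = ask(f"Draft five measurable learning objectives for: {topic}")

# Pass 2: have the agent test its own output before a human reviews it.
critique = ask(
    "Check these objectives against this checklist: observable verb, "
    "measurable criterion, audience named. Flag any that fail.\n\n" + draft
)

print(draft, "\n---\n", critique)
```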


AI Personality Matters: Why Claude Doesn’t Give Unsolicited Advice (And Why You Should Care) — from mikekentz.substack.com by Mike Kentz
First in a four-part series exploring the subtle yet profound differences between AI systems and their impact on human cognition

After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.

Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself: “Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.

The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.


Also re: metacognition and AI, see:

In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.


EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine
A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.

Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”

Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.


Partly Cloudy with a Chance of Chatbots — from derekbruff.org by Derek Bruff

Here are the predictions, our votes, and some commentary:

  • “By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
  • “Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to be ready to use them well (or to avoid them) when they become commonly available.
  • “Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks have historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
  • “AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
  • “AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not there quite yet.

‘We Have to Really Rethink the Purpose of Education’
The Ezra Klein Show

Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?

And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?

Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.


 

From AI avatars to virtual reality crime scenes, courts are grappling with AI in the justice system — from whec.com by Rio Yamat
The family of Christopher Pelkey, a man who died in a road rage shooting, played a video showing a likeness of him generated with AI.

Defense attorney Jason Lamm won’t be handling the appeal, but said a higher court will likely be asked to weigh in on whether the judge improperly relied on the AI-generated video when sentencing his client.

Courts across the country have been grappling with how to best handle the increasing presence of artificial intelligence in the courtroom. Even before Pelkey’s family used AI to give him a voice for the victim impact portion — believed to be a first in U.S. courts — the Arizona Supreme Court created a committee that researches best AI practices.

In Florida, a judge recently donned a virtual reality headset meant to show the point of view of a defendant who said he was acting in self-defense when he waved a loaded gun at wedding guests. The judge rejected his claim.

Experts say using AI in courtrooms raises legal and ethical concerns, especially if it’s used effectively to sway a judge or jury. And they argue it could have a disproportionate impact on marginalized communities facing prosecution.

AI can be very persuasive, Harris said, and scholars are studying the intersection of the technology and manipulation tactics.


Poll: 1 in 3 would let an AI lawyer represent them — from robinai.com

April 29, 2025: A major new survey from legal intelligence platform Robin AI has revealed a severe lack of trust in the legal industry. Just 1 in 10 people across the US and UK said they fully trust law firms, and while respondents are increasingly open to AI-powered legal services, few are ready to let technology take over without human oversight.

Perspectus Global polled a representative sample of 4,152 people across both markets. An overwhelming majority see Big Law as “expensive”, “elitist” or “intimidating” but only 30% of respondents would allow a robot lawyer — that is, an AI system acting alone — to represent them in a legal matter. On average, respondents said they would need a 57% discount to choose an AI lawyer over a human.




Harvey Made Legal Tech Cool Enough for Silicon Valley to Care Again — from businessinsider.com by Melia Russell

In just three years, the company, which builds software for analyzing and drafting documents using legally tuned large language models, has drawn blue-chip law firms, Silicon Valley investors, and a stampede of rivals hoping to catch its momentum. Harvey has raised over half a billion dollars in capital, sending its valuation soaring to $3 billion.

 

GPT, Claude, Gemini, Grok… Wait, Which One Do I Use Again? — from thebrainyacts.beehiiv.com by Josh Kubicki
Brainyacts #263

So this edition is simple: a quick, practical guide to the major generative AI models available in 2025 so far. What they’re good at, what to use them for, and where they might fit into your legal work—from document summarization to client communication to research support.

From DSC:
This comprehensive, highly informative post lists what each model is, its strengths, the best legal use cases for it, and tips for responsible use.


What’s Happening in LegalTech Other than AI? — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell

Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more.

Later, the guys answer a listener’s question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated.


LawDroid Founder Tom Martin on Building, Teaching and Advising About AI for Legal — from lawnext.com by Bob Ambrogi and Tom Martin

If you follow legal tech at all, you would be justified in suspecting that Tom Martin has figured out how to use artificial intelligence to clone himself.

While running LawDroid, his legal tech company, the Vancouver-based Martin also still manages a law practice in California, oversees an annual legal tech awards program, teaches a law school course on generative AI, runs an annual AI conference, hosts a podcast, and recently launched a legal tech consultancy.

In January 2023, less than two months after ChatGPT first launched, Martin’s company was one of the first to launch a gen AI assistant specifically for lawyers, called LawDroid Copilot. He has since also launched LawDroid Builder, a no-code platform for creating custom AI agents.


Legal training in the age of AI: A leadership imperative — from thomsonreuters.com by The Hon. Maritza Dominguez Braswell  U.S. Magistrate Judge / District of Colorado

In a profession that’s actively contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart


Unlocking the power of AI: Opportunities and use cases for law firms — from todaysconveyancer.co.uk

Artificial intelligence (AI) is here. And it’s already reshaping the way law firms operate. Whether automating repetitive tasks, improving risk management, or boosting efficiency, AI presents a genuine opportunity for forward-thinking legal practices. But with new opportunities come new responsibilities. And as firms explore AI tools, it’s essential they consider how to govern them safely and ethically. That’s where an AI policy becomes indispensable.

So, what can AI actually do for your firm right now? Let’s take a closer look.

 

Values in the wild: Discovering and analyzing values in real-world language model interactions — from anthropic.com

In the latest research paper from Anthropic’s Societal Impacts team, we describe a practical way we’ve developed to observe Claude’s values—and provide the first large-scale results on how Claude expresses those values during real-world conversations. We also provide an open dataset for researchers to run further analysis of the values and how often they arise in conversations.

Per The Rundown AI:

Why it matters: AI is increasingly shaping real-world decisions and relationships, making an understanding of its actual values more crucial than ever. This study also moves the alignment discussion toward more concrete observations, revealing that an AI’s morals and values may be contextual and situational rather than static.
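For researchers who want to dig into that open dataset, a first pass could look like the sketch below. The Hugging Face dataset ID and the column name are my assumptions for illustration; check Anthropic’s paper page for the actual release location and schema.

```python
# A first pass at exploring the open values dataset, assuming it is
# published on Hugging Face; the dataset ID and the "value" column are
# guesses for illustration, so inspect the real schema first.
from collections import Counter
from datasets import load_dataset

splits = load_dataset("Anthropic/values-in-the-wild")  # hypothetical ID
data = next(iter(splits.values()))  # take whichever split exists

print(data.column_names)  # discover the actual schema before analysis

if "value" in data.column_names:
    # Tally the most frequently expressed values across conversations.
    for value, n in Counter(data["value"]).most_common(10):
        print(f"{n:6d}  {value}")
```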

Also from Anthropic, see:

Anthropic Education Report: How University Students Use Claude


Adobe Firefly: The next evolution of creative AI is here — from blog.adobe.com

In just under two years, Adobe Firefly has revolutionized the creative industry and generated more than 22 billion assets worldwide. Today at Adobe MAX London, we’re unveiling the latest release of Firefly, which unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities.

The new Firefly features enhanced models, improved ideation capabilities, expanded creative options, and unprecedented control. This update builds on earlier momentum when we introduced the Firefly web app and expanded into video and audio with Generate Video, Translate Video, and Translate Audio features.

Per The Rundown AI (here):

Why it matters: OpenAI’s recent image generator and other rivals have shaken up creative workflows, but Adobe’s IP-safe focus and the addition of competing models into Firefly allow professionals to remain in their established suite of tools — keeping users in the ecosystem while still having flexibility for other model strengths.

 
© 2025 | Daniel Christian