8 AI Chatbots for Teachers: A Simple Guide and Quick Tips – Class Tech Tips
— from classtechtips.com by Dr. Monica Burns
In A Mega Deal, Clio Buys vLex for $1 Billion, Merging AI, Research and Practice Management — from lawnext.com by Bob Ambrogi
In a landmark deal that will undoubtedly reshape the legal tech landscape, law practice management company Clio has signed a definitive agreement to acquire the AI and legal research company vLex for $1 billion in cash and stock.
The companies say that the acquisition will “establish a new category of intelligent legal technology at the intersection of the business and practice of law, empowering legal professionals to seamlessly manage, research, and execute legal work within a unified system.”
Transform Public Speaking with Yoodli: Your AI Coach — from rdene915.com by Paula Johnson
Yoodli is an AI tool designed to help users improve their public speaking skills. It analyzes your speech in real-time or after a recording and gives you feedback on things like:
- Filler words (“um,” “like,” “you know”)
- Pacing (Are you sprinting or sedating your audience?)
- Word choice and sentence complexity
- Eye contact and body language (with video)
- And yes, even your “uhhh” to actual word ratio
Yoodli gives you a transcript and a confidence score, plus suggestions that range from helpful to brutally honest. It’s basically Simon Cowell with AI ethics and a smiley face interface.
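To make the filler-word feature concrete, here is a toy sketch of the kind of analysis a tool like Yoodli reports. This is illustrative only — the real product uses speech recognition and trained models, while this just counts filler phrases in a plain-text transcript; the function name and filler list are invented for the example.

```python
import re

# Toy filler-word analysis: count filler phrases in a transcript and
# compute a filler-to-word ratio, roughly the "uhhh to actual word ratio"
# mentioned above. Purely illustrative; not Yoodli's actual method.
FILLERS = ["um", "uh", "like", "you know", "so", "actually"]

def filler_report(transcript: str) -> dict:
    words = re.findall(r"[a-z']+", transcript.lower())
    text = " ".join(words)
    counts = {f: len(re.findall(rf"\b{re.escape(f)}\b", text)) for f in FILLERS}
    filler_total = sum(counts.values())
    return {
        "total_words": len(words),
        "filler_counts": counts,
        "filler_ratio": filler_total / len(words) if words else 0.0,
    }

report = filler_report("So, um, I think, like, the results are, you know, promising.")
print(report["filler_ratio"])  # roughly 0.36 for this sample
```

Even a crude ratio like this makes it obvious why automated feedback can feel "brutally honest."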
[What’s] going on with AI and education? — from theneuron.ai by Grant Harvey
With students and teachers alike using AI, schools are facing an “assessment crisis” where the line between tool and cheating has blurred, forcing a shift away from a broken knowledge economy toward a new focus on building human judgment through strategic struggle.
What to do about it: The future belongs to the “judgment economy,” where knowledge is commoditized but taste, agency, and learning velocity become the new human moats. Use the “Struggle-First” principle: wrestle with problems for 20-30 minutes before turning to AI, then use AI as a sparring partner (not a ghostwriter) to deepen understanding. The goal isn’t to avoid AI, but to strategically choose when to embrace “desirable difficulties” that build genuine expertise versus when to leverage AI for efficiency.
…
The Alpha-School Program in brief:
- Students complete core academics in just 2 hours using AI tutors, freeing up 4+ hours for life skills, passion projects, and real-world experiences.
- The school claims students learn at least 2x faster than their peers in traditional school.
- The top 20% of students show 6.5x growth. Classes score in the top 1-2% nationally across the board.
- Claims are based on NWEA’s Measures of Academic Progress (MAP) assessments… with data only available to the school. Hmm…
Austen Allred shared a story about the school, which put it on our radar.
Featured Report: Teaching for Tomorrow: Unlocking Six Weeks a Year With AI — from gallup.com
In the latest installment of Gallup and the Walton Family Foundation’s research on education, K-12 teachers reveal how AI tools are transforming their workloads, instructional quality and classroom optimism. The report finds that 60% of teachers used an AI tool during the 2024–25 school year. Weekly AI users report reclaiming nearly six hours per week — equivalent to six weeks per year — which they reinvest in more personalized instruction, deeper student feedback and better parent communication.
Despite this emerging “AI dividend,” adoption is uneven: 40% of teachers aren’t using AI at all, and only 19% report their school has a formal AI policy. Teachers with access to policies and support save significantly more time.
Educators also say AI improves their work. Most report higher-quality lesson plans, assessments and student feedback. And teachers who regularly use AI are more optimistic about its benefits for student engagement and accessibility — mirroring themes from the Voices of Gen Z: How American Youth View and Use Artificial Intelligence report, which found students hesitant but curious about AI’s classroom role. As AI tools grow more embedded in education, both teachers and students will need the training and support to use them effectively.
Also see:
- 2-Hour Learning
- What if children could crush academics in 2 hours, 2x faster?
- What if children could get back their most valuable resource, which is time?
- What if children could pursue the things they want during their afternoons and develop life skills?
Amira Learning: Teaching With The AI-Powered Reading Tool — from techlearning.com by Erik Ofgang
Amira Learning is a research-backed AI reading tutor that incorporates the science of reading into its features.
What Is Amira Learning?
Amira Learning’s system is built upon research led by Jack Mostow, a professor at Carnegie Mellon who helped pioneer AI literacy education. Amira uses Claude AI to power its AI features, but these features differ from those of many other AI tools on the market. Instead of focusing on chat and generative response, Amira’s key feature is its advanced speech recognition and natural language processing capabilities, which allow the app to “hear” when a student is struggling and tailor suggestions to that student’s particular mistakes.
Though it’s not meant to replace a teacher, Amira provides real-time feedback and also helps teachers pinpoint where a student is struggling. For these reasons, Amira Learning is a favorite of education scientists and advocates for science of reading-based literacy instruction. The tool currently is used by more than 4 million students worldwide and across the U.S.
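The core idea — aligning what a child actually said against the target passage to flag likely struggle points — can be sketched in a few lines. Amira’s real pipeline is far more sophisticated; the function name and examples here are invented for illustration.

```python
import difflib

# Hypothetical sketch of a reading tutor's miscue detection: align the
# words heard by speech recognition against the target passage and return
# the target words that were misread or skipped. Not Amira's actual code.
def flag_miscues(target: str, heard: str) -> list:
    target_words = target.lower().split()
    heard_words = heard.lower().split()
    matcher = difflib.SequenceMatcher(a=target_words, b=heard_words)
    miscues = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            miscues.extend(target_words[i1:i2])  # misread or skipped words
    return miscues

print(flag_miscues("the quick brown fox jumps", "the quick brwn fox"))
# flags "brown" (misread) and "jumps" (skipped)
```

Feedback like this is what lets the tool “hear” a struggling reader and point a teacher to the exact words causing trouble.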
2025 Learning System Top Picks — from elearninfo247.com by Craig Weiss
Who is leading the pack? Who is setting themselves apart here in the mid-year?
Are they an LMS? LMS/LXP? Talent Development System? Mentoring? Learning Platform?
Something else?
Are they solely customer training/education, mentoring, or coaching? Are they focused only on employees? Are they an amalgamation of all or some?
Well, they cut across the board – hence, they slide under the “Learning Systems” umbrella, which is under the bigger umbrella term – “Learning Technology.”
…
Categories: L&D-specific, Combo (L&D and Training, think internal/external audiences), and Customer Training/Education (this means customer education, which some vendors use to mean the same as customer training).
Thoughts on thinking — from dcurt.is by Dustin Curtis
Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.
The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself.
Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them
Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.
First, the easy stuff.
Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.
Also see:
Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others
This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.
One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.
It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.
…
The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:
What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures
On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.
Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.
Of course, that would be a mistake.
We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.
Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng
By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.
This AI Model Never Stops Learning — from link.wired.com by Will Knight
Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.
The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.
The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing
Highlights:
- Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
- Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI — from papers.ssrn.com by Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, and Terrence Sejnowski; via George Siemens
Abstract
In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities. This chapter offers the first neuroscience-based explanation for the observed reversal of the Flynn Effect—the recent decline in IQ scores in developed countries—linking this downturn to shifts in educational practices and the rise of cognitive offloading via AI and digital tools. Drawing on insights from neuroscience, cognitive psychology, and learning theory, we explain how underuse of the brain’s declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility. Finally, we outline policy implications for education, workforce development, and the responsible integration of AI, advocating strategies that harness technology as a complement to – rather than a replacement for – robust human knowledge.
Keywords
cognitive offloading, memory, neuroscience of learning, declarative memory, procedural memory, generative AI, Flynn Effect, education reform, schemata, digital tools, cognitive load, cognitive architecture, reinforcement learning, basal ganglia, working memory, retrieval practice, schema theory, manifolds
Mary Meeker AI Trends Report: Mind-Boggling Numbers Paint AI’s Massive Growth Picture — from ndtvprofit.com
Numbers that prove AI as a tech is unlike any other the world has ever seen.
Here are some incredibly powerful numbers from Mary Meeker’s AI Trends report, which showcase how artificial intelligence as a tech is unlike any other the world has ever seen.
- AI took only three years to reach 50% user adoption in the US; mobile internet took six years, desktop internet took 12 years, while PCs took 20 years.
- ChatGPT reached 800 million users in 17 months and 100 million in only two months, vis-à-vis Netflix’s 100 million (10 years), Instagram (2.5 years) and TikTok (nine months).
- ChatGPT hit 365 billion annual searches in two years (2024) vs. Google’s 11 years (2009)—ChatGPT 5.5x faster than Google.
Above via Mary Meeker’s AI Trend-Analysis — from getsuperintel.com by Kim “Chubby” Isenberg
How AI’s rapid rise, efficiency race, and talent shifts are reshaping the future.
The TLDR
Mary Meeker’s new AI trends report highlights an explosive rise in global AI usage, surging model efficiency, and mounting pressure on infrastructure and talent. The shift is clear: AI is no longer experimental—it’s becoming foundational, and those who optimize for speed, scale, and specialization will lead the next wave of innovation.
Also see Meeker’s actual report at:
Trends – Artificial Intelligence — from bondcap.com by Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey
The Rundown: Meta aims to release tools that eliminate humans from the advertising process by 2026, according to a report from the WSJ — developing an AI that can create ads for Facebook and Instagram using just a product image and budget.
The details:
- Companies would submit product images and budgets, letting AI craft the text and visuals, select target audiences, and manage campaign placement.
- The system will be able to create personalized ads that can adapt in real-time, like a car spot featuring mountains vs. an urban street based on user location.
- The push would target smaller companies lacking dedicated marketing staff, promising professional-grade advertising without agency fees or in-house expertise.
- Advertising is a core part of Mark Zuckerberg’s AI strategy and already accounts for 97% of Meta’s annual revenue.
Why it matters: We’re already seeing AI transform advertising through image, video, and text, but Zuck’s vision takes the process entirely out of human hands. With so much marketing flowing through FB and IG, a successful system would be a major disruptor — particularly for small brands that just want results without the hassle.
The AI-enhanced learning ecosystem: A case study in collaborative innovation — from chieflearningofficer.com by Kevin Jennings
How artificial intelligence can serve as a tool and collaborative partner in reimagining content development and management.
Learning and development professionals face unprecedented challenges in today’s rapidly evolving business landscape. According to LinkedIn’s 2025 Workplace Learning Report, 67 percent of L&D professionals report being “maxed out” on capacity, while 66 percent have experienced budget reductions in the past year.
Despite these constraints, 87 percent agree their organizations need to develop employees faster to keep pace with business demands. These statistics paint a clear picture of the pressure L&D teams face: do more, with less, faster.
This article explores how one L&D leader’s strategic partnership with artificial intelligence transformed these persistent challenges into opportunities, creating a responsive learning ecosystem that addresses the modern demands of rapid product evolution and diverse audience needs. With 71 percent of L&D professionals now identifying AI as a high or very high priority for their learning strategy, this case study demonstrates how AI can serve not merely as a tool but as a collaborative partner in reimagining content development and management.
How we use GenAI and AR to improve students’ design skills — from timeshighereducation.com by Antonio Juarez, Lesly Pliego and Jordi Rábago who are professors of architecture at Monterrey Institute of Technology in Mexico; Tomas Pachajoa is a professor of architecture at the El Bosque University in Colombia; & Carlos Hinrichsen and Marietta Castro are educators at San Sebastián University in Chile.
Guidance on using generative AI and augmented reality to enhance student creativity, spatial awareness and interdisciplinary collaboration
Blend traditional skills development with AI use
For subjects that require students to develop drawing and modelling skills, have students create initial design sketches or models manually to ensure they practise these skills. Then, introduce GenAI tools such as Midjourney, Leonardo AI and ChatGPT to help students explore new ideas based on their original concepts. Using AI at this stage broadens their creative horizons and introduces innovative perspectives, which are crucial in a rapidly evolving creative industry.
Provide step-by-step tutorials, including both written guides and video demonstrations, to illustrate how initial sketches can be effectively translated into AI-generated concepts. Offer example prompts to demonstrate diverse design possibilities and help students build confidence using GenAI.
Integrating generative AI and AR consistently enhanced student engagement, creativity and spatial understanding on our course.
How Texas is Preparing Higher Education for AI — from the74million.org by Kate McGee
TX colleges are thinking about how to prepare students for a changing workforce and an already overburdened faculty for new challenges in classrooms.
“It doesn’t matter if you enter the health industry, banking, oil and gas, or national security enterprises like we have here in San Antonio,” Eighmy told The Texas Tribune. “Everybody’s asking for competency around AI.”
It’s one of the reasons the public university, which serves 34,000 students, announced earlier this year that it is creating a new college dedicated to AI, cyber security, computing and data science. The new college, which is still in the planning phase, would be one of the first of its kind in the country. UTSA wants to launch the new college by fall 2025.
But many state higher education leaders are thinking beyond that. As AI becomes a part of everyday life in new, unpredictable ways, universities across Texas and the country are also starting to consider how to ensure faculty are keeping up with the new technology and students are ready to use it when they enter the workforce.
In the Room Where It Happens: Generative AI Policy Creation in Higher Education — from er.educause.edu by Esther Brandon, Lance Eaton, Dana Gavin, and Allison Papini
To develop a robust policy for generative artificial intelligence use in higher education, institutional leaders must first create “a room” where diverse perspectives are welcome and included in the process.
Q&A: Artificial Intelligence in Education and What Lies Ahead — from usnews.com by Sarah Wood
Research indicates that AI is becoming an essential skill to learn for students to succeed in the workplace.
Q: How do you expect to see AI embraced more in the future in college and the workplace?
I do believe it’s going to become a permanent fixture for multiple reasons. I think the national security imperative associated with AI as a result of competing against other nations is going to drive a lot of energy and support for AI education. We also see shifts across every field and discipline regarding the usage of AI beyond college. We see this in a broad array of fields, including health care and the field of law. I think it’s here to stay and I think that means we’re going to see AI literacy being taught at most colleges and universities, and more faculty leveraging AI to help improve the quality of their instruction. I feel like we’re just at the beginning of a transition. In fact, I often describe our current moment as the ‘Ask Jeeves’ phase of the growth of AI. There’s a lot of change still ahead of us. AI, for better or worse, it’s here to stay.
AI-Generated Podcasts Outperform Textbooks in Landmark Education Study — from linkedin.com by David Borish
A new study from Drexel University and Google has demonstrated that AI-generated educational podcasts can significantly enhance both student engagement and learning outcomes compared to traditional textbooks. The research, involving 180 college students across the United States, represents one of the first systematic investigations into how artificial intelligence can transform educational content delivery in real-time.
What can we do about generative AI in our teaching? — from linkedin.com by Kristina Peterson
So what can we do?
- Interrogate the Process: We can ask ourselves if we built in enough checkpoints. Steps that can’t be faked. Things like quick writes, question floods, in-person feedback, revision logs.
- Reframe AI: We can let students use AI as a partner. We can show them how to prompt better, revise harder, and build from it rather than submit it. Show them the difference between using a tool and being used by one.
- Design Assignments for Curiosity, Not Compliance: Even the best of our assignments need to adapt. Mine needs more checkpoints, more reflective questions along the way, more explanation of why my students made the choices they did.
Teachers Are Not OK — from 404media.co by Jason Koebler
The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.
One thing is clear: teachers are not OK.
…
In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear “don’t use generative AI” from a prof but then log on to the university’s Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It’s inconsistent and confusing.
I am sick to my stomach as I write this because I’ve spent 20 years developing a pedagogy that’s about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It’s demoralizing.
The 2025 Global Skills Report — from coursera.org
Discover in-demand skills and credentials trends across 100+ countries and six regions to deliver impactful industry-aligned learning programs.
GenAI adoption fuels global skill demands
In 2023, early adopters flocked to GenAI, with approximately one person per minute enrolling in a GenAI course on Coursera —a rate that rose to eight per minute in 2024. Since then, GenAI has continued to see exceptional growth, with global enrollment in GenAI courses surging 195% year-over-year—maintaining its position as one of the most rapidly growing skill domains on our platform. To date, Coursera has recorded over 8 million GenAI enrollments, with 12 learners per minute signing up for GenAI content in 2025 across our catalog of nearly 700 GenAI courses.
Driving this surge, 94% of employers say they’re likely to hire candidates with GenAI credentials, while 75% prefer hiring less-experienced candidates with GenAI skills over more experienced ones without these capabilities. Demand for roles such as AI and Machine Learning Specialists is projected to grow by up to 40% in the next four years. Mastering AI fundamentals—from prompt engineering to large language model (LLM) applications—is essential to remaining competitive in today’s rapidly evolving economy.
Countries leading our new AI Maturity Index— which highlights regions best equipped to harness AI innovation and translate skills into real-world applications—include global frontrunners such as Singapore, Switzerland, and the United States.
Insights in action
Businesses
Integrate role-specific GenAI modules into employee development programs, enabling teams to leverage AI for efficiency and innovation.
Governments
Scale GenAI literacy initiatives—especially in emerging economies—to address talent shortages and foster human-machine capabilities needed to future-proof digital jobs.
Higher education
Embed credit-eligible GenAI learning into curricula, ensuring graduates enter the workforce job-ready.
Learners
Focus on GenAI courses offering real-world projects (e.g., prompt engineering) that help build skills for in-demand roles.
AI & Schools: 4 Ways Artificial Intelligence Can Help Students — from the74million.org by W. Ian O’Byrne
AI creates potential for more personalized learning
I am a literacy educator and researcher, and here are four ways I believe these kinds of systems can be used to help students learn.
- Differentiated instruction
- Intelligent textbooks
- Improved assessment
- Personalized learning
5 Skills Kids (and Adults) Need in an AI World — from oreilly.com by Raffi Krikorian
Hint: Coding Isn’t One of Them
Five Essential Skills Kids Need (More than Coding)
I’m not saying we shouldn’t teach kids to code. It’s a useful skill. But these are the five true foundations that will serve them regardless of how technology evolves.
- Loving the journey, not just the destination
- Being a question-asker, not just an answer-getter
- Trying, failing, and trying differently
- Seeing the whole picture
- Walking in others’ shoes
The AI moment is now: Are teachers and students ready? — from iblnews.org
Day of AI Australia hosted a panel discussion on 20 May, 2025. Hosted by Dr Sebastian Sequoiah-Grayson (Senior Lecturer in the School of Computer Science and Engineering, UNSW Sydney) with panel members Katie Ford (Industry Executive – Higher Education at Microsoft), Tamara Templeton (Primary School Teacher, Townsville), Sarina Wilson (Teaching and Learning Coordinator – Emerging Technology at NSW Department of Education) and Professor Didar Zowghi (Senior Principal Research Scientist at CSIRO’s Data61).
Teachers using AI tools more regularly, survey finds — from iblnews.org
As many students face criticism and punishment for using artificial intelligence tools like ChatGPT for assignments, new reporting shows that many instructors are increasingly using those same programs.
Addendum on 5/28/25:
A Museum of Real Use: The Field Guide to Effective AI Use — from mikekentz.substack.com by Mike Kentz
Six Educators Annotate Their Real AI Use—and a Method Emerges for Benchmarking the Chats
Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.
With the right approach, a transcript becomes something else:
- A window into student decision-making
- A record of how understanding evolves
- A conversation that can be interpreted and assessed
- An opportunity to evaluate content understanding
This week, I’m excited to share something that brings that idea into practice.
Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.
This Field Guide is the first move in that direction.
AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice
Minnesota Legal Studies Research Paper No. 25-16; March 02, 2025; from papers.ssrn.com by:
Daniel Schwarcz
University of Minnesota Law School
Sam Manning
Centre for the Governance of AI
Patrick Barry
University of Michigan Law School
David R. Cleveland
University of Minnesota Law School
J.J. Prescott
University of Michigan Law School
Beverly Rich
Ogletree Deakins
Abstract
Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
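The RAG pattern the paper studies — retrieve relevant legal sources first, then ground the model’s answer in them — can be sketched minimally. The scoring below is a toy bag-of-words overlap over an invented mini-corpus; production tools like Vincent AI use dense embeddings and real legal search, and the case names here are fictitious.

```python
from collections import Counter

# Hypothetical mini-corpus of case summaries (fictitious names).
CASES = {
    "Smith v. Jones": "breach of contract damages for late delivery of goods",
    "Doe v. Acme": "negligence claim arising from a workplace injury",
    "Roe v. Realty Co": "landlord tenant dispute over security deposit return",
}

def retrieve(query: str, k: int = 1) -> list:
    """Rank cases by word overlap with the query (toy retrieval step)."""
    q = Counter(query.lower().split())
    scored = sorted(
        CASES,
        key=lambda name: sum((q & Counter(CASES[name].lower().split())).values()),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the generation step: the model must answer from retrieved sources."""
    context = "\n".join(f"{n}: {CASES[n]}" for n in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

print(retrieve("damages for breach of contract"))  # -> ["Smith v. Jones"]
```

Grounding the prompt in retrieved sources is precisely why the study found RAG-aided participants hallucinated no more than participants working without AI.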
Guest post: How technological innovation can boost growth — from legaltechnology.com by Caroline Hill
One key change is the growing adoption of technology within legal service providers, and this is transforming the way firms operate and deliver value to clients.
The legal services sector’s digital transformation is gaining momentum, driven both by client expectations and by the potential for operational efficiency. With the right support, legal firms can innovate through tech adoption and remain competitive to deliver strong client outcomes and long-term growth.
AI Can Do Many Tasks for Lawyers – But Be Careful — from nysba.org by Rebecca Melnitsky
Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.
The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.
For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.
Fresh Voices on Legal Tech with Nikki Shaver — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Nikki Shaver
Determining which legal technology is best for your law firm can seem like a daunting task, so Legaltech Hub does the hard work for you! In another edition of Fresh Voices, Dennis and Tom talk with Nikki Shaver, CEO at Legaltech Hub, about her in-depth knowledge of technology and AI trends. Nikki shares what effective tech strategies should look like for attorneys and recommends innovative tools for maintaining best practices in modern law firms. Learn more at legaltechnologyhub.com.
AI for in-house legal: 2025 predictions — from deloitte.com
Our expectations for AI engagement and adoption in the legal market over the coming year.
AI will continue to transform in-house legal departments in 2025
As we enter 2025, over two-thirds of organisations plan to increase their Generative AI (GenAI) investments, providing legal teams with significant executive support and resources to further develop these capabilities. This presents a substantial opportunity for legal departments, particularly as GenAI technology continues to advance at an impressive pace. We make five predictions for AI engagement and adoption in the legal market over the coming year and beyond.
Navigating The Fine Line: Redefining Legal Advice In The Age Of Tech With Erin Levine And Quinten Steenhuis — from abovethelaw.com by Olga V. Mack
The definition of ‘practicing law’ is outdated and increasingly irrelevant in a tech-driven world. Should the line between legal advice and legal information even exist?
Practical Takeaways for Legal Leaders
- Use Aggregated Data: Providing consumers with benchmarks (e.g., “90% of users in your position accepted similar settlements”) empowers them without giving direct legal advice.
- Train and Supervise AI Tools: AI works best when it’s trained on reliable, localized data and supervised by legal professionals.
- Partner with Courts: As Quinten pointed out, tools built in collaboration with courts often avoid UPL pitfalls. They’re also more likely to gain the trust of both regulators and consumers.
- Embrace Transparency: Clear disclaimers like “This is not legal advice” go a long way in building consumer trust and meeting ethical standards.
Google I/O 2025: From research to reality — from blog.google
Here’s how we’re making AI more helpful with Gemini.
- Google Beam
- Project Astra
- Project Mariner
- Personalization
- AI Mode
- Gemini 2.5
- Gemini app
- Generative media
Google I/O 2025 LIVE — all the details about Android XR smart glasses, AI Mode, Veo 3, Gemini, Google Beam and more — from tomsguide.com by Philip Michaels
Google’s annual conference goes all in on AI
With a running time of 2 hours, Google I/O 2025 leaned heavily into Gemini and new models that make the assistant work in more places than ever before. Despite focusing the majority of the keynote on Gemini, Google saved its most ambitious and anticipated announcement for the end with its big Android XR smart glasses reveal.
Shockingly, very little time was spent on Android 16. Most of its Android 16-related news, like the redesigned Material 3 Expressive interface, was announced during the Android Show live stream last week — which explains why Google I/O 2025 was such an AI-heavy showcase.
That’s because Google carved out most of the keynote to dive deeper into Gemini, its new models, and integrations with other Google services. There’s clearly a lot to unpack, so here’s all the biggest Google I/O 2025 announcements.
Our vision for building a universal AI assistant— from blog.google
We’re extending Gemini to become a world model that can make plans and imagine new experiences by simulating aspects of the world.
Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that’s intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device.
By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action.
Learn in newer, deeper ways with Gemini — from blog.google.com by Ben Gomes
We’re infusing LearnLM directly into Gemini 2.5 — plus more learning news from I/O.
At I/O 2025, we announced that we’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles. Educators and pedagogy experts preferred Gemini 2.5 Pro over other offerings across a range of learning scenarios, both for supporting a user’s learning goals and on key principles of good pedagogy.
Gemini gets more personal, proactive and powerful — from blog.google.com by Josh Woodward
It’s your turn to create, learn and explore with an AI assistant that’s starting to understand your world and anticipate your needs.
Here’s what we announced at Google IO:
- Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.
- Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.
- Veo 3, our new, state-of-the-art video generation model, comes built in and is the first in the world to have native support for sound effects, background noises and dialogue between characters.
- Deep Research and Canvas are getting their biggest updates yet, unlocking new ways to analyze information, create podcasts and vibe code websites and apps.
- Gemini is coming to Chrome, so you can ask questions while browsing the web.
- Students around the world can easily make interactive quizzes, and college students in the U.S., Brazil, Indonesia, Japan and the UK are eligible for a free school year of the Google AI Pro plan.
- Google AI Ultra, a new premium plan, is for the pioneers who want the highest rate limits and early access to new features in the Gemini app.
- 2.5 Flash has become our new default model, and it blends incredible quality with lightning fast response times.
Fuel your creativity with new generative media models and tools — from by Eli Collins
Introducing Veo 3 and Imagen 4, and a new tool for filmmaking called Flow.
AI in Search: Going beyond information to intelligence
We’re introducing new AI features to make it easier to ask any question in Search.
AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving a more than 10% increase in usage of Google for the types of queries that show AI Overviews.
This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.
In this story:
- AI Mode in Search
- Deep Search
- Live capabilities
- Agentic capabilities
- Shopping
- Personal context
- Custom charts
AI prompting secrets EXPOSED — from theneurondaily.com by Grant Harvey
Here are the three best prompting guides:
- Anthropic’s “Prompt Engineering Overview” is a free masterclass that’s worth its weight in gold. Their “constitutional AI prompting” section helped us create a content filter that actually works—unlike the one that kept flagging our coffee bean reviews as “inappropriate.” Apparently “rich body” triggered something…
- OpenAI’s “Cookbook” is like having a Michelin-star chef explain cooking—simple for beginners, but packed with pro techniques. Their JSON formatting examples saved us 3 hours of debugging last week…
- Google’s “Prompt Design Strategies” breaks down complex concepts with clear examples. Their before/after gallery showing how slight prompt tweaks improve results made us rethink everything we knew about getting quality outputs.
Pro tip: Save these guides as PDFs before they disappear behind paywalls. The best AI users keep libraries of these resources for quick reference.
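One pattern all three guides cover is asking the model for structured JSON output and then validating what comes back. Here is a generic sketch of that pattern; the schema, the prompt wording, and the simulated reply are invented for illustration, and in practice the reply would come from an API call.

```python
import json

# Generic sketch of the JSON-output prompting pattern covered in the
# prompting guides above. The schema and the model's reply are invented
# for illustration; no real API is called here.

SCHEMA_HINT = (
    "Reply with ONLY a JSON object of the form "
    '{"sentiment": "positive" | "negative", "confidence": 0.0-1.0}.'
)

def build_prompt(review):
    """Prepend the schema instruction so the reply is machine-parseable."""
    return f"{SCHEMA_HINT}\nReview: {review}"

def parse_reply(reply):
    """Validate the model's reply against the expected schema."""
    data = json.loads(reply)
    if data["sentiment"] not in {"positive", "negative"}:
        raise ValueError("unexpected sentiment label")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

# Simulated model reply standing in for an actual API response.
fake_reply = '{"sentiment": "positive", "confidence": 0.92}'
result = parse_reply(fake_reply)
```

Validating the reply rather than trusting it is the point: models occasionally return malformed or off-schema JSON, and catching that in `parse_reply` is cheaper than debugging it downstream.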
My personal review of 10+ AI agents and what actually works — from aiwithallie.beehiiv.com by Allie K. Miller
The AI Agents Report Card you wish your boss gave you.
What you’ll learn in this newsletter:
- Which AI agents actually deliver value right now
- Where even the best agents still fall embarrassingly short
- The surprising truth about those sleek, impressive interfaces
- The economics of delegating to AI (and when it’s worth the premium)
- Five practical takeaways to guide your AI strategy
Employees Keep Their AI-Driven Productivity a Secret — from hrotoday.com; via The Neuron
“To address this, organizations should consider building a sustainable AI governance model, prioritizing transparency, and tackling the complex challenge of AI-fueled imposter syndrome through reinvention. Employers who fail to approach innovation with empathy and provide employees with autonomy run the risk of losing valuable staff and negatively impacting employee productivity.”
Key findings from the report include the following:
- Employees are keeping their productivity gains a secret from their employers. …
- In-office employees may still log in remotely after hours. …
- Younger workers are more likely to switch jobs to gain more flexibility.
AI discovers new math algorithms — from by Zach Mink & Rowan Cheung
PLUS: Anthropic reportedly set to launch new Sonnet, Opus models
The Rundown: Google just debuted AlphaEvolve, a coding agent that harnesses Gemini and evolutionary strategies to craft algorithms for scientific and computational challenges — driving efficiency inside Google and solving historic math problems.
…
Why it matters: Yesterday, we had OpenAI’s Jakub Pachocki saying AI has shown “significant evidence” of being capable of novel insights, and today Google has taken that a step further. Math plays a role in nearly every aspect of life, and AI’s pattern and algorithmic strengths look ready to uncover a whole new world of scientific discovery.
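AlphaEvolve's internals aren't public in detail, but the evolutionary strategy it reportedly uses can be illustrated with a toy loop: propose candidates, score them, keep the best, and mutate. Here candidates are just numbers scored against a target; the real system scores Gemini-generated programs instead.

```python
import random

# Toy illustration of an evolutionary search loop, the general strategy
# AlphaEvolve reportedly applies to candidate programs. Candidates here
# are numbers scored against a target; real systems score generated code.

def score(candidate, target=42.0):
    """Fitness function: higher is better, 0 is a perfect match."""
    return -abs(candidate - target)

def evolve(generations=200, population_size=20, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-100, 100) for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fittest half (elitism preserves the best-so-far).
        population.sort(key=score, reverse=True)
        survivors = population[: population_size // 2]
        # Mutate survivors with small Gaussian noise to refill the pool.
        children = [s + rng.gauss(0, 1) for s in survivors]
        population = survivors + children
    return max(population, key=score)

best = evolve()
```

The loop is deliberately simple; the hard parts in a system like AlphaEvolve are the candidate representation (programs, not numbers) and an automated evaluator that can score them reliably.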
AI agents are set to explode: Reports forecast 45% annual growth rate — from hrexecutive.com by Jill Barth
At the recent HR Executive and Future Talent Council event at Bentley University near Boston, I talked with Top 100 HR Tech Influencer Joey Price about what he’s hearing from HR leaders. Price is president and CEO of Jumpstart HR and executive analyst at Aspect43, Jumpstart HR’s HR tech research division, and author of a valuable new book, The Power of HR: How to Make an Organizational Impact as a People Professional.
This puts him solidly at the center of HR’s most relevant conversations. Price described the curiosity he’s hearing from many HR leaders about AI agents, which have become increasingly prominent in recent months.
From The Neuron’s posting entitled, “20-Somethings Break AI”:
Anthropic’s Claude AI just pulled a classic AI move: it completely fabricated a legal citation, and then—plot twist—the lawyers actually used it in court. Awkward.
The scene: A Northern California courtroom, where Anthropic’s lawyers are battling music publishers. Claude decides to get creative and generates a citation out of thin air—complete with a made-up title and nonexistent authors. And get this: their “manual citation check” didn’t even catch it.
It’s like the AI equivalent of writing your homework, but then your dog actually eats it—except in this case, the dog actually wrote the homework, and it doesn’t make any sense. Nor does this metaphor, which is increasingly getting away from us…
This is far from the first time AI has fumbled in the legal arena, and it’s not exclusive to Claude. Earlier this week, another judge absolutely roasted law firms for submitting “bogus AI-generated research.” We seemingly see these headlines every week; here are all the ones Perplexity and OpenAI Deep Research could dig up (remember, these too could have hallucinations in them!).
Anyway, talk about confidence, right? Feels like more of a lawyer problem than an AI problem. Consider this your friendly reminder to always check your work. Twice.