Eight Legal Tech Trends Set To Impact Law Firms In 2025 — from forbes.com by Daniel Farrar

Trends To Watch This Year

1. A Focus On Client Experience And Technology-Driven Client Services
2. Evolution Of Pricing Models In Legal Services
3. Cloud Computing, Remote Work, Globalization And Cross-Border Legal Services
4. Legal Analytics And Data-Driven Decision Making
5. Automation Of Routine Legal Tasks
6. Integration Of Artificial Intelligence
7. AI In Mergers And Acquisitions
8. Cybersecurity And Data Privacy


The Future of Legal Tech Jobs: Trends, Opportunities, and Skills for 2025 and Beyond — from jdjournal.com by Maria Lenin Laus

This guide explores the top legal tech jobs in demand, key skills for success, hiring trends, and future predictions for the legal industry. Whether you’re a lawyer, law student, IT professional, or business leader, this article will help you navigate the shifting terrain of legal tech careers.

Top Legal Tech Hiring Trends for 2025

1. Law Firms Are Prioritizing Tech Skills
Over 65% of law firms are hiring legal tech experts over traditional attorneys.
AI implementation, automation, and analytics skills are now must-haves.
2. In-House Legal Teams Are Expanding Legal Tech Roles
77% of corporate legal teams say tech expertise is now mandatory.
More companies are investing in contract automation and legal AI tools.
3. Law Schools Are Adding Legal Tech Courses
Institutions like Harvard and Stanford now offer AI and legal tech curriculums.
Graduates with legal tech skills gain a competitive advantage.


Legal tech predictions for 2025: What’s next in legal innovation? — from jdsupra.com

  1. Collaboration tools reshape communication and documentation
  2. From chatbots to ‘AI agents’: The next evolution
  3. Governance AI frameworks take center stage
  4. Local governments drive AI accountability
  5. Continuously rising legal fees and ROI become a primary focus

Meet Ivo, The Legal AI That Will Review Your Contracts — from forbes.com by David Prosser

Contract reviews and negotiations are the bread-and-butter work of many corporate lawyers, but artificial intelligence (AI) promises to transform every aspect of the legal profession. Legaltech start-up Ivo, which is today announcing a $16 million Series A funding round, wants to make manual contract work a thing of the past.

“We help in-house legal teams to red-line and negotiate contract agreements more quickly and easily,” explains Min-Kyu Jung, CEO and co-founder of Ivo. “It’s a challenge that couldn’t be solved well by AI until relatively recently, but the evolution of generative AI has made it possible.”


A&O Shearman, Cooley Leading Legal Tech Investment at Law Firms — from news.bloomberglaw.com by Evan Ochsner

  • Leading firms are investing their own resources in legal tech
  • Firms seek to tailor tech development to specific functions
 

DeepSeek: How China’s AI Breakthrough Could Revolutionize Educational Technology — from nickpotkalitsky.substack.com by Nick Potkalitsky
Can DeepSeek’s 90% efficiency boost make AI accessible to every school?

The most revolutionary aspect of DeepSeek for education isn’t just its cost—it’s the combination of open-source accessibility and local deployment capabilities. As Azeem Azhar notes, “R-1 is open-source. Anyone can download and run it on their own hardware. I have R1-8b (the second smallest model) running on my Mac Mini at home.”
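For readers who want to try local deployment themselves, here is a minimal sketch using Python's standard library against Ollama's default local API. The endpoint and the `deepseek-r1:8b` model tag are assumptions based on common Ollama usage, not details from the article:

```python
import json
import urllib.request

# Ollama's default local generation endpoint (assumed setup, not from the article).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming generation request for a locally hosted model."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server and `ollama pull deepseek-r1:8b`):
#   print(ask_local_model("Explain photosynthesis to a 5th grader."))
```

Because everything runs on local hardware, no student data leaves the machine, which is what makes the privacy story for schools plausible.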

Real-time Learning Enhancement

  • AI tutoring networks that collaborate to optimize individual learning paths
  • Immediate, multi-perspective feedback on student work
  • Continuous assessment and curriculum adaptation

The question isn’t whether this technology will transform education—it’s how quickly institutions can adapt to a world where advanced AI capabilities are finally within reach of every classroom.


Over 100 AI Tools for Teachers — from educatorstechnology.com by Med Kharbach, PhD

I know through your feedback on my social media and blog posts that several of you have legitimate concerns about the impact of AI in education, especially those related to data privacy, academic dishonesty, AI dependence, loss of creativity and critical thinking, plagiarism, to mention a few. While these concerns are valid and deserve careful consideration, it’s also important to explore the potential benefits AI can bring when used thoughtfully.

Tools such as ChatGPT and Claude are like smart research assistants that are available 24/7 to support you with all kinds of tasks from drafting detailed lesson plans, creating differentiated materials, generating classroom activities, to summarizing and simplifying complex topics. Likewise, students can use them to enhance their learning by, for instance, brainstorming ideas for research projects, generating constructive feedback on assignments, practicing problem-solving in a guided way, and much more.

The point here is that AI is here to stay and expand, and we better learn how to use it thoughtfully and responsibly rather than avoid it out of fear or skepticism.


Beth’s posting links to:

 


Derek’s posting on LinkedIn


From Theory to Practice: How Generative AI is Redefining Instructional Materials — from edtechinsiders.substack.com by Alex Sarlin
Top trends and insights from The Edtech Insiders Generative AI Map research process about how Generative AI is transforming Instructional Materials

As part of our updates to the Edtech Insiders Generative AI Map, we’re excited to release a new mini market map and article deep dive on Generative AI tools that are specifically designed for Instructional Materials use cases.

In our database, the Instructional Materials use case category encompasses tools that:

  • Assist educators by streamlining lesson planning, curriculum development, and content customization
  • Enable educators or students to transform materials into alternative formats, such as videos, podcasts, or other interactive media, in addition to leveraging gaming principles or immersive VR to enhance engagement
  • Empower educators or students to transform text, video, slides or other source material into study aids like study guides, flashcards, practice tests, or graphic organizers
  • Engage students through interactive lessons featuring historical figures, authors, or fictional characters
  • Customize curriculum to individual needs or pedagogical approaches
  • Empower educators or students to quickly create online learning assets and courses

On a somewhat-related note, also see:


 

DeepSeek hits the scene — MUCH too early to say how this open-source platform will play out here in the United States. Things are tense between the U.S. and China.

10 WILD Deepseek demos — from theneurondaily.com

Over the last week, pretty much everyone in the AI space has been losing their minds over Deepseek R1. The open source community has been loving it, the closed source tech giants have been less than loving it, and even the mainstream media is starting to pick up on how last week’s R1 launch was a big deal.

We’ve been trying to understand just how powerful R1 really is, so we rounded up everything we could find that shows off just what this little AI side project can do.

Here’s some WILD demos of what people have done with Deepseek R1 so far:



Is DeepSeek the new DeepMind? — from ai-supremacy.com by Michael Spencer
AI supremacy isn’t just about compute or U.S. leadership, it’s about how you work to make models more efficient and improve their accessibility for everyone.

Over the last week especially, but over the last month generally, the AI Zeitgeist has been flooded with takes on what DeepSeek’s R1 means for the larger ecosystem and the future of AI as a whole. See some articles I’m reading on DeepSeek here (Google Doc).

It’s an important moment insofar as everything from export controls to AI infrastructure, capex spend, and AI talent moats is being called into question.



 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to policymaking that make institutions hesitant to produce policy, or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their own areas but for other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true of every policy: it rarely comes fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post.
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.


 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects; at this level the model itself is capable enough to be dangerous if misused. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
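That if/then logic can be pictured as a simple lookup: capability-evaluation results map to a safety level, and the level dictates the required response. A toy sketch only — the tier names follow the list above, but the trigger flags and responses here are invented for illustration, not Anthropic's actual criteria:

```python
# Toy sketch of an if/then responsible-scaling check.
# Tier names follow the ASL list above; triggers and responses are
# hypothetical, invented purely for illustration.
ASL_POLICY = {
    "ASL-2": "standard deployment safeguards",
    "ASL-3": "strict security and misuse filtering before deployment",
    "ASL-4": "interpretability audits; deployment paused until behavior is verified",
}

def required_response(eval_results: dict) -> str:
    """Map capability-evaluation flags to the response that tier demands."""
    if eval_results.get("evades_detection") or eval_results.get("assists_state_actors"):
        level = "ASL-4"  # model can deceive testers or uplift state actors
    elif eval_results.get("uplifts_cbrn") or eval_results.get("uplifts_cyber"):
        level = "ASL-3"  # meaningful uplift to non-state actors
    else:
        level = "ASL-2"  # no significant risk beyond existing search engines
    return f"{level}: {ASL_POLICY[level]}"
```

The point of the framework is exactly this shape: the response is decided in advance, so a dangerous evaluation result triggers the clampdown automatically rather than after debate.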



Should You Still Learn to Code in an A.I. World? — from nytimes.com by
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:


 

How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland
Two powerful workflows that unlock everything else. Intro: Golden Age of AI Tools and AI agent frameworks begins in 2025.

What is Google Learn about?
Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.

Is Generative AI leading to a new take on Educational technology? It certainly appears promising heading into 2025.

The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.

 

Five key issues to consider when adopting an AI-based legal tech — from legalfutures.co.uk by Mark Hughes

As more of our familiar legal resources have started to embrace a generative AI overhaul, and new players have come to the market, there are some key issues that your law firm needs to consider when adopting an AI-based legal tech.

  • Licensing
  • Data protection
  • The data sets
  • …and others

Knowable Introduces Gen AI Tool It Says Will Revolutionize How Companies Interact with their Contracts — from lawnext.com by Bob Ambrogi

Knowable, a legal technology company specializing in helping organizations bring order and organization to their executed agreements, has announced Ask Knowable, a suite of generative AI-powered tools aimed at transforming how legal teams interact with and understand what is in their contracts.

Released today as a commercial preview and set to launch for general availability in March 2025, the feature marks a significant step forward in leveraging large language models to address the complexities of contract management, the company says.


The Global Legal Post teams up with LexisNexis to explore challenges and opportunities of Gen AI adoption — from globallegalpost.com by
Series of articles will investigate key criteria to consider when investing in Gen AI

The Global Legal Post has teamed up with LexisNexis to help inform readers’ decision-making in the selection of generative AI (Gen AI) legal research solutions.

The Generative AI Legal Research Hub in association with LexisNexis will host a series of articles exploring the key criteria law firms and legal departments should consider when seeking to harness the power of Gen AI to improve the delivery of legal services.


Leveraging AI to Grow Your Legal Practice — from americanbar.org

Summary

  • AI-powered tools like chat and scheduling meet clients’ demand for instant, personalized service, improving engagement and satisfaction.
  • Firms using AI see up to a 30% increase in lead conversion, cutting client acquisition costs and maximizing marketing investments.
  • AI streamlines processes, speeds up response times, and enhances client engagement—driving growth and long-term client retention.

How a tech GC views AI-enabled efficiencies and regulation — from legaldive.com by Justin Bachman
PagerDuty’s top in-house counsel sees legal AI tools as a way to scale resources without adding headcount while focusing lawyers on their high-value work.


Innovations in Legal Practice: How Tim Billick’s Firm Stays Ahead with AI and Technology — from techtimes.com by Elena McCormick

Enhancing Client Service through Technology
Beyond internal efficiency, Billick’s firm utilizes technology to improve client communication and engagement. By adopting client-facing AI tools, such as chatbots for routine inquiries and client portals for real-time updates, Practus makes legal processes more transparent and accessible to its clients. According to Billick, this responsiveness is essential in IP law, where clients often need quick updates and answers to time-sensitive questions about patents, trademarks, and licensing agreements.

AI-driven client management software is also part of the firm’s toolkit, enabling Billick and his team to track each client’s case progress and share updates efficiently. The firm’s technology infrastructure supports clients from various sectors, including engineering, software development, and consumer products, tailoring case workflows to meet unique needs within each industry. “Clients appreciate having immediate access to their case status, especially in industries where timing is crucial,” Billick shares.


New Generative AI Study Highlights Adoption, Use and Opportunities in the Legal Industry — from prnewswire.com by Relativity

CHICAGO, Nov. 12, 2024 /PRNewswire/ — Relativity, a global legal technology company, today announced findings from the IDC InfoBrief, Generative AI in Legal 2024, commissioned by Relativity. The study uncovers the rapid increase of generative AI adoption in the legal field, examining how legal professionals are navigating emerging challenges and seizing opportunities to drive legal innovation.

The international study surveyed attorneys, paralegals, legal operations professionals and legal IT professionals from law firms, corporations and government agencies. Respondents were located in Australia, Canada, Ireland, New Zealand, the United Kingdom and the United States. The data uncovered important trends on how generative AI has impacted the legal industry and how legal professionals will use generative AI in the coming years.

 

A Code-Red Leadership Crisis: A Wake-Up Call for Talent Development — from learningguild.com by Dr. Arika Pierce Williams

This company’s experience offers three crucial lessons for other organizational leaders who may be contemplating cutting or reducing talent development investments in their 2025 budgets to focus on “growth.”

  1. Leadership development isn’t a luxury – it’s a strategic imperative…
  2. Succession planning must be an ongoing process, not a reactive measure…
  3. The cost of developing leaders is far less than the cost of not having them when you need them most…

Also from The Learning Guild, see:

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

  1. AI-driven course design
  2. Hyper-personalized content curation
  3. Immersive scenario-based training
  4. Smart chatbots
  5. Wearable devices
 

Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.

After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.

They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.

Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.


A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester
Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.

The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.

  1. Understand the impact…
  2. Understand the different types of AI tools…
  3. Focus on institutional data and knowledge repositories…

Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.

Also from Educause, see:


AI school opens – learners are not good or bad but fast and slow — from donaldclarkplanb.blogspot.com by Donald Clark

That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, driven by the need to timetable a one-to-many classroom model.

For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.


Agentic AI Named Top Tech Trend for 2025 — from campustechnology.com by David Ramel

Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.

More realistic challenges that the firm has listed elsewhere include:

    • Agentic AI proliferating without governance or tracking;
    • Agentic AI making decisions that are not trustworthy;
    • Agentic AI relying on low-quality data;
    • Employee resistance; and
    • Agentic-AI-driven cyberattacks enabling “smart malware.”

Also from campustechnology.com, see:


Three items from edcircuit.com:


All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly
Looking for specific solutions at the conference exhibit hall, with an educator focus

Here are some notable trends:

  • Alignment with campus policies: …
  • Choose your own AI adventure: …
  • Integrate AI throughout a workflow: …
  • Moving from prompt engineering to bot building: …
  • More complex problem-solving: …


Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment–i.e., rogue actors who use fake or stolen identities with the intent of stealing financial aid funding with no intention of completing coursework.

The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.


 

 

From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.
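
As a quick sanity check on those figures (a throwaway sketch of my own, not from the article):

```python
# Back-of-envelope check of the commission figures cited above.
median_price = 416_700  # median home price from the article

for rate in (0.02, 0.025, 0.03):
    commission = median_price * rate
    print(f"{rate:.1%} commission: ${commission:,.0f}")
```

At 2% the commission is roughly $8,300 and at 3% roughly $12,500, so "well over $10,000" holds at the top of that range.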


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

A major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response to it that isn’t as long as or longer than the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 

Voice and Trust in Autonomous Learning Experiences — from learningguild.com by Bill Brandon

This article seeks to apply some lessons from brand management to learning design at a high level. Throughout the rest of this article, it is essential to remember that the context is an autonomous, interactive learning experience. The experience is created adaptively by Gen AI or (soon enough) by agents, not by rigid scripts. It may be that an AI will choose to present prewritten texts or prerecorded videos from a content library according to the human users’ responses or questions. Still, the overall experience will be different for each user. It will be more like a conversation than a book.

In summary, while AI chatbots have the potential to enhance learning experiences, their acceptance and effectiveness depend on several factors, including perceived usefulness, ease of use, trust, relational factors, perceived risk, and enjoyment. 

Personalization and building trust are essential for maintaining user engagement and achieving positive learning outcomes. The right “voice” for autonomous AI or a chatbot can enhance trust by making interactions more personal, consistent, and empathetic.

 

Legal budgets will get an AI-inspired makeover in 2025: survey — from legaldive.com by Justin Bachman
Nearly every general counsel is budgeting to add generative AI tools to their departments – and they’re all expecting to realize efficiencies by doing so.

Dive Brief:

  • Nearly all general counsel say their budgets are up slightly after wrestling with widespread cuts last year. And most of them, 61%, say they expect slightly larger budgets next year as well, an average of 5% more, according to the 2025 In-House Legal Budgeting Report from Axiom and Wakefield Research. Technology was ranked as the top in-house investment priority for both 2024 and 2025 for larger companies.
  • Legal managers predict their companies will boost investment on technology and real estate/facilities in 2025, while reducing outlays for human resources and mergers and acquisition activity, according to the survey. This mix of changing priorities might disrupt legal budgets.
  • Among the planned legal tech spending, the top three areas for investment are virtual legal assistants/AI-powered chatbots (35%); e-billing and spend-management software (31%); and contract management platforms (30%).
 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
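
That degradation is often called “model collapse.” As a deliberately crude, deterministic sketch of one mechanism (my own illustration, not the research the article cites): if each “generation” refits a Gaussian’s variance using the n-sample maximum-likelihood estimate of the previous generation’s output, the spread shrinks by a factor of (n-1)/n per generation in expectation.

```python
# Deterministic toy model of recursive training on model output:
# refitting a Gaussian's variance with the n-sample MLE shrinks it
# by (n - 1) / n per generation in expectation, so diversity decays.
n = 50            # samples per generation
variance = 1.0    # generation 0: the "real data" variance
history = [variance]

for generation in range(200):
    variance *= (n - 1) / n   # expected shrinkage from each refit
    history.append(variance)

print(f"variance after 200 generations: {history[-1]:.4f}")
```

After 200 generations the expected variance has fallen to under 2% of the original: the distribution narrows toward a single mode, a rough analogue of an AI losing the diversity of its training data.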


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT’s AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
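
As a hypothetical sketch of how a single entry in such a repository might be modeled (the field names and example values are mine, not MIT’s actual schema):

```python
# Hypothetical data model for one repository entry; the field names
# are illustrative only, not the repository's real schema.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str        # the risk itself
    source_framework: str   # which of the 43 frameworks it came from
    quote: str              # supporting quote and page number
    causal_how: str         # Causal Taxonomy: how the risk occurs
    causal_when: str        # ...when it occurs
    causal_why: str         # ...why it occurs
    domain: str             # one of the 7 domains
    subdomain: str          # one of the 23 subdomains

example = AIRisk(
    description="System presents false claims as fact",
    source_framework="(example framework)",
    quote="p. 12",
    causal_how="AI system behavior",
    causal_when="Post-deployment",
    causal_why="Unintentional",
    domain="Misinformation",
    subdomain="False or misleading information",
)
print(example.domain, "/", example.subdomain)
```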

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out, [starting on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”; it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
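
To make two of those categories concrete, here is a minimal, hypothetical sketch (the rules and names are illustrative only, not drawn from AWS’s guidance):

```python
# Illustrative toy versions of two agent types from the taxonomy.
# A simple reflex agent maps the current percept directly to an action;
# a goal-based agent compares candidate actions against a goal state.

def simple_reflex_agent(percept: str) -> str:
    """Condition-action rules: react to the percept, no lookahead."""
    rules = {
        "password_expired": "reset_password",
        "account_locked": "unlock_account",
    }
    return rules.get(percept, "escalate_to_human")

def goal_based_agent(goal: int, actions: dict[str, int]) -> str:
    """Compare options and pick the one predicted to land closest to the goal."""
    # `actions` maps each action name to its predicted resulting state.
    return min(actions, key=lambda a: abs(actions[a] - goal))

print(simple_reflex_agent("password_expired"))   # reacts by fixed rule
print(goal_based_agent(10, {"walk": 3, "drive": 9}))  # picks "drive"
```

The reflex agent never weighs outcomes, which is why Gartner-style taxonomies reserve it for rote tasks like password resets, while goal- and utility-based agents are the ones that compare pathways.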

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars’ Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I am mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those topics that feels really important, but probably only to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 
© 2025 | Daniel Christian