Half A Million Students Given ChatGPT As CSU System Makes AI History — from forbes.com by Dan Fitzpatrick

The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date.

The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally.

“We’re still in the early stages of AI adoption in education, and it is critical that the entire ecosystem—education systems, technologists, educators, and governments—work together to ensure that all students globally have access to AI and develop the skills to use it responsibly.”

Leah Belsky, VP and general manager of education at OpenAI.




HOW educators can use GenAI – where to start and how to progress — from aliciabankhofer.substack.com by Alicia Bankhofer
Part 3 of my series: Teaching and Learning in the AI Age

As you read through these use cases, you’ll notice that each one addresses multiple tasks from our list above.

1. Researching a topic for a lesson
2. Creating tasks for practice
3. Creating sample answers
4. Generating ideas
5. Designing lesson plans
6. Creating tests
7. Using AI in virtual classrooms
8. Creating images
9. Creating worksheets
10. Correcting and giving feedback


 

Also see:

Introducing deep research — from openai.com
An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next.

[On 2/2/25 we launched] deep research in ChatGPT, a new agentic capability that conducts multi-step research on the internet for complex tasks. It accomplishes in tens of minutes what would take a human many hours.

Deep research is OpenAI’s next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst.

Comments/information per The Rundown AI:
The Rundown: OpenAI just launched Deep Research, a new ChatGPT feature that conducts extensive web research on complex topics and delivers detailed reports with citations in under 30 minutes.

The details:

  • The system uses a specialized version of o3 to analyze text, images, and PDFs across multiple sources, producing comprehensive research summaries.
  • Initial access is limited to Pro subscribers ($200/mo) with 100 queries/month, but if safety metrics remain stable, it will expand to Plus and Team users within weeks.
  • Research tasks take between 5 and 30 minutes to complete, with users receiving a list of clarifying questions to start and notifications when results are ready.
  • Deep Research achieved a 26.6% score on Humanity’s Last Exam, significantly outperforming other AI models like Gemini Thinking (6.2%) and GPT-4o (3.3%).

Why it matters: ChatGPT excels at quick, instant answers, but Deep Research represents the first major consumer attempt at tackling complex tasks that take humans days. Combined with the release of Operator, the landscape is shifting towards longer thinking with autonomous actions — and better results to show for it.

Also see:

The End of Search, The Beginning of OpenAI’s Deep Research — from theaivalley.com by Barsee

The quality of citations is also genuinely advanced. Unlike traditional AI-generated sources, which are prone to hallucinations, Deep Research provides legitimate academic references. Clicking a citation often takes users directly to the relevant highlighted text.

In a demo, the agent generated a comprehensive report on iOS and Android app market trends, showcasing its ability to tackle intricate subjects with accuracy.


Top 13 AI insights — from theneurondaily.com

Which links to and discusses Andrej Karpathy’s video at:


This is a general audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their “psychology”, and how to get the best use of them in practical applications. I have one “Intro to LLMs” video already from about a year ago, but that is just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.

 

Eight Legal Tech Trends Set To Impact Law Firms In 2025 — from forbes.com by Daniel Farrar

Trends To Watch This Year

1. A Focus On Client Experience And Technology-Driven Client Services
2. Evolution Of Pricing Models In Legal Services
3. Cloud Computing, Remote Work, Globalization And Cross-Border Legal Services
4. Legal Analytics And Data-Driven Decision Making
5. Automation Of Routine Legal Tasks
6. Integration Of Artificial Intelligence
7. AI In Mergers And Acquisitions
8. Cybersecurity And Data Privacy


The Future of Legal Tech Jobs: Trends, Opportunities, and Skills for 2025 and Beyond — from jdjournal.com by Maria Lenin Laus

This guide explores the top legal tech jobs in demand, key skills for success, hiring trends, and future predictions for the legal industry. Whether you’re a lawyer, law student, IT professional, or business leader, this article will help you navigate the shifting terrain of legal tech careers.

Top Legal Tech Hiring Trends for 2025

1. Law Firms Are Prioritizing Tech Skills
Over 65% of law firms are hiring legal tech experts over traditional attorneys.
AI implementation, automation, and analytics skills are now must-haves.
2. In-House Legal Teams Are Expanding Legal Tech Roles
77% of corporate legal teams say tech expertise is now mandatory.
More companies are investing in contract automation and legal AI tools.
3. Law Schools Are Adding Legal Tech Courses
Institutions like Harvard and Stanford now offer AI and legal tech curriculums.
Graduates with legal tech skills gain a competitive advantage.


Legal tech predictions for 2025: What’s next in legal innovation? — from jdsupra.com

  1. Collaboration tools reshape communication and documentation
  2. From chatbots to ‘AI agents’: The next evolution
  3. Governance AI frameworks take center stage
  4. Local governments drive AI accountability
  5. Continuously growing legal fees and ROI become a primary focus

Meet Ivo, The Legal AI That Will Review Your Contracts — from forbes.com by David Prosser

Contract reviews and negotiations are the bread-and-butter work of many corporate lawyers, but artificial intelligence (AI) promises to transform every aspect of the legal profession. Legaltech start-up Ivo, which is today announcing a $16 million Series A funding round, wants to make manual contract work a thing of the past.

“We help in-house legal teams to red-line and negotiate contract agreements more quickly and easily,” explains Min-Kyu Jung, CEO and co-founder of Ivo. “It’s a challenge that couldn’t be solved well by AI until relatively recently, but the evolution of generative AI has made it possible.”


A&O Shearman, Cooley Leading Legal Tech Investment at Law Firms — from news.bloomberglaw.com by Evan Ochsner

  • Leading firms are investing their own resources in legal tech
  • Firms seek to tailor tech development to specific functions
 

LinkedIn’s AI Helps People Hunt for a New Job — from wired.com by Will Knight

LinkedIn is testing a new job-hunting tool that uses a custom large language model to comb through huge quantities of data to help people find prospective roles.

The company believes that artificial intelligence will help users unearth new roles they might have missed in the typical search process.

“The reality is, you don’t find your dream job by checking a set of keywords,” the company’s CEO, Ryan Roslansky, told me in a statement. The new tool, he says, “can help you find relevant jobs you never even knew to search for.”

The move comes as AI continues to change how people use the web.

 

DeepSeek: How China’s AI Breakthrough Could Revolutionize Educational Technology — from nickpotkalitsky.substack.com by Nick Potkalitsky
Can DeepSeek’s 90% efficiency boost make AI accessible to every school?

The most revolutionary aspect of DeepSeek for education isn’t just its cost—it’s the combination of open-source accessibility and local deployment capabilities. As Azeem Azhar notes, “R-1 is open-source. Anyone can download and run it on their own hardware. I have R1-8b (the second smallest model) running on my Mac Mini at home.”
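
To make the local-deployment point concrete, here is a minimal sketch of what querying a locally hosted DeepSeek R1 distill can look like on a teacher’s own machine, assuming the Ollama runtime and its Python client are installed and the model has already been pulled locally; the model tag and prompt below are illustrative, not a prescribed classroom setup.

```python
# A minimal sketch (not the article's own setup): querying a locally hosted
# DeepSeek R1 distill through the Ollama runtime. Assumes Ollama is installed
# and running, the Python client is installed (pip install ollama), and the
# model has been pulled with `ollama pull deepseek-r1:8b` (an illustrative tag
# for the ~8B distilled model).
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # illustrative local model tag
    messages=[
        {
            "role": "user",
            "content": "Explain photosynthesis to a 9th grader in three sentences.",
        }
    ],
)

# The chat response includes the assistant's message; print its text content.
print(response["message"]["content"])
```

Because everything runs on local hardware, there are no per-query API costs and prompts never leave the device, which is the cost and accessibility combination the excerpt highlights.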

Real-time Learning Enhancement

  • AI tutoring networks that collaborate to optimize individual learning paths
  • Immediate, multi-perspective feedback on student work
  • Continuous assessment and curriculum adaptation

The question isn’t whether this technology will transform education—it’s how quickly institutions can adapt to a world where advanced AI capabilities are finally within reach of every classroom.


Over 100 AI Tools for Teachers — from educatorstechnology.com by Med Kharbach, PhD

I know through your feedback on my social media and blog posts that several of you have legitimate concerns about the impact of AI in education, especially those related to data privacy, academic dishonesty, AI dependence, loss of creativity and critical thinking, plagiarism, to mention a few. While these concerns are valid and deserve careful consideration, it’s also important to explore the potential benefits AI can bring when used thoughtfully.

Tools such as ChatGPT and Claude are like smart research assistants that are available 24/7 to support you with all kinds of tasks from drafting detailed lesson plans, creating differentiated materials, generating classroom activities, to summarizing and simplifying complex topics. Likewise, students can use them to enhance their learning by, for instance, brainstorming ideas for research projects, generating constructive feedback on assignments, practicing problem-solving in a guided way, and much more.

The point here is that AI is here to stay and expand, and we better learn how to use it thoughtfully and responsibly rather than avoid it out of fear or skepticism.




From Theory to Practice: How Generative AI is Redefining Instructional Materials — from edtechinsiders.substack.com by Alex Sarlin
Top trends and insights from The Edtech Insiders Generative AI Map research process about how Generative AI is transforming Instructional Materials

As part of our updates to the Edtech Insiders Generative AI Map, we’re excited to release a new mini market map and article deep dive on Generative AI tools that are specifically designed for Instructional Materials use cases.

In our database, the Instructional Materials use case category encompasses tools that:

  • Assist educators by streamlining lesson planning, curriculum development, and content customization
  • Enable educators or students to transform materials into alternative formats, such as videos, podcasts, or other interactive media, in addition to leveraging gaming principles or immersive VR to enhance engagement
  • Empower educators or students to transform text, video, slides or other source material into study aids like study guides, flashcards, practice tests, or graphic organizers
  • Engage students through interactive lessons featuring historical figures, authors, or fictional characters
  • Customize curriculum to individual needs or pedagogical approaches
  • Empower educators or students to quickly create online learning assets and courses



 

DeepSeek R-1 Explained — from aieducation.substack.com by Claire Zau
A no-nonsense FAQ (for everyone drowning in DeepSeek headlines)

There is a good chance you’re exhausted by the amount of DeepSeek coverage flooding your inbox. Between the headlines and hot takes on X, it’s hard not to have questions: What is DeepSeek? Why is it special? Why is everyone freaking out? What does this mean for the AI ecosystem? Can you explain the tech? Am I allowed to use it?

Let’s break down why exactly it’s such a big deal with some straightforward FAQs:




AI Voice Agents: 2025 Update — from a16z.com (Andreessen Horowitz) by Olivia Moore

Voice is one of the most powerful unlocks for AI application companies. It is the most frequent (and most information-dense) form of human communication, made “programmable” for the first time due to AI.

For consumers, we believe voice will be the first — and perhaps the primary — way people interact with AI. This interaction could take the form of an always-available companion or coach, or by democratizing services, such as language learning, that were previously inaccessible.


The professions upskilling in AI — by Rachel Cromidas

The use of artificial intelligence at work continues to climb. Twice as many LinkedIn members in the U.S. say they are using AI on the job now compared to 2023, according to the latest Workforce Confidence survey. Meanwhile, at least half of workers say AI skills will help them progress in their careers. Product managers are the most likely to agree AI will give them a boost, while those in healthcare services roles are least likely.

 

The AI Shift Lawyers Can’t Afford to Ignore (Brainyacts #260) — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts from Josh’s Quick Take section (i.e., TL;DR)

As a legal professional, you need to know two things about local AI—right now:

1. It’s Already Here (and You Might Not Know It)
Someone in your organization—maybe on your team—is likely experimenting with local AI. Running AI models directly on personal devices is becoming common (and easier), whether for work projects or productivity hacks.

2. It’s a Game-Changer (If You Use It Right)
You should try it! Local AI offers unmatched privacy and control—a personalized tool that can supercharge your productivity.

Bottom Line: Local AI is here. Whether you’re managing its risks or leveraging its transformative power, you need to take action—now.


Level the playing field: Give consumers access to legally reliable Generative AI — from jordanfurlong.substack.com by Jordan Furlong
If we agree people shouldn’t use ChatGPT for legal issues, what should they use instead? The public deserves a better answer than “Go without.” Here’s how lawyers can provide one.

In May 2023, LexisNexis did something unusual for a company in the legal industry: It posed a question to normal people. “Have you used Generative AI tools like ChatGPT to obtain legal advice or assistance with a legal question?” it asked 1,765 American consumers. In what Lexis called a “stunning” result, more than 27% of them (48% of the 57% who were aware of these tools) said yes.

That was 20 months ago. I can almost guarantee that number is substantially higher now.

We should be aiming for legally reliable general-access LLMs with which people can forge their own legal solutions. “In years to come, the principal role of AI in law will not be to enhance today’s largely unaffordable legal and justice systems,” said Richard Susskind recently. “It will be to place the law in the hands of everyone.” He is exactly right.


The Shifting E-Discovery Landscape: From Artificial Intelligence to Antitrust — from jdsupra.com by Ari Kaplan

The rapid evolution of e-discovery presents both opportunities and challenges for e-discovery teams. Almost half (49%) of the participants in the 2024 E-Discovery Unfiltered report expected the number of litigation matters to increase, while more than a third (34%) expected it to remain the same. 83% expected their outside providers to have sufficient skill in handling most data types. And 71% noted that the amount of data in a typical case has increased in the past year.

As organizations enhance their processes and infrastructure, the expectation to leverage technology to maximize service delivery increases. However, legal professionals must balance innovation with humanity. That was the consensus of a recent webinar for ACEDS, which you can watch here.

 

Law School Trends 2025: AI, Bar Exam Changes & Career Shifts — from jdjournal.com by Maria Lenin Laus
This comprehensive guide explores these significant trends, their implications, and what to expect in the coming years.

AI’s Growing Role in Legal Education
AI-powered platforms are being utilized for legal research, document automation, and predictive analytics. Tools like AI-driven case analysis systems are helping students develop advanced research and drafting capabilities.

3.2. Integration of AI into Law School Curricula

  • AI-Powered Research Labs: Schools are incorporating AI-driven tools to assist students in case law research and document drafting.
  • Ethics and AI Courses: New courses explore the ethical implications and legal ramifications of AI in law practice.
  • AI-Assisted Exam Prep: Intelligent tutoring systems and adaptive learning platforms are enhancing bar exam preparation.

3.3. Future Outlook
By 2030, AI proficiency will be a standard expectation for law graduates. Students who fail to familiarize themselves with AI tools risk falling behind in a technology-driven legal market.

4. The Bar Exam Is Changing: The NextGen Bar Exam
The introduction of the NextGen Bar Exam in 2026 marks a significant shift in how aspiring lawyers are tested. Unlike the traditional exam, this new format emphasizes practical legal skills over rote memorization.

4.1. Key Differences Between the Traditional and NextGen Bar Exam
The NextGen Bar Exam replaces multiple-choice and essay-based testing with performance-based tasks that assess practical legal skills. It aims to better prepare graduates for real-world practice by focusing on essential competencies rather than memorization.

4.2. Future Outlook
By 2028, at least half of U.S. states are expected to adopt the NextGen Bar Exam. Law schools will need to adjust their curricula accordingly to prepare students for a more skills-focused licensing process.



 

Introducing Operator — from openai.com
A research preview of an agent that can use its own browser to perform tasks for you. Available to Pro users in the U.S.

Today we’re releasing Operator, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.

Per the Rundown AI:

“OpenAI just launched Operator, an AI agent that can independently navigate web browsers to complete everyday tasks — marking the company’s first major step into autonomous AI assistants.”



…and speaking of agents/assistants:


DeepSeek shakes the world of AI — from heatherbcooper.substack.com by Heather B. Cooper

DeepSeek: A New AI Powerhouse for Everyday Users

DeepSeek is an advanced AI platform developed by a Chinese startup, offering tools like DeepSeek-R1 (nicknamed “DeepThink”) that rival top models like ChatGPT. Here’s what you need to know:

Key Features

  1. Human-Like Reasoning
  2. Cost-Effective & Open-Source
  3. Web Search Integration

State of AI in 2025 exposed — from theneurondaily.com by Grant Harvey
PLUS: When to use Gemini instead of ChatGPT…

The State of AI Development in 2025…

Late last year, we helped Vellum survey over 1,250 AI builders to understand where AI development is really heading. Spoiler alert: It’s not quite the AI takeover you might expect.

Here’s the surprising truth about AI development in 2025: most companies are still figuring it out.

Only 25.1% of businesses have actually deployed AI in production. Everyone else is split between building proofs of concept (21%), beta testing (14.1%), and still working on their strategy (25%). The rest are somewhere between talking to users and evaluating their initial attempts.

 

Google Workspace enables the future of AI-powered work for every business  — from workspace.google.com

The following AI capabilities will start rolling out to Google Workspace Business customers today and to Enterprise customers later this month:

  • Get AI assistance in Gmail, Docs, Sheets, Meet, Chat, Vids, and more: Do your best work faster with AI embedded in the tools you use every day. Gemini streamlines your communications by helping you summarize, draft, and find information in your emails, chats, and files. It can be a thought partner and source of inspiration, helping you create professional documents, slides, spreadsheets, and videos from scratch. Gemini can even improve your meetings by taking notes, enhancing your audio and video, and catching you up on the conversation if you join late.
  • Chat with Gemini Advanced, Google’s next-gen AI: Kickstart learning, brainstorming, and planning with the Gemini app on your laptop or mobile device. Gemini Advanced can help you tackle complex projects including coding, research, and data analysis and lets you build Gems, your team of AI experts to help with repeatable or specialized tasks.
  • Unlock the power of NotebookLM Plus: We’re bringing the revolutionary AI research assistant to every employee, to help them make sense of complex topics. Upload sources to get instant insights and Audio Overviews, then share customized notebooks with the team to accelerate their learning and onboarding.

And per Evelyn from the Stay Ahead newsletter (at FlexOS):

Google’s Gemini AI is stepping up its game in Google Workspace, bringing powerful new capabilities to your favorite tools like Gmail, Docs, and Sheets:

  • AI-Powered Summaries: Get concise, AI-generated summaries of long emails and documents so you can focus on what matters most.
  • Smart Reply: Gemini now offers context-aware email replies that feel more natural and tailored to your style.
  • Slides and images generation: Gemini in Slides can help you generate new images, summarize your slides, write and rewrite content, and refer to existing Drive files and/or emails.
  • Automated Data Insights: In Google Sheets, Gemini helps create a task tracker, conference agenda, spot trends, suggest formulas, and even build charts with simple prompts.
  • Intelligent Drafting: Google Docs now gets a creativity boost, helping you draft reports, proposals, or blog posts with AI suggestions and outlines.
  • Meeting Assistance: Say goodbye to awkward AI attendees joining just to take notes; now Gemini can do that for you natively – no interruption, no avatar, and no extra attendee. Meet can also automatically generate captions to lower the language barrier.

Evelyn (from FlexOS) also mentions that Copilot is getting enhancements too:

Copilot is now included in Microsoft 365 Personal and Family — from microsoft.com

Per Evelyn:

It’s exactly what we predicted: stand-alone AI apps like note-takers and image generators have had their moment, but as the tech giants step in, they’re bringing these features directly into their ecosystems, making them harder to ignore.


Announcing The Stargate Project — from openai.com

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.


Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.


Adobe’s new AI tool can edit 10,000 images in one click — from theverge.com by Jess Weatherbed
Firefly Bulk Create can automatically remove, replace, or extend image backgrounds in huge batches.

Adobe is launching new generative AI tools that can automate labor-intensive production tasks like editing large batches of images and translating video presentations. The most notable is “Firefly Bulk Create,” an app that allows users to quickly resize up to 10,000 images or replace all of their backgrounds in a single click instead of tediously editing each picture individually.

 


Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts

We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)

If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.


Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025

After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:

  1. Break your own AI policies before you implement them.
  2. Fund your failures.
  3. Resist the pilot program. …
  4. Host Anti-Tech Tech Talks
  5. …+ several more tips

While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.



Maria Anderson responded to Clay’s posting with this idea:

Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.



Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That’s where the new Grammarly feature comes in.

The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”


AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.

Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade version called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.


Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.

 

AI Is Unavoidable, Not Inevitable — from marcwatkins.substack.com by Marc Watkins

I had the privilege of moderating a discussion between Josh Eyler and Robert Cummings about the future of AI in education at the University of Mississippi’s recent AI Winter Institute for Teachers. I work alongside both in faculty development here at the University of Mississippi. Josh’s position on AI sparked a great deal of debate on social media.

To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of “it’s very complicated.”

Most importantly, we all deserve some grace here. Dealing with generative AI in education isn’t something any of us asked for. It isn’t normal. It isn’t fixable by purchasing a tool or telling faculty to simply ‘prefer not to’ use AI. It is and will remain unavoidable for virtually every discipline taught at our institutions.

If one good thing happens because of generative AI let it be that it helps us clearly see how truly complicated our existing relationships with machines are now. As painful as this moment is, it might be what we need to help prepare us for a future where machines that mimic reasoning and human emotion refuse to be ignored.


“AI tutoring shows stunning results.”
See the article below.


From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time — from blogs.worldbank.org by Martín E. De Simone, Federico Tiberti, Wuraola Mosuro, Federico Manolio, Maria Barron, and Eliot Dikoru

Learning gains were striking
The learning improvements were striking—about 0.3 standard deviations. To put this into perspective, this is equivalent to nearly two years of typical learning in just six weeks. When we compared these results to a database of education interventions studied through randomized controlled trials in the developing world, our program outperformed 80% of them, including some of the most cost-effective strategies like structured pedagogy and teaching at the right level. This achievement is particularly remarkable given the short duration of the program and the likelihood that our evaluation design underestimated the true impact.

Our evaluation demonstrates the transformative potential of generative AI in classrooms, especially in developing contexts. To our knowledge, this is the first study to assess the impact of generative AI as a virtual tutor in such settings, building on promising evidence from other contexts and formats; for example, on AI in coding classes, AI and learning in one school in Turkey, teaching math with AI (an example through WhatsApp in Ghana), and AI as a homework tutor.

Comments on this article from The Rundown AI:

Why it matters: This represents one of the first rigorous studies showing major real-world impacts in a developing nation. The key appears to be using AI as a complement to teachers rather than a replacement — and results suggest that AI tutoring could help address the global learning crisis, particularly in regions with teacher shortages.


Other items re: AI in our learning ecosystems:

  • Will AI revolutionise marking? — from timeshighereducation.com by Rohim Mohammed
    Artificial intelligence has the potential to improve speed, consistency and detail in feedback for educators grading students’ assignments, writes Rohim Mohammed. Here he lists the pros and cons based on his experience
  • Marty the Robot: Your Classroom’s AI Companion — from rdene915.com by Dr. Rachelle Dené Poth
  • Generative Artificial Intelligence: Cautiously Recognizing Educational Opportunities — from scholarlyteacher.com by Todd Zakrajsek, University of North Carolina at Chapel Hill
  • Personal AI — from michelleweise.substack.com by Dr. Michelle Weise
    “Personalized” Doesn’t Have To Be a Buzzword
    Today, however, is a different kind of moment. GenAI is now rapidly evolving to the point where we may be able to imagine a new way forward. We can begin to imagine solutions truly tailored for each of us as individuals, our own personal AI (pAI). pAI could unify various silos of information to construct far richer and more holistic and dynamic views of ourselves as long-life learners. A pAI could become our own personal career navigator, skills coach, and storytelling agent. Three particular areas emerge when we think about tapping into the richness of our own data:

    • Personalized Learning Pathways & Dynamic Skill Assessment: …
    • Storytelling for Employers:…
    • Ongoing Mentorship and Feedback: …
  • Speak — a language learning app — via The Neuron

 

The Rise of the Heretical Leader — from ditchthattextbook.com; a guest post by Dan Fitzpatrick

Now is the time for visionary leadership in education. The era of artificial intelligence is reshaping the demands on education systems. Rigid policies, outdated curricula, and reliance on obsolete metrics are failing students. A recent survey from Resume Genius found that graduates lack skills in communication, collaboration, and critical thinking. Consequently, there is a growing trend of companies hiring candidates based on skills instead of traditional education or work experience. This underscores the urgent need for educational leaders to prioritize adaptability and innovation in their systems. Educational leaders must embrace a transformative approach to keep pace.

[Heretical leaders] bring courage, empathy, and strategic thinking to reimagine education’s potential. Here are their defining characteristics:

  • Visionary Thinking: They identify bold, innovative paths to progress.
  • Courage to Act: These leaders take calculated risks to overcome resistance and inertia.
  • Relentless Curiosity: They challenge assumptions and seek better alternatives.
  • Empathy for Stakeholders: Understanding the personal impact of change allows them to lead with compassion.
  • Strategic Disruption: Their deliberate actions ensure systemic improvements.

These qualities enable Heretical leaders to reframe challenges as opportunities and drive meaningful change.

From DSC:
Readers of this blog will recognize that I believe visionary leadership is extremely important — in all areas of our society, but especially within our learning ecosystems. Vision trumps data, at least in my mind. There are times when data can be used to support a vision, but having a powerful vision is more lasting and impactful than relying on data to drive the organization.

So while I’d vote for a term other than “heretical leaders,” I get what Dan is saying and I agree with him. Such leaders are going against the grain. They are swimming upstream. They are espousing perspectives that others often don’t buy into (at least initially or for some time).

Such were the leaders who introduced online learning into the K-16 educational systems back in the late ’90s and into the next two+ decades. The growth of online-based learning continues and has helped educate millions of people. Those leaders and the people who worked for such endeavors were going against the grain.

We haven’t seen the end point of online-based learning. I think it will become even more powerful and impactful when AI is used to determine which jobs are opening up and which skills are needed for those jobs, and then to provide a listing of sources where one can obtain that knowledge and develop those skills. People will be key in this vision. But so will AI and personalized learning. It will be a collaborative effort.

By the way, I am NOT advocating for using AI to outsource our thinking. Also, having basic facts and background knowledge in a domain is critically important, especially to use AI effectively. But we should be teaching students about AI (as we learn more about it ourselves). We should be working collaboratively with our students to understand how best to use AI. It’s their futures at stake.


 

NVIDIA Partners With Industry Leaders to Advance Genomics, Drug Discovery and Healthcare — from nvidianews.nvidia.com
IQVIA, Illumina, Mayo Clinic and Arc Institute Harness NVIDIA AI and Accelerated Computing to Transform $10 Trillion Healthcare and Life Sciences Industry

J.P. Morgan Healthcare Conference—NVIDIA today announced new partnerships to transform the $10 trillion healthcare and life sciences industry by accelerating drug discovery, enhancing genomic research and pioneering advanced healthcare services with agentic and generative AI.

The convergence of AI, accelerated computing and biological data is turning healthcare into the largest technology industry. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using the latest NVIDIA technologies to develop solutions that will help advance human health.

These solutions include AI agents that can speed clinical trials by reducing administrative burden, AI models that learn from biology instruments to advance drug discovery and digital pathology, and physical AI robots for surgery, patient monitoring and operations. AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.


AI could transform health care, but will it live up to the hype? — from sciencenews.org by Meghan Rosen and Tina Hesman Saey
The technology has the potential to improve lives, but hurdles and questions remain

True progress in transforming health care will require solutions across the political, scientific and medical sectors. But new forms of artificial intelligence have the potential to help. Innovators are racing to deploy AI technologies to make health care more effective, equitable and humane.

AI could spot cancer early, design lifesaving drugs, assist doctors in surgery and even peer into people’s futures to predict and prevent disease. The potential to help people live longer, healthier lives is vast. But physicians and researchers must overcome a legion of challenges to harness AI’s potential.


HHS publishes AI Strategic Plan, with guidance for healthcare, public health, human services — from healthcareitnews.com by Mike Miliard
The framework explores ways to spur innovation and adoption, enable more trustworthy model development, promote access and foster AI-empowered healthcare workforces.

The U.S. Department of Health and Human Services has issued its HHS Artificial Intelligence Strategic Plan, which the agency says will “set in motion a coordinated public-private approach to improving the quality, safety, efficiency, accessibility, equitability and outcomes in health and human services through the innovative, safe, and responsible use of AI.”


How Journalism Will Adapt in the Age of AI — from bloomberg.com by John Micklethwait
The news business is facing its next enormous challenge. Here are eight reasons to be both optimistic and paranoid.

AI promises to get under the hood of our industry — to change the way we write and edit stories. It will challenge us, just like it is challenging other knowledge workers like lawyers, scriptwriters and accountants.

Most journalists love AI when it helps them uncover Iranian oil smuggling. Investigative journalism is not hard to sell to a newsroom. The second example is a little harder. Over the past month we have started testing AI-driven summaries for some longer stories on the Bloomberg Terminal.

The software reads the story and produces three bullet points. Customers like it — they can quickly see what any story is about. Journalists are more suspicious. Reporters worry that people will just read the summary rather than their story.

So, looking into our laboratory, what do I think will happen in the Age of AI? Here are eight predictions.


‘IT will become the HR of AI agents’, says Nvidia’s CEO: How should organisations respond? — from hrsea.economictimes.indiatimes.com by Vanshika Rastogi

Nvidia CEO Jensen Huang’s recent statement that “IT will become the HR of AI agents” continues to spark debate about IT’s evolving role in managing AI systems. As AI tools become integral, IT teams will take on tasks like training and optimising AI agents, blending technical and HR responsibilities. So, how should organisations respond to this transformation?

 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always remain the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to making policy that leave institutions hesitant to produce it or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap its head around GenAI—not just for its own area but for the other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true with every policy. It does not often come fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 