“Student Guide to AI”; “AI Isn’t Just Changing How We Work — It’s Changing How We Learn”; + other items re: AI in our LE’s

Get the 2025 Student Guide to Artificial Intelligence — from studentguidetoai.org
This guide is made available under a Creative Commons license by Elon University and the American Association of Colleges and Universities (AAC&U).


AI Isn’t Just Changing How We Work — It’s Changing How We Learn — from entrepreneur.com by Aytekin Tank; edited by Kara McIntyre
AI agents are opening doors to education that just a few years ago would have been unthinkable. Here’s how.

Agentic AI is taking these already huge strides even further. Rather than you simply asking a question and receiving an answer, an AI agent can assess your current level of understanding and tailor its reply to help you learn. It can also help you come up with a timetable and a personalized lesson plan, making you feel as though a one-on-one instructor is walking you through the process. If your goal is to learn to speak a new language, for example, an agent might map out a plan starting with basic vocabulary and pronunciation exercises, then progress to simple conversations, grammar rules and finally, real-world listening and speaking practice.

For instance, if you’re an entrepreneur looking to sharpen your leadership skills, an AI agent might suggest a mix of foundational books, insightful TED Talks and case studies on high-performing executives. If you’re aiming to master data analysis, it might point you toward hands-on coding exercises, interactive tutorials and real-world datasets to practice with.

The beauty of AI-driven learning is that it’s adaptive. As you gain proficiency, your AI coach can shift its recommendations, challenge you with new concepts and even simulate real-world scenarios to deepen your understanding.

Ironically, the very technology feared by workers can also be leveraged to help them. Rather than requiring expensive external training programs or lengthy in-person workshops, AI agents can deliver personalized, on-demand learning paths tailored to each employee’s role, skill level, and career aspirations. Given that 68% of employees find today’s workplace training to be overly “one-size-fits-all,” an AI-driven approach will not only cut costs and save time but will be more effective.


What’s the Future for AI-Free Spaces? — from higherai.substack.com by Jason Gulya
Please let me dream…

This is one reason why I don’t see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention here is not whether we can cultivate AI-free moments in the classroom, but how long those moments are actually sustainable.

Can we sustain those AI-free moments for an hour? A class session? Longer?

Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.

However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space. Or they’ll even create and maintain those spaces.


Duolingo’s AI Revolution — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 148 AI-Generated Courses Tell Us About the Future of Instructional Design & Human Learning

Last week, Duolingo announced an unprecedented expansion: 148 new language courses created using generative AI, effectively doubling their content library in just one year. This represents a seismic shift in how learning content is created — a process that previously took the company 12 years for their first 100 courses.

As CEO Luis von Ahn stated in the announcement, “This is a great example of how generative AI can directly benefit our learners… allowing us to scale at unprecedented speed and quality.”

In this week’s blog, I’ll dissect exactly how Duolingo has reimagined instructional design through AI, what this means for the learner experience, and most importantly, what it tells us about the future of our profession.


Are Mixed Reality AI Agents the Future of Medical Education? — from ehealth.eletsonline.com

Medical education is experiencing a quiet revolution—one that’s not taking place in lecture theatres or textbooks, but with headsets and holograms. At the heart of this revolution are Mixed Reality (MR) AI Agents, a new generation of devices that combine the immersive depth of mixed reality with the flexibility of artificial intelligence. These technologies are not mere flashy gadgets; they’re revolutionising the way medical students interact with complicated content, rehearse clinical skills, and prepare for real-world situations. By combining digital simulations with the physical world, MR AI Agents are redefining what it means to learn medicine in the 21st century.




4 Reasons To Use Claude AI to Teach — from techlearning.com by Erik Ofgang
Features that make Claude AI appealing to educators include a focus on privacy and conversational style.

After experimenting with Claude AI on various teaching exercises, from generating quizzes to tutoring and offering writing suggestions, I found that it isn’t perfect, but it compares favorably to other AI tools in general, with an easy-to-use interface and some unique features that make it particularly well suited for use in education.

 

GPT, Claude, Gemini, Grok… Wait, Which One Do I Use Again? — from thebrainyacts.beehiiv.com by Josh Kubicki
Brainyacts #263

So this edition is simple: a quick, practical guide to the major generative AI models available in 2025 so far. What they’re good at, what to use them for, and where they might fit into your legal work—from document summarization to client communication to research support.

From DSC:
This comprehensive, highly informative post lists what each model is, its strengths, the best legal use cases for it, and tips for responsible use.


What’s Happening in LegalTech Other than AI? — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell

Of course AI will continue to make waves, but what other important legal technologies do you need to be aware of in 2025? Dennis and Tom give an overview of legal tech tools—both new and old—you should be using for successful, modernized legal workflows in your practice. They recommend solutions for task management, collaboration, calendars, projects, legal research, and more.

Later, the guys answer a listener’s question about online prompt libraries. Are there reputable, useful prompts available freely on the internet? They discuss their suggestions for prompt resources and share why these libraries tend to quickly become outdated.


LawDroid Founder Tom Martin on Building, Teaching and Advising About AI for Legal — from lawnext.com by Bob Ambrogi and Tom Martin

If you follow legal tech at all, you would be justified in suspecting that Tom Martin has figured out how to use artificial intelligence to clone himself.

While running LawDroid, his legal tech company, the Vancouver-based Martin also still manages a law practice in California, oversees an annual legal tech awards program, teaches a law school course on generative AI, runs an annual AI conference, hosts a podcast, and recently launched a legal tech consultancy.

In January 2023, less than two months after ChatGPT first launched, Martin’s company was one of the first to launch a gen AI assistant specifically for lawyers, called LawDroid Copilot. He has since also launched LawDroid Builder, a no-code platform for creating custom AI agents.


Legal training in the age of AI: A leadership imperative — from thomsonreuters.com by The Hon. Maritza Dominguez Braswell, U.S. Magistrate Judge, District of Colorado

In a profession that’s actively contemplating its future in the face of AI, legal organization leaders who demonstrate a genuine desire to invest in the next generation of legal professionals will undoubtedly set themselves apart.


Unlocking the power of AI: Opportunities and use cases for law firms — from todaysconveyancer.co.uk

Artificial intelligence (AI) is here. And it’s already reshaping the way law firms operate. Whether automating repetitive tasks, improving risk management, or boosting efficiency, AI presents a genuine opportunity for forward-thinking legal practices. But with new opportunities come new responsibilities. And as firms explore AI tools, it’s essential they consider how to govern them safely and ethically. That’s where an AI policy becomes indispensable.

So, what can AI actually do for your firm right now? Let’s take a closer look.

 

…which links to:

Duolingo will replace contract workers with AI — from theverge.com by Jay Peters
The company is going to be ‘AI-first,’ says its CEO.

Duolingo will “gradually stop using contractors to do work that AI can handle,” according to an all-hands email sent by cofounder and CEO Luis von Ahn announcing that the company will be “AI-first.” The email was posted on Duolingo’s LinkedIn account.

According to von Ahn, being “AI-first” means the company will “need to rethink much of how we work” and that “making minor tweaks to systems designed for humans won’t get us there.” As part of the shift, the company will roll out “a few constructive constraints,” including the changes to how it works with contractors, looking for AI use in hiring and in performance reviews, and that “headcount will only be given if a team cannot automate more of their work.”


Relevant links:

Something strange, and potentially alarming, is happening to the job market for young, educated workers.

According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.

What’s going on? I see three plausible explanations, and each might be a little bit true.


It’s Time To Get Concerned As More Companies Replace Workers With AI — from forbes.com by Jack Kelly

The new workplace trend is not employee friendly. Artificial intelligence and automation technologies are advancing at blazing speed. A growing number of companies are using AI to streamline operations, cut costs, and boost productivity. Consequently, human workers are facing layoffs, replaced by AI. Like it or not, companies need to make tough decisions, including layoffs, to remain competitive.

Corporations including Klarna, UPS, Duolingo, Intuit and Cisco are replacing laid-off workers with AI and automation. While these technologies enhance productivity, they raise serious concerns about future job security; many workers worry about whether their own jobs will be affected.


The future of career navigation — from medium.com by Sami Tatar

  1. Career navigation market overview

Key takeaway:
Career navigation has remained largely unchanged for decades, relying on personal networks and static job boards. The advent of AI is changing this, offering personalised career pathways, better job matching, democratised job application support, democratised access to career advice/coaching, and tailored skill development to help you get to where you need to be. Hundreds of millions of people start new jobs every year, and this transformation opens up a multi-billion-dollar opportunity for innovation in the global career navigation market.

A.4 How will AI disrupt this segment?
Personalised recommendations: AI can consume a vast amount of information (skills, education, career history, even YouTube history and X/Twitter feeds), standardise this data at scale, and then use data models to match candidate characteristics to relevant careers and jobs. In theory, solutions could then go layers deeper, helping you position yourself for those future roles. Currently based in Amsterdam, working in Strategy at Uber, and want a Product role in the future? Here are X, Y, Z specific things YOU can do in your role today to align yourself perfectly. E.g. find opportunities to manage cross-functional projects in your current remit, reach out to Joe Bloggs, also at Uber in Amsterdam, who moved from Strategy to Product, etc.
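The matching step described above can be sketched in a few lines. This is purely an illustrative assumption on my part, not how any real product works: a candidate and each role are reduced to skill sets and scored by overlap, where a production system would use learned embeddings over far richer signals.

```python
# Toy sketch of AI-driven career matching: score candidate/role skill overlap.
# The skills and role names below are made up for illustration.
def match_score(candidate_skills: set[str], role_skills: set[str]) -> float:
    """Jaccard similarity between a candidate's skills and a role's skills."""
    if not candidate_skills or not role_skills:
        return 0.0
    overlap = candidate_skills & role_skills
    union = candidate_skills | role_skills
    return len(overlap) / len(union)

candidate = {"sql", "stakeholder management", "experiment design"}
roles = {
    "Product Manager": {"experiment design", "stakeholder management", "roadmapping"},
    "Data Analyst": {"sql", "dashboards", "python", "statistics"},
}

# Rank roles by overlap, highest first; the "layers deeper" step would then
# suggest concrete actions to close the gap for the top-ranked role.
ranked = sorted(roles, key=lambda r: match_score(candidate, roles[r]), reverse=True)
```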


Tales from the Front – What Teachers Are Telling Me at AI Workshops — from aliciabankhofer.substack.com by Alicia Bankhofer
Real conversations, real concerns: What teachers are saying about AI

“Do I really have to use AI?”

No matter the school, no matter the location, when I deliver an AI workshop to a group of teachers, there are always at least a few colleagues thinking (and sometimes voicing), “Do I really need to use AI?”

Nearly three years after ChatGPT 3.5 landed in our lives and disrupted workflows in ways we’re still unpacking, most schools are swiftly catching up. Training sessions, like the ones I lead, are springing up everywhere, with principals and administrators trying to answer the same questions: Which tools should we use? How do we use them responsibly? How do we design learning in this new landscape?

But here’s what surprises me most: despite all the advances in AI technology, the questions and concerns from teachers remain strikingly consistent.

In this article, I want to pull back the curtain on those conversations. These concerns aren’t signs of reluctance – they reflect sincere feelings. And they deserve thoughtful, honest answers.


Welcome To AI Agent World! (Everything you need to know about the AI Agent market.) — from joshbersin.com by Josh Bersin

This week, in advance of major announcements from us and other vendors, I give you a good overview of the AI Agent market, and discuss the new role of AI governance platforms, AI agent development tools, AI agent vendors, and how AI agents will actually manifest and redefine what we call an “application.”

I discuss ServiceNow, Microsoft, SAP, Workday, Paradox, Maki People, and other vendors. My goal today is to “demystify” this space and explain the market, the trends, and why and how your IT department is going to be building a lot of the agents you need. And prepare for our announcements next week!


DeepSeek Unveils Prover V2 — from theaivalley.com

DeepSeek has quietly launched Prover V2, an open-source model built to solve math problems using the Lean 4 proof assistant, which ensures every step of a proof is rigorously verified.

What’s impressive about it?

  • Massive scale: Based on DeepSeek-V3 with 671B parameters using a mixture-of-experts (MoE) architecture, which activates only parts of the model at a time to reduce compute costs.
  • Theorem solving: Uses long context windows (32K+ tokens) to generate detailed, step-by-step formal proofs for a wide range of math problems — from basic algebra to advanced calculus theorems.
  • Research grade: Assists mathematicians in testing new theorems automatically and helps students understand formal logic by generating both Lean 4 code and readable explanations.
  • New benchmark: Introduces ProverBench, a new 325-question benchmark set featuring problems from recent AIME exams and curated academic sources to evaluate mathematical reasoning.
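To make “rigorously verified” concrete, here is a tiny hand-written Lean 4 example (my own, not Prover V2 output): the compiler checks every step, so a hallucinated or invalid proof simply fails to build.

```lean
-- A toy theorem stated and proved in Lean 4.
-- A prover model emits proofs in this style; Lean machine-checks each step,
-- so an invalid step is rejected at compile time rather than slipping through.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```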

Artificial Intelligence: Lessons Learned from a Graduate-Level Final Exam — from er.educause.edu by Craig Westling and Manish K. Mishra

The need for deep student engagement became clear at Dartmouth Geisel School of Medicine when a potential academic-integrity issue revealed gaps in its initial approach to artificial intelligence use in the classroom, leading to significant revisions to ensure equitable learning and assessment.


Deep Research with AI: 9 Ways to Get Started — from wondertools.substack.com by Jeremy Caplan
Practical strategies for thorough, citation-rich AI research


From George Siemens’ “SAIL: Transmutation, Assessment, Robots” e-newsletter on 5/2/25:

All indications are that AI, even if it stops advancing, has the capacity to dramatically change knowledge work. Knowing things matters less than being able to navigate and make sense of complex environments. Put another way, sensemaking, meaningmaking, and wayfinding (with their yet to be defined subelements) will be the foundation for being knowledgeable going forward.

That will require being able to personalize learning to each individual learner, so that who they are (not what our content is) forms the pedagogical entry point to learning. (DSC: And I would add WHAT THEY WANT to ACHIEVE.) LLMs are particularly good at transmutation. Want to explain AI to a farmer? A sentence or two in a system prompt achieves that. Know that a learner has ADHD? A few small prompt changes and it’s reflected in the way the LLM engages with learning. Talk like a pirate. Speak in the language of Shakespeare. Language changes. All a matter of a small meta comment sent to the LLM. I’m convinced that this capability to change, to transmute, information will become a central part of how LLMs and AI are adopted in education.
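Siemens’s point about transmutation can be made concrete with a small sketch. This is my own illustrative code using the common chat-message format, not any specific vendor’s SDK: the learner’s question is untouched; only a one-line system message changes.

```python
# Illustrative "transmutation" sketch: the same question, retargeted for a
# different audience by changing only the system message. Wording is made up.
def build_messages(audience_note: str, question: str) -> list[dict]:
    """Build a chat-message list with a one-line audience-specific system prompt."""
    return [
        {"role": "system", "content": f"Explain concepts {audience_note}."},
        {"role": "user", "content": question},
    ]

farmer = build_messages("in plain terms, using farming analogies", "What is an LLM?")
pirate = build_messages("talking like a pirate", "What is an LLM?")
# The content (the user turn) stays constant; only the small meta comment differs.
```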

Speaking of Duolingo: it took them 12 years to develop 100 courses. In the last year, they developed an additional 148. AI is an accelerant with an impact on education that is hard to overstate. “Instead of taking years to build a single course with humans, the company now builds a base course and uses AI to quickly customize it for dozens of different languages.”


FutureHouse Platform: Superintelligent AI Agents for Scientific Discovery — from futurehouse.org by Michael Skarlinski, Tyler Nadolski, James Braza, Remo Storni, Mayk Caldas, Ludovico Mitchener, Michaela Hinks, Andrew White, &  Sam Rodriques

FutureHouse is launching our platform, bringing the first publicly available superintelligent scientific agents to scientists everywhere via a web interface and API. Try it out for free at https://platform.futurehouse.org.

 

Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance and then, guided by human professors as they sort through the material, help AI understand the structure of the discipline, and then develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student going on a trip who wishes to take an exam on the plane will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include; there was so much in this solid article to think about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him and, as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of what higher education may look like. Scott asserts that some significant and upcoming impacts will be experienced by faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there. He brings in presidents, deans, and other members of the leadership teams out there.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned its 5-year, $100M ed push last year, and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would be able to do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously — from venturebeat.com by Michael Nuñez

Uplimit unveiled a suite of AI-powered learning agents today designed to help companies rapidly upskill employees while dramatically reducing administrative burdens traditionally associated with corporate training.

The San Francisco-based company announced three sets of purpose-built AI agents that promise to change how enterprises approach learning and development: skill-building agents, program management agents, and teaching assistant agents. The technology aims to address the growing skills gap as AI advances faster than most workforces can adapt.

“There is an unprecedented need for continuous learning—at a scale and speed traditional systems were never built to handle,” said Julia Stiglitz, CEO and co-founder of Uplimit, in an interview with VentureBeat. “The companies best positioned to thrive aren’t choosing between AI and their people—they’re investing in both.”


Introducing Claude for Education — from anthropic.com

Today we’re launching Claude for Education, a specialized version of Claude tailored for higher education institutions. This initiative equips universities to develop and implement AI-enabled approaches across teaching, learning, and administration—ensuring educators and students play a key role in actively shaping AI’s role in society.

As part of announcing Claude for Education, we’re introducing:

  1. Learning mode: A new Claude experience that guides students’ reasoning process rather than providing answers, helping develop critical thinking skills
  2. University-wide Claude availability: Full campus access agreements with Northeastern University, London School of Economics and Political Science (LSE), and Champlain College, making Claude available to all students
  3. Academic partnerships: Joining Internet2 and working with Instructure to embed AI into teaching & learning with Canvas LMS
  4. Student programs: A new Claude Campus Ambassadors program along with an initiative offering API credits for student projects

A comment on this from The Rundown AI:

Why it matters: Education continues to grapple with AI, but Anthropic is flipping the script by making the tech a partner in developing critical thinking rather than an answer engine. While the controversy over its use likely isn’t going away, this generation of students will have access to the most personalized, high-quality learning tools ever.


Should College Graduates Be AI Literate? — from chronicle.com by Beth McMurtrie (behind a paywall)
More institutions are saying yes. Persuading professors is only the first barrier they face.

Last fall one of Jacqueline Fajardo’s students came to her office, eager to tell her about an AI tool that was helping him learn general chemistry. Had she heard of Google NotebookLM? He had been using it for half a semester in her honors course. He confidently showed her how he could type in the learning outcomes she posted for each class and the tool would produce explanations and study guides. It even created a podcast based on an academic paper he had uploaded. He did not feel it was important to take detailed notes in class because the AI tool was able to summarize the key points of her lectures.


Showing Up for the Future: Why Educators Can’t Sit Out the AI Conversation — from marcwatkins.substack.com with a guest post from Lew Ludwig

The Risk of Disengagement
Let’s be honest: most of us aren’t jumping headfirst into AI. At many of our institutions, it’s not a gold rush—it’s a quiet standoff. But the group I worry most about isn’t the early adopters. It’s the faculty who’ve decided to opt out altogether.

That choice often comes from a place of care. Concerns about data privacy, climate impact, exploitative labor, and the ethics of using large language models are real—and important. But choosing not to engage at all, even on ethical grounds, doesn’t remove us from the system. It just removes our voices from the conversation.

And without those voices, we risk letting others—those with very different priorities—make the decisions that shape what AI looks like in our classrooms, on our campuses, and in our broader culture of learning.



Turbocharge Your Professional Development with AI — from learningguild.com by Dr. RK Prasad

You’ve just mastered a few new eLearning authoring tools, and now AI is knocking on the door, offering to do your job faster, smarter, and without needing coffee breaks. Should you be worried? Or excited?

If you’re a Learning and Development (L&D) professional today, AI is more than just a buzzword—it’s transforming the way we design, deliver, and measure corporate training. But here’s the good news: AI isn’t here to replace you. It’s here to make you better at what you do.

The challenge is to harness its potential to build digital-ready talent, not just within your organization but within yourself.

Let’s explore how AI is reshaping L&D strategies and how you can leverage it for professional development.


5 Recent AI Notables — from automatedteach.com by Graham Clay

1. OpenAI’s New Image Generator
What Happened: OpenAI integrated a much more powerful image generator directly into GPT-4o, making it the default image creator in ChatGPT. Unlike previous image models, this one excels at accurately rendering text in images, precise visualization of diagrams/charts, and multi-turn image refinement through conversation.

Why It’s Big: For educators, this represents a significant advancement in creating educational visuals, infographics, diagrams, and other instructional materials with unprecedented accuracy and control. It’s not perfect, but you can now quickly generate custom illustrations that accurately display mathematical equations, chemical formulas, or process workflows — previously a significant hurdle in digital content creation — without requiring graphic design expertise or expensive software. This capability dramatically reduces the time between conceptualizing a visual aid and implementing it in course materials.


The 4 AI modes that will supercharge your workflow — from aiwithallie.beehiiv.com by Allie K. Miller
The framework most people and companies won’t discover until 2026


 

8 Weeks Left to Prepare Students for the AI-Enhanced Workplace — from insidehighered.com by Ray Schroeder
We are down to the final weeks left to fully prepare students for entry into the AI-enhanced workplace. Are your students ready?

The urgent task facing those of us who teach and advise students, whether they are seeking degrees or certificates, is to ensure that they are prepared to enter (or re-enter) the workplace with skills and knowledge that are relevant to 2025 and beyond. One of the first skills to cultivate is an understanding of what kinds of services this emerging technology can provide to enhance a worker’s productivity and value to the institution or corporation.

Given that short period of time, coupled with the need to cover the scheduled information in the syllabus, I recommend that we consider merging AI use into authentic assignments and assessments, supplementary modules, and other resources to prepare for AI.


Learning Design in the Era of Agentic AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Aka, how to design online async learning experiences that learners can’t afford to delegate to AI agents

The point I put forward was that the problem is not AI’s ability to complete online async courses, but that online async courses deliver so little value to our learners that they delegate their completion to AI.

The harsh reality is that this is not an AI problem — it is a learning design problem.

However, this realisation presents us with an opportunity that, on the whole, we seem keen to embrace. Rather than seeking ways to block AI agents, we seem largely to agree that we should use this moment to reimagine online async learning itself.



8 Schools Innovating With Google AI — Here’s What They’re Doing — from forbes.com by Dan Fitzpatrick

While fears of AI replacing educators swirl in the public consciousness, a cohort of pioneering institutions is demonstrating a far more nuanced reality. These eight universities and schools aren’t just experimenting with AI; they’re fundamentally reshaping their educational ecosystems. From personalized learning in K-12 to advanced research in higher education, these institutions are leveraging Google’s AI to empower students, enhance teaching, and streamline operations.


Essential AI tools for better work — from wondertools.substack.com by Jeremy Caplan
My favorite tactics for making the most of AI — a podcast conversation

AI tools I consistently rely on (areas covered below)

  • Research and analysis
  • Communication efficiency
  • Multimedia creation

AI tactics that work surprisingly well 

1. Reverse interviews
Instead of just querying the AI, have it interview you. “Give it a little context about what you’re focusing on and what you’re interested in, and then ask it to interview you to elicit your own insights.”

This approach helps extract knowledge from yourself, not just from the AI. Sometimes we need that guide to pull ideas out of ourselves.

 

From DSC:
Look out Google, Amazon, and others! Nvidia is putting the pedal to the metal in terms of being innovative and visionary! They are leaving the likes of Apple in the dust.

The top talent out there is likely to go to Nvidia for a while. Engineers, programmers/software architects, network architects, product designers, data specialists, AI researchers, developers of robotics and autonomous vehicles, R&D specialists, computer vision specialists, natural language processing experts, and many more types of positions will be flocking to Nvidia to work for a company that has already changed the world and will likely continue to do so for years to come. 



NVIDIA’s AI Superbowl — from theneurondaily.com by Noah and Grant
PLUS: Prompt tips to make AI writing more natural

That’s despite a flood of new announcements (here’s a 16 min video recap), which included:

  1. A new architecture for massive AI data centers (now called “AI factories”).
  2. A physics engine for robot training built with Disney and DeepMind.
  3. A partnership with GM to develop next-gen vehicles, factories and robots.
  4. A new Blackwell chip with “Dynamo” software that makes AI reasoning 40x faster than previous generations.
  5. A new “Rubin” chip slated for 2026 and a “Feynman” chip set for 2028.

For enterprises, NVIDIA unveiled DGX Spark and DGX Station—Jensen’s vision of AI-era computing, bringing NVIDIA’s powerful Blackwell chip directly to your desk.


Nvidia Bets Big on Synthetic Data — from wired.com by Lauren Goode
Nvidia has acquired synthetic data startup Gretel to bolster the AI training data used by the chip maker’s customers and developers.


Nvidia, xAI to Join BlackRock and Microsoft’s $30 Billion AI Infrastructure Fund — from investopedia.com by Aaron McDade
Nvidia and xAI are joining BlackRock and Microsoft in an AI infrastructure group seeking $30 billion in funding. The group was first announced in September as BlackRock and Microsoft sought to fund new data centers to power AI products.



Nvidia CEO Jensen Huang says we’ll soon see 1 million GPU data centers visible from space — from finance.yahoo.com by Daniel Howley
Nvidia CEO Jensen Huang says the company is preparing for 1 million GPU data centers.


Nvidia stock stems losses as GTC leaves Wall Street analysts ‘comfortable with long term AI demand’ — from finance.yahoo.com by Laura Bratton
Nvidia stock reversed direction after a two-day slide that saw shares lose 5% as the AI chipmaker’s annual GTC event failed to excite investors amid a broader market downturn.


Microsoft, Google, and Oracle Deepen Nvidia Partnerships. This Stock Got the Biggest GTC Boost. — from barrons.com by Adam Clark and Elsa Ohlen


The 4 Big Surprises from Nvidia’s ‘Super Bowl of AI’ GTC Keynote — from barrons.com by Tae Kim; behind a paywall

AI Super Bowl. Hi everyone. This week, 20,000 engineers, scientists, industry executives, and yours truly descended upon San Jose, Calif. for Nvidia’s annual GTC developers’ conference, which has been dubbed the “Super Bowl of AI.”


 

You can now use Deep Research without $200 — from flexos.work


Accelerating scientific breakthroughs with an AI co-scientist — from research.google by Juraj Gottweis and Vivek Natarajan

We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.


Now decides next: Generating a new future — from Deloitte.com
Deloitte’s State of Generative AI in the Enterprise Quarter four report

There is a speed limit. GenAI technology continues to advance at incredible speed. However, most organizations are moving at the speed of organizations, not at the speed of technology. No matter how quickly the technology advances—or how hard the companies producing GenAI technology push—organizational change in an enterprise can only happen so fast.

Barriers are evolving. Significant barriers to scaling and value creation are still widespread across key areas. And, over the past year, regulatory uncertainty and risk management have risen on organizations’ lists of concerns to address. Also, levels of trust in GenAI are still moderate for the majority of organizations. Even so, with increased customization and accuracy of models—combined with a focus on better governance—adoption of GenAI is becoming more established.

Some uses are outpacing others. Application of GenAI is further along in some business areas than in others in terms of integration, return on investment (ROI) and expectations. The IT function is most mature; cybersecurity, operations, marketing and customer service are also showing strong adoption and results. Organizations reporting higher ROI for their most scaled initiatives are broadly further along in their GenAI journeys.

 

Google Workspace enables the future of AI-powered work for every business  — from workspace.google.com

The following AI capabilities will start rolling out to Google Workspace Business customers today and to Enterprise customers later this month:

  • Get AI assistance in Gmail, Docs, Sheets, Meet, Chat, Vids, and more: Do your best work faster with AI embedded in the tools you use every day. Gemini streamlines your communications by helping you summarize, draft, and find information in your emails, chats, and files. It can be a thought partner and source of inspiration, helping you create professional documents, slides, spreadsheets, and videos from scratch. Gemini can even improve your meetings by taking notes, enhancing your audio and video, and catching you up on the conversation if you join late.
  • Chat with Gemini Advanced, Google’s next-gen AI: Kickstart learning, brainstorming, and planning with the Gemini app on your laptop or mobile device. Gemini Advanced can help you tackle complex projects including coding, research, and data analysis and lets you build Gems, your team of AI experts to help with repeatable or specialized tasks.
  • Unlock the power of NotebookLM Plus: We’re bringing the revolutionary AI research assistant to every employee, to help them make sense of complex topics. Upload sources to get instant insights and Audio Overviews, then share customized notebooks with the team to accelerate their learning and onboarding.

And per Evelyn from the Stay Ahead newsletter (at FlexOS):

Google’s Gemini AI is stepping up its game in Google Workspace, bringing powerful new capabilities to your favorite tools like Gmail, Docs, and Sheets:

  • AI-Powered Summaries: Get concise, AI-generated summaries of long emails and documents so you can focus on what matters most.
  • Smart Reply: Gemini now offers context-aware email replies that feel more natural and tailored to your style.
  • Slides and images generation: Gemini in Slides can help you generate new images, summarize your slides, write and rewrite content, and refer to existing Drive files and/or emails.
  • Automated Data Insights: In Google Sheets, Gemini helps you create a task tracker or conference agenda, spot trends, suggest formulas, and even build charts with simple prompts.
  • Intelligent Drafting: Google Docs now gets a creativity boost, helping you draft reports, proposals, or blog posts with AI suggestions and outlines.
  • Meeting Assistance: Say goodbye to awkward AI attendees that join just to take notes; Gemini can now do that natively for you, with no interruption, no avatar, and no extra attendee. Meet can also automatically generate captions to lower the language barrier.

Evelyn (from FlexOS) also mentions that Copilot is getting enhancements too:

Copilot is now included in Microsoft 365 Personal and Family — from microsoft.com

Per Evelyn:

It’s exactly what we predicted: stand-alone AI apps like note-takers and image generators have had their moment, but as the tech giants step in, they’re bringing these features directly into their ecosystems, making them harder to ignore.


Announcing The Stargate Project — from openai.com

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.


Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.


Adobe’s new AI tool can edit 10,000 images in one click — from theverge.com by  Jess Weatherbed
Firefly Bulk Create can automatically remove, replace, or extend image backgrounds in huge batches.

Adobe is launching new generative AI tools that can automate labor-intensive production tasks like editing large batches of images and translating video presentations. The most notable is “Firefly Bulk Create,” an app that allows users to quickly resize up to 10,000 images or replace all of their backgrounds in a single click instead of tediously editing each picture individually.

 



Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts

We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)

If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.


Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025

AFTER two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:

  1. Break your own AI policies before you implement them.
  2. Fund your failures.
  3. Resist the pilot program. …
  4. Host Anti-Tech Tech Talks
  5. …+ several more tips

While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.



Maria Anderson responded to Clay’s posting with this idea:

Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.



Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That’s where the new Grammarly feature comes in.

The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”


AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.

Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade version called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.


Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.

 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google DeepMind’s new AI tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to policymaking that make institutions hesitant or slow to produce it. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their own areas but for other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true with every policy. It does not often come fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

The Best of AI 2024: Top Winners Across 9 Categories — from aiwithallie.beehiiv.com by Allie Miller
2025 will be our weirdest year in AI yet. Read this so you’re more prepared.


Top AI Tools of 2024 — from ai-supremacy.com by Michael Spencer (behind a paywall)
Which AI tools stood out for me in 2024? My list.

Memorable AI Tools of 2024
Categories included:

  • Useful
  • Popular
  • Captures the zeitgeist of AI product innovation
  • Fun to try
  • Personally satisfying
  1. NotebookLM
  2. Perplexity
  3. Claude

New “best” AI tool? Really? — from theneurondaily.com by Noah and Grant
PLUS: A free workaround to the “best” new AI…

What is Google’s Deep Research tool, and is it really “the best” AI research tool out there?

Here’s how it works: Think of Deep Research as a research team that can simultaneously analyze 50+ websites, compile findings, and create comprehensive reports—complete with citations.

Unlike asking ChatGPT to research for you, Deep Research shows you its research plan before executing, letting you edit the approach to get exactly what you need.

It’s currently free for the first month (though it’ll eventually be $20/month) when bundled with Gemini Advanced. Then again, Perplexity is always free…just saying.

We couldn’t just take J-Cal’s word for it, so we rounded up some other takes:

Our take: We compared Perplexity, ChatGPT Search, and Deep Research (which we’re calling DR, or “The Docta” for short) on robot capabilities revealed at CES.


An excerpt from today’s Morning Edition from Bloomberg

Global banks will cut as many as 200,000 jobs in the next three to five years—a net 3% of the workforce—as AI takes on more tasks, according to a Bloomberg Intelligence survey. Back, middle office and operations are most at risk. A reminder that Citi said last year that AI is likely to replace more jobs in banking than in any other sector. JPMorgan had a more optimistic view (from an employee perspective, at any rate), saying its AI rollout has augmented, not replaced, jobs so far.


 

 

How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein

Ever since a revolutionary new version of ChatGPT became operable in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.

Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.

Here are the five most popular stories that Education Week published in 2024 about AI in schools.


What’s next with AI in higher education? — from msn.com by Science X Staff

Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:

1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change


5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns

 


AI in 2024: Insights From our 5 Million Readers — from linkedin.com by Generative AI

Checking the Pulse: The Impact of AI on Everyday Lives
So, what exactly did our users have to say about how AI transformed their lives this year?

Top 2024 Developments in AI

  1. Video Generation…
  2. AI Employees…
  3. Open Source Advancements…

Getting ready for 2025: your AI team members (Gift lesson 3/3) — from flexos.com by Daan van Rossum

And that’s why today, I’ll tell you exactly which AI tools I’ve recommended for the top 5 use cases to almost 200 business leaders who took the Lead with AI course.

1. Email Management: Simplifying Communication with AI

  • Microsoft Copilot for Outlook. …
  • Gemini AI for Gmail. …
  • Grammarly. …

2. Meeting Management: Maximize Your Time

  • Otter.ai. …
  • Copilot for Microsoft Teams. …
  • Other AI Meeting Assistants. Zoom AI Companion, Granola, and Fathom

3. Research: Streamlining Information Gathering

  • ChatGPT. …
  • Perplexity. …
  • Consensus. …

…plus several more items and tools that were mentioned by Daan.

 

1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 
© 2025 | Daniel Christian