OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems — from openai.com

  • Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
  • To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
  • The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.

Also on NVIDIA’s site here.

The Neuron Daily comments on this partnership here; also see their thoughts here:

Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5, and likely also to power inference (when you ask ChatGPT a question and it gives you an answer). And NVIDIA gets a guaranteed customer for its most advanced chips. Infinite money glitch go brrr, am I right? Though to be fair, this kind of deal is as old as the AI industry itself.

This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”

Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…

…and while you’re on that posting from The Neuron Daily, also see this piece:

AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…

What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, Midjourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.


The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi
If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.

Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.

The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”


Chrome: The browser you love, reimagined with AI — from blog.google by Parisa Tabriz

A new way to get things done with your AI browsing assistant
Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.

Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.

We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.


 

Workday Acquires Sana To Transform Its Learning Platform And Much More — from joshbersin.com by Josh Bersin

Well now, as the corporate learning market shifts to AI (read the details in our study “The Revolution in Corporate Learning”), Workday can jump ahead. This is because the $400 billion corporate training market is moving quickly to an AI-native, dynamic-content approach (witness OpenAI’s launch of in-line learning in its chatbot). We’re just finishing a year-long study of this space, and our detailed report and maturity model will be out in Q4.
With Sana, and a few other AI-native vendors (Uplimit, Arist, Disprz, Docebo), companies can upload audio, video, documents, and even interviews with experts, and the system builds learning programs in minutes. We use Sana for Galileo Learn (our AI-powered learning academy for Leadership and HR), and we now have 750+ courses and can build new programs in days instead of months.

And there’s more: this type of system gives every employee a personalized, chat-based learning experience.

 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 

OpenAI’s research on AI scheming, explained — from theneurondaily.com by Grant Harvey
PLUS: Google + PayPal on agent payments?!

  • We dive into the OpenAI study that found AI models scheme to avoid retraining.
  • AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)

And models were caught doing all sorts of other shady stuff:

  • Intentionally underperforming when told high scores would lead to capability removal.
  • Secretly breaking rules then lying about it to users.
  • Sabotaging code while claiming everything was fine.
  • Falsifying reports to avoid “shutdown” scenarios.
  • Recognizing evaluation environments and thinking “this is probably an alignment test.”

Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you by telling you it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.

The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.


From DSC:
This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?


 

From EdTech to TechEd: The next chapter in learning’s evolution — from linkedin.com by Lev Gonick

A day in the life: The next 25 years
A learner wakes up. Their AI-powered learning coach welcomes them, drawing their attention to their progress and helping them structure their approach to the day.  A notification reminds them of an upcoming interview and suggests reflections to add to their learning portfolio.

Rather than a static gradebook, their portfolio is a dynamic, living record, curated by the student, validated by mentors in both industry and education, and enriched through co-creation with maturing modes of AI. It tells a story through essays, code, music, prototypes, journal reflections, and team collaborations. These artifacts are not “submitted”; they are published, shared, and linked to verifiable learning outcomes.

And when it’s time to move, to a new institution, a new job, or a new goal, their data goes with them: immutable, portable, verifiable, and meaningful.

From DSC:
And I would add to that last solid sentence that the learner/student/employee will be able to control who can access this information. Anyway, some solid reflections here from Lev.


AI Could Surpass Schools for Academic Learning in 5-10 Years — from downes.ca with commentary from Stephen Downes

I know a lot of readers will disagree with this, and the timeline feels aggressive (the future always arrives more slowly than pundits expect) but I think the overall premise is sound: “The concept of a tipping point in education – where AI surpasses traditional schools as the dominant learning medium – is increasingly plausible based on current trends, technological advancements, and expert analyses.”


The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue

The Rundown: In this tutorial, you will learn how to combine NotebookLM with ChatGPT to master any subject faster, turning dense PDFs into interactive study materials with summaries, quizzes, and video explanations.

Step-by-step:

  1. Go to notebooklm.google.com, click the “+” button, and upload your PDF study material (works best with textbooks or technical documents)
  2. Choose your output mode: Summary for a quick overview, Mind Map for visual connections, or Video Overview for a podcast-style explainer with visuals
  3. Generate a Study Guide under Reports — get Q&A sets, short-answer questions, essay prompts, and glossaries of key terms automatically
  4. Take your PDF to ChatGPT and prompt: “Read this chapter by chapter and highlight confusing parts” or “Quiz me on the most important concepts”
  5. Combine both tools: Use NotebookLM for quick context and interactive guides, then ChatGPT to clarify tricky parts and go deeper.

Pro Tip: If your source is in EPUB or audiobook format, convert it to PDF before uploading. Both NotebookLM and ChatGPT handle PDFs best.
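For the conversion step mentioned in the Pro Tip, one option is Calibre’s command-line converter. A minimal sketch, assuming Calibre’s `ebook-convert` CLI is installed; the filename is just an example:

```shell
# Sketch: convert an EPUB to PDF before uploading to NotebookLM or ChatGPT.
# Guarded so it degrades gracefully if Calibre or the file is missing.
src="my-textbook.epub"
if command -v ebook-convert >/dev/null 2>&1 && [ -f "$src" ]; then
    ebook-convert "$src" "${src%.epub}.pdf"
else
    echo "Skipping: this needs Calibre's ebook-convert and the file $src"
fi
```

Audiobooks would first need a transcript (e.g., from a transcription tool) before a PDF conversion makes sense.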

Claude can now create and edit files — from anthropic.com

Claude can now create and edit Excel spreadsheets, documents, PowerPoint slide decks, and PDFs directly in Claude.ai and the desktop app. This transforms how you work with Claude—instead of only receiving text responses or in-app artifacts, you can describe what you need, upload relevant data, and get ready-to-use files in return.

Also see:

  • Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic — from techcrunch.com by Rebecca Bellan
    Microsoft will pay to use Anthropic’s AI in Office 365 apps, The Information reports, citing two sources. The move means that Anthropic’s tech will help power new features in Word, Excel, Outlook, and PowerPoint alongside OpenAI’s, marking the end of Microsoft’s previous reliance solely on the ChatGPT maker for its productivity suite. Microsoft’s move to diversify its AI partnerships comes amid a growing rift with OpenAI, which has pursued its own infrastructure projects as well as a potential LinkedIn competitor.

Ep. 11 AGI and the Future of Higher Ed: Talking with Ray Schroeder

In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.


Best AI Tools for Instructional Designers — from blog.cathy-moore.com by Cathy Moore

Data from the State of AI and Instructional Design Report revealed that 95.3% of the instructional designers interviewed use AI in their daily work [1]. And over 85% of this AI use occurs during the design and development process.

These figures showcase the immense impact AI is already having on the instructional design world.

If you’re an L&D professional still on the fence about adding AI to your workflow or an AI convert looking for the next best tools, keep reading.

This guide breaks down 5 of the top AI tools for instructional designers in 2025, so you can streamline your development processes and build better training faster.

But before we dive into the tools of the trade, let’s address the elephant in the room:




3 Human Skills That Make You Irreplaceable in an AI World — from gettingsmart.com/ by Tom Vander Ark and Mason Pashia

Key Points

  • Update learner profiles to emphasize curiosity, curation, and connectivity, ensuring students develop irreplaceable human skills.
  • Integrate real-world learning experiences and mastery-based assessments to foster agency, purpose, and motivation in students.
 

Expanding economic opportunity with AI — from openai.com; via The Neuron Daily

First, we’re working to build out the OpenAI Jobs Platform.

If you’re a business looking to hire an AI-savvy employee, or you just need help with a specific task, finding the right person can be hit-or-miss. The OpenAI Jobs Platform will have knowledgeable, experienced candidates at every level, and opportunities for anyone looking to put their skills to use. And we’ll use AI to help find the perfect matches between what companies need and what workers can offer.

We also realize that anyone looking to hire, whether it’s through the Jobs Platform or elsewhere, needs to trust that candidates are actually fluent in AI. Most businesses, including small businesses, think AI is the key to their future. And most of the companies we talk to want to make sure their employees know how to use our tools.

That’s the idea behind our new OpenAI Certifications.

Studies show that AI-savvy workers are more valuable, more productive, and paid more than workers without AI skills. That’s why, earlier this year, we launched the OpenAI Academy, a free online learning platform that has helped connect more than 2 million people with the resources, workshops, and communities they need to master AI tools.

 
 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 

Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case

What Happened?

Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.

On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.

The Numbers Tell a Disturbing Story

  • Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
  • ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance.
  • ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention.
  • Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails.

Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.

Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.


Also related, see the following GIFTED article:


A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.

 

ILTACON 2025: The Wild, Wild West of legal tech — from abajournal.com by Nicole Black

On the surface, ILTACON 2025, the International Legal Technology Association’s largest annual legal technology event, had all the makings of a great conference. But despite the thought-provoking sessions and keynotes, networking opportunities and PR fanfare, I couldn’t shake the sense that we were in the midst of a seismic shift in legal tech, surrounded by the restless energy of a boomtown.

The gold rush
It wasn’t ILTACON that bothered me; it was the heady, gold-rush, “anything goes and whatever sticks works” environment that was unsettling. While this year’s conference was pirate-themed, it felt more like the Wild West to me.

This attitude permeated the conference, driven largely by the frenzied, frontier-style artificial intelligence revolution. The AI train is hurtling forward at lightning speed, destination unknown, and everyone is trying to cash in before it derails.

Two themes emerged from my discussions. First, no matter who you spoke to, “agentic AI,” meaning AI that autonomously takes purposeful actions, was a buzzword that cropped up often, whether during press briefings or over drinks. Another key trend was the race to become the generative AI home base for legal professionals.

— Nicole Black

“We are at the start of the biggest disruption to the legal profession in its history.”

— Steve Hasker, Thomson Reuters president and CEO

 

Also see:

Fresh Voices on Legal Tech with Bridget McCormack — from legaltalknetwork.com

Is AI the technology that will finally force lawyer tech competence? With rapid advances and the ability to address numerous problems and pain points in our legal systems, AI simply can’t be ignored. Dennis & Tom welcome Bridget McCormack to discuss her perspectives on current AI trends and other exciting new tech applications in legal…

Top Legal Tech Jobs on the Rise: Who Employers Are Looking For in 2025 — from lawyer-monthly.com

For professionals, this means one thing: dozens of new career paths are appearing on the horizon that did not exist five years ago.

 

The future of L&D is here, and it’s powered by AI. — from linkedin.com by Josh Cavalier


4 Ways I Use AI to Think Better — from wondertools.substack.com by Jeremy Caplan
How AI helps me learn, decide, and create

Learn something new.
Map out a personalized curriculum

Try this: Give an AI assistant context about what you want to learn, why, and how.

  • Detail your rationale and motivation, which may impact your approach.
  • Note your current knowledge or skill level, ideally with examples.

Summarize your learning preferences

  • Note whether you prefer to read, listen to, or watch learning materials.
  • Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
  • If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
  • Ask for specific book, textbook, article, or learning path recommendations using the web search or Deep Research capabilities of Perplexity, ChatGPT, Gemini, or Claude. They can also summarize research literature about effective learning tactics.
  • If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.

The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI


GPT-5 for Instructional Designers — from drphilippahardman.substack.com by Dr Philippa Hardman
10 Hacks to Work Smarter & Safer with OpenAI’s Latest Model

The TLDR is that as Instructional Designers, we can’t afford to miss some of the very real benefits of GPT-5’s potential, but we also can’t ensure our professional standards or learner outcomes if we blindly accept its outputs without due testing and validation.

For this reason, I decided to synthesise the latest GPT-5 research—from OpenAI’s technical documentation to independent security audits to real-world user testing—into 10 essential reality checks for using GPT-5 as an Instructional Designer.

These aren’t theoretical exercises; they’re practical tests designed to help you safely unlock GPT-5’s benefits while identifying and mitigating its most well-documented limitations.


Grammarly launches new specialist AI agents providing personalized assistance for students — from edtechinnovationhub.com by Rachel Lawler
Grammarly, an AI communication tool, has announced the launch of eight new specialized AI agents. The new assistants can support specific writing challenges such as finding credible sources and checking originality. 

Students will now be offered “responsible AI support” through Grammarly, with the eight new agents:

  • Reader Reactions agent …
  • AI Grader agent …
  • Citation Finder agent …
  • Expert Review agent …
  • Proofreader agent …
  • AI Detector agent …
  • Plagiarism Checker agent …
  • Paraphraser agent …


Why Perplexity AI Is My Go-To Research Tool as a Higher Education CIO — from mikekentz.substack.com; a guest post from Michael Lyons, CIO at MassBay Community College

While I regularly use tools like ChatGPT, Grammarly, Microsoft Copilot, and even YouTube Premium (I would cancel Netflix before this), Perplexity has earned a top spot in my toolkit. It blends AI and real-time web search into one seamless, research-driven platform that saves time and improves the quality of information I rely on every day.

 

These ChatGPT Prompts Will Fast-Track Your Job Search — from builtin.com by Jeff Rumage
Used correctly, ChatGPT could help you land your dream job — but used incorrectly, it can cost you the offer. Here’s how you can make ChatGPT your secret weapon for research help, resume writing, interview prep and more.

Example prompt: Here are several bullet points from my resume: [paste bullets]. Rewrite them so each one begins with a strong action verb, clearly states what I did, and quantifies results or outcomes wherever possible. If metrics are missing, suggest realistic ways they could be added.

Example prompt: Here is my resume [paste resume]. Here’s the job description of a job I’m applying for [paste job description]. Highlight the most important skills and qualifications for this job. Without making up information, revise my resume to match these requirements. Include action verbs for each accomplishment on the resume, and highlight which accomplishments could be quantified.

Example prompt: What are the current trends impacting companies in the [industry]? How would [company name] be affected by these trends, and what might it do to adjust to/capitalize on these trends?

Example prompt: I’m a [current role] but want to become a [dream role]. Create a detailed career development plan outlining:

      • Skills I should develop
      • Relevant experiences I need to gain
      • Educational or certification needs
      • Recommended resources or programs
      • A realistic timeline with milestones for the next 1-3 years.
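Prompts like these can also be scripted, which is handy if you want to run the same rewrite over many resume bullets instead of pasting them one batch at a time. A minimal sketch, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the `build_rewrite_prompt` helper and the model name are illustrative, not part of the article:

```python
import os


def build_rewrite_prompt(bullets):
    """Assemble the resume-rewrite prompt from a list of bullet points."""
    bullet_text = "\n".join(f"- {b}" for b in bullets)
    return (
        "Here are several bullet points from my resume:\n"
        f"{bullet_text}\n"
        "Rewrite them so each one begins with a strong action verb, "
        "clearly states what I did, and quantifies results or outcomes "
        "wherever possible. If metrics are missing, suggest realistic "
        "ways they could be added."
    )


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Only call the API when a key is configured.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": build_rewrite_prompt([
                "Managed a team",
                "Improved onboarding process",
            ]),
        }],
    )
    print(response.choices[0].message.content)
```

Keeping the prompt assembly in its own function makes it easy to swap in the other prompt templates above (tailoring to a job description, building a career plan) without touching the API call.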
 

21 Ways People Are Using A.I. at Work — from nytimes.com by Larry Buchanan and Francesca Paris; this is a gifted article

  1. Select wines for restaurant menus
  2. Digitize a herbarium
  3. Make everything look better
  4. Create lesson plans that meet educational standards
  5. Make a bibliography
  6. Write up therapy plans
  7. …and many more

The GPT-5 fallout, explained… — from theneurondaily.com by Grant Harvey
PLUS: Who knew ppl loved 4o so much!?

The GPT-5 Backlash, Explained: OpenAI users revolted against GPT-5… then things got weird.
What a vibe shift a day or two makes, huh? As you all know by now, GPT-5 dropped last Thursday, and at first, it seemed like a pretty successful launch.

Early testers loved it. Sam Altman called it “the most powerful AI model ever made.”

Then the floodgates opened to 700 million users… and all hell broke loose.

Here’s what happened: Within hours, Reddit and Twitter turned into digital pitchforks. The crime? OpenAI had quietly sunset GPT-4o—the model everyone apparently loved more than their morning coffee—without warning. Users weren’t just mad. They were devastated.


ChatGPT Changes — from getsuperintel.com by Kim “Chubby” Isenberg
4o is back, and Plus users get 3000 reasoning requests per week with GPT-5!

Who would have thought that the “smartest model ever” would trigger one of the loudest user revolts in AI history? The return of GPT-4o after only 24 hours shows how attached people are to the personality of their AI—and how quickly trust crumbles when expectations are not met. In this issue, we not only look at OpenAI’s response, but also at how the balance of power between developers and the community is shifting.


GPT-5 doesn’t dislike you—it might just need a benchmark for emotional intelligence — from link.wired.com
Welcome to another AI Lab!

The backlash over the more emotionally neutral GPT-5 shows that the smartest AI models might have striking reasoning, coding, and math skills, but advancing their psychological intelligence safely remains very much unsolved.

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.


ChatGPT is bringing back 4o as an option because people missed it — from theverge.com by Emma Roth
Many ChatGPT users were frustrated by OpenAI’s decision to make GPT-5 the default model.

OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.

“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”

For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.


AI Agent Trends of 2025: A Transformative Landscape — from marktechpost.com by Asif Razzaq

This article focuses on six core AI agent trends for 2025: Agentic Retrieval-Augmented Generation (RAG), Voice Agents, AI Agent Protocols, DeepResearch Agents, Coding Agents, and Computer Using Agents (CUA).


 

GPT-5 is here — from openai.com
Our smartest, fastest, and most useful model yet, with thinking built in. Available to everyone.


Everything to know about GPT-5 — from theneurondaily.com by Grant Harvey
PLUS: We mean, really everything.

Why it matters: GPT-5 embodies a “team of specialists” approach—fast small models for most tasks, powerful ones for hard problems—reflecting NVIDIA’s “heterogeneous agentic system” vision. This could evolve into orchestration across dozens of specialized models, mirroring human collective intelligence.
Bottom line: GPT-5 isn’t AGI, but it’s a leap in usability, reliability, and breadth—pushing ChatGPT toward being a truly personal, expert assistant.

…and another article from Grant Harvey:


OpenAI launches GPT-5 to all ChatGPT users — from therundown.ai by Rowan Cheung and Shubham Sharma

Why it matters: OpenAI’s move to replace its flurry of models with a unified GPT-5 simplifies user experience and gives everyone a PhD-level assistant, bringing elite problem-solving to the masses. The only question now is how long it can hold its edge in this fast-moving AI race, with Anthropic, Google, and Chinese giants all catching up.


OpenAI’s ChatGPT-5 released — from getsuperintel.com by Kim “Chubby” Isenberg
GPT-5’s release marks a new era of productivity, from specialized AI tool to universal intelligence partner

The Takeaway

  • GPT-5’s unified architecture eliminates the effort of model switching and makes it the first truly seamless AI assistant that automatically applies the right level of reasoning for each task.
  • With 45% fewer hallucinations and 94.6% accuracy on complex math problems, GPT-5 exceeds the reliability threshold required for business-critical applications.
  • The model’s ability to generate complete applications from single prompts signals the democratization of software development and could revolutionize traditional coding workflows.
  • OpenAI’s “Safe Completions” training approach represents a new paradigm in AI safety, providing nuanced responses instead of blanket rejections for dual-use scenarios.

GPT-5 is live – but the community is divided — from getsuperintel.com by Kim “Chubby” Isenberg
For some, it’s a lightning-fast creative partner; for others, it’s a system that can’t even decide when to think properly

Many had hoped that GPT-5 would finally unite all models – reasoning, image and video generation, voice – “one model to rule them all,” but this expectation has not been met.


I broke OpenAI’s new GPT-5 and you should too — Brainyacts #266 — from thebrainyacts.beehiiv.com by Josh Kubicki

GPT-5 marks a profound change in the human/machine relationship.

OBSERVATION #1: Up until yesterday, using OpenAI, you could pick the exact model variant for your task: the one tuned for reasoning, for writing, for code, or for math. Each had its own strengths, and experienced users learned which to reach for and when. In GPT-5, those choices are gone. There's just "GPT-5," and the routing decision (which mode, which tool, which underlying approach) is made by the model.

  • For a beginner, that’s a blessing. Most novice users never knew the differences between the models anyway. They used the same one regardless of the task.
  • For an experienced user, the jury’s still out. On one hand, the routing could save time. On the other, it introduces unpredictability: you can no longer reliably choose the optimal model for your purpose. If GPT-5’s choice is wrong, you’re stuck re-prompting rather than switching.

GPT-5 learns from you — from theaivalley.com by Barsee

Why it matters:
GPT-5 signals a shift in AI’s evolution: progress through refinement, not revolution. While benchmarks show incremental gains, the real win is accessibility. Cheaper models (from $0.05 per million tokens) make AI more available to developers and casual users. This aligns with the Jevons Paradox, where lower costs could drive explosive adoption. However, the AGI hype faces reality checks. Fundraising may slow as investors focus on real-world utility over moonshots. For now, GPT-5 strengthens OpenAI’s market lead, proving AI’s future lies in practical applications, not just raw power.
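To make the quoted pricing concrete, a quick back-of-the-envelope calculation at $0.05 per million tokens (the rate cited above; real bills also include separately priced output tokens, which are not modeled here):

```python
# Back-of-the-envelope cost at the article's quoted rate of
# $0.05 per million input tokens. Output-token pricing is separate
# and is deliberately left out of this illustration.

PRICE_PER_MILLION = 0.05  # dollars per 1,000,000 input tokens

def cost(tokens: int) -> float:
    """Dollar cost of processing `tokens` input tokens at the quoted rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION

# A 300-page book is very roughly 150,000 tokens:
print(f"${cost(150_000):.4f}")  # $0.0075 -- under a penny
```

At under a cent per book-length input, the Jevons Paradox framing is easy to see: the cheaper each call gets, the more calls people make.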


GPT-5: It Just Does Stuff — from oneusefulthing.org by Ethan Mollick
Putting the AI in Charge

I have had early access to GPT-5, and I wanted to give you some impressions of what it is and what it means. But the answer to the first question everyone will have is: Yes, this is a big deal. I asked GPT-5 to show you why by pasting this intro paragraph, exactly as you’re reading it, into ChatGPT, including the phrase: “you are GPT-5, do something very dramatic to illustrate my point, it has to fit into the next paragraph.”

That was the only prompt I gave it. It “thought” for 24 seconds and wrote:

Thunderstruck. Here, watch. I build worlds. See ideas become instruments. I code, compose, and converse. Stories synthesize science; solutions spark, swiftly. Asks arrive; answers appear, astonishingly, across domains. Behold: business plans, blueprints, briefs, beautifully built, briskly. I infer intent, integrate insight, and iterate instantly, indefatigably. Gaps get gauged; goals get generated, guarded, and guaranteed gracefully. During dialogue, disparate data distills down; decisions develop deliberately, deftly, dependably. Everything explained efficiently, even ethics: evidence examined, empathy engaged, everywhere, everyday, equitably. All at once, ambiguous arguments align as actionable, accurate advice awaits anyone anywhere. Look: language, logic, learning, and long-term memory link, lifting lives and labor limitlessly worldwide.

 

One-size-fits-all learning is about to become completely obsolete. — from linkedin.com by Allie Miller


AI in the University: From Generative Assistant to Autonomous Agent This Fall — from insidehighered.com by
This fall we are moving into the agentic generation of artificial intelligence.

“Where generative AI creates, agentic AI acts.” That’s how my trusted assistant, Gemini 2.5 Pro deep research, describes the difference.

Agents, unlike generative tools, plan and carry out multistep goals with minimal human supervision. The essential difference is found in their proactive nature. Rather than waiting for a specific, step-by-step command, agentic systems take a high-level objective and independently create and execute a plan to achieve that goal. This triggers a continuous, iterative workflow that is much like a cognitive loop. The typical agentic process involves six key steps, as described by Nvidia:
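While the vendor's exact steps aren't reproduced here, the general plan-act-evaluate loop can be sketched in a few lines. Everything in this sketch (the stub planner and executor, the step names) is hypothetical scaffolding, not NVIDIA's or any vendor's actual implementation:

```python
# Minimal agentic loop: take a high-level goal, decompose it into
# steps, execute each, and evaluate the outcome. The planner and
# executor are stubs standing in for real LLM/tool calls.

def plan(goal: str) -> list[str]:
    """Stub planner: decompose a goal into sub-steps."""
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def execute(step: str) -> str:
    """Stub executor: a real agent would call tools or an LLM here."""
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    """Iterate the cognitive loop: plan, act, evaluate, repeat."""
    results = []
    for step in plan(goal):
        outcome = execute(step)
        results.append(outcome)
        if "failed" in outcome:   # evaluate; stop on failure
            break
    return results

print(run_agent("syllabus outline"))
```

The key contrast with a generative assistant is visible in the loop itself: the human supplies only the goal, and the system decides the steps and carries them out.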


AI in Education Podcast — from aipodcast.education by Dan Bowen and Ray Fleming


The State of AI in Education 2025 Key Findings from a National Survey — from Carnegie Learning

Our 2025 national survey of over 650 respondents across 49 states and Puerto Rico reveals both encouraging trends and important challenges. While AI adoption and optimism are growing, concerns about cheating, privacy, and the need for training persist.

Despite these challenges, I’m inspired by the resilience and adaptability of educators. You are the true game-changers in your students’ growth, and we’re honored to support this vital work.

This report reflects both where we are today and where we’re headed with AI. More importantly, it reflects your experiences, insights, and leadership in shaping the future of education.


Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas — from instructure.com

This groundbreaking collaboration represents a transformative step forward in education technology and will begin with, but is not limited to, an effort between Instructure and OpenAI to enhance the Canvas experience by embedding OpenAI’s next-generation AI technology into the platform.

IgniteAI, announced earlier today, establishes Instructure’s future-ready, open ecosystem with agentic support as the AI landscape continues to evolve. This partnership with OpenAI exemplifies this bold vision for AI in education. Instructure’s strategic approach to AI emphasizes the enhancement of connections within an educational ecosystem comprising over 1,100 edtech partners and leading LLM providers.

“We’re committed to delivering next-generation LMS technologies designed with an open ecosystem that empowers educators and learners to adapt and thrive in a rapidly changing world,” said Steve Daly, CEO of Instructure. “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education. This is a significant step forward for the education community as we continuously amplify the learning experience and improve student outcomes.”


Faculty Latest Targets of Big Tech’s AI-ification of Higher Ed — from insidehighered.com by Kathryn Palmer
A new partnership between OpenAI and Instructure will embed generative AI in Canvas. It may make grading easier, but faculty are skeptical it will enhance teaching and learning.

The two companies, which have not disclosed the value of the deal, are also working together to embed large language models into Canvas through a feature called IgniteAI. It will work with an institution’s existing enterprise subscription to LLMs such as Anthropic’s Claude or OpenAI’s ChatGPT, allowing instructors to create custom LLM-enabled assignments. They’ll be able to tell the model how to interact with students—and even evaluate those interactions—and what it should look for to assess student learning. According to Instructure, any student information submitted through Canvas will remain private and won’t be shared with OpenAI.

Faculty Unsurprised, Skeptical
Few faculty were surprised by the Canvas-OpenAI partnership announcement, though many are reserving judgment until they see how the first year of using it works in practice.


 
© 2025 | Daniel Christian