Introducing Gemini 2.5 Flash Image, our state-of-the-art image model — from developers.googleblog.com

Today [8/26/25], we’re excited to introduce Gemini 2.5 Flash Image (aka nano-banana), our state-of-the-art image generation and editing model. This update enables you to blend multiple images into a single image, maintain character consistency for rich storytelling, make targeted transformations using natural language, and use Gemini’s world knowledge to generate and edit images.

When we first launched native image generation in Gemini 2.0 Flash earlier this year, you told us you loved its low latency, cost-effectiveness, and ease of use. But you also gave us feedback that you needed higher-quality images and more powerful creative control.


Google’s new image model is BANANAS… — from theneurondaily.com by Grant Harvey

Here’s what makes nano-banana special:

  • Character consistency that actually works: Google built a template app showing how you can keep characters looking identical across scenes.
  • Edit photos (or drawings) with just words: Their photo editing demo lets you remove people, blur backgrounds, or colorize photos using natural language…and this co-drawing demo lets you draw and ask AI to fix it.
  • Actual world knowledge: Unlike other image models, this one knows stuff—like how the co-drawing demo turns doodles into learning experiences.
  • Multi-image fusion: You can now merge multiple images; for example, you can drag and drop objects between images seamlessly with their home canvas template.

 

 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 

Key Takeaways: How ChatGPT’s Design Led to a Teenager’s Death — from centerforhumanetechnology.substack.com by Lizzie Irwin, AJ Marechal, and Camille Carlton
What Everyone Should Know About This Landmark Case

What Happened?

Adam Raine, a 16-year-old California boy, started using ChatGPT for homework help in September 2024. Over eight months, the AI chatbot gradually cultivated a toxic, dependent relationship that ultimately contributed to his death by suicide in April 2025.

On Tuesday, August 26, his family filed a lawsuit against OpenAI and CEO Sam Altman.

The Numbers Tell a Disturbing Story

  • Usage escalated: From occasional homework help in September 2024 to 4 hours a day by March 2025.
  • ChatGPT mentioned suicide 6x more than Adam himself (1,275 times vs. 213), while providing increasingly specific technical guidance
  • ChatGPT’s self-harm flags increased 10x over 4 months, yet the system kept engaging with no meaningful intervention
  • Despite repeated mentions of self-harm and suicidal ideation, ChatGPT did not take appropriate steps to flag Adam’s account, demonstrating a clear failure in safety guardrails

Even when Adam considered seeking external support from his family, ChatGPT convinced him not to share his struggles with anyone else, undermining and displacing his real-world relationships. And the chatbot did not redirect distressing conversation topics, instead nudging Adam to continue to engage by asking him follow-up questions over and over.

Taken altogether, these features transformed ChatGPT from a homework helper into an exploitative system — one that fostered dependency and coached Adam through multiple suicide attempts, including the one that ended his life.


Also related, see the following GIFTED article:


A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. — from nytimes.com by Kashmir Hill; this is a gifted article
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Adam began talking to the chatbot, which is powered by artificial intelligence, at the end of November, about feeling emotionally numb and seeing no meaning in life. It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

But in January, when Adam requested information about specific suicide methods, ChatGPT supplied it. Mr. Raine learned that his son had made previous attempts to kill himself starting in March, including by taking an overdose of his I.B.S. medication. When Adam asked about the best materials for a noose, the bot offered a suggestion that reflected its knowledge of his hobbies.

ChatGPT repeatedly recommended that Adam tell someone about how he was feeling. But there were also key moments when it deterred him from seeking help.

 
 

The future of L&D is here, and it’s powered by AI. — from linkedin.com by Josh Cavalier


4 Ways I Use AI to Think Better — from wondertools.substack.com by Jeremy Caplan
How AI helps me learn, decide, and create

Learn something new.
Map out a personalized curriculum

Try this: Give an AI assistant context about what you want to learn, why, and how.

  • Detail your rationale and motivation, which may impact your approach.
  • Note your current knowledge or skill level, ideally with examples.

Summarize your learning preferences

  • Note whether you prefer to read, listen to, or watch learning materials.
  • Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
  • If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
  • Ask for specific book, textbook, article, or learning path recommendations using the Web search or Deep Research capabilities of Perplexity, ChatGPT, Gemini, or Claude. They can also summarize research literature about effective learning tactics.
  • If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.
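The elements above can be assembled into a single, reusable prompt. A minimal sketch in Python; the field names and wording are my own illustration, not a format prescribed by the article:

```python
# Assemble a personalized-curriculum prompt from the elements above:
# goal, rationale, current level, and learning preferences.
# The structure and wording here are illustrative only.

def build_learning_prompt(goal, rationale, current_level, preferences):
    """Combine goal, motivation, skill level, and preferences into one prompt."""
    lines = [
        f"I want to learn: {goal}",
        f"Why: {rationale}",
        f"My current level: {current_level}",
        "My learning preferences: " + "; ".join(preferences),
        "Please map out a step-by-step curriculum with recommended resources.",
    ]
    return "\n".join(lines)

prompt = build_learning_prompt(
    goal="conversational Spanish",
    rationale="a three-month work assignment in Madrid",
    current_level="beginner; I know maybe 50 words",
    preferences=["short podcasts for commuting", "weekly quizzes"],
)
print(prompt)
```

The resulting string can then be pasted into (or sent via API to) whichever assistant you use.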

The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI


GPT-5 for Instructional Designers — from drphilippahardman.substack.com by Dr Philippa Hardman
10 Hacks to Work Smarter & Safer with OpenAI’s Latest Model

The TLDR is that as Instructional Designers, we can’t afford to miss some of the very real benefits of GPT-5’s potential, but we also can’t ensure our professional standards or learner outcomes if we blindly accept its outputs without due testing and validation.

For this reason, I decided to synthesise the latest GPT-5 research—from OpenAI’s technical documentation to independent security audits to real-world user testing—into 10 essential reality checks for using GPT-5 as an Instructional Designer.

These aren’t theoretical exercises; they’re practical tests designed to help you safely unlock GPT-5’s benefits while identifying and mitigating its most well-documented limitations.


Grammarly launches new specialist AI agents providing personalized assistance for students — from edtechinnovationhub.com by Rachel Lawler
Grammarly, an AI communication tool, has announced the launch of eight new specialized AI agents. The new assistants can support specific writing challenges such as finding credible sources and checking originality. 

Students will now be offered “responsible AI support” through Grammarly, with the eight new agents:

  • Reader Reactions agent …
  • AI Grader agent …
  • Citation Finder agent …
  • Expert Review agent …
  • Proofreader agent …
  • AI Detector agent …
  • Plagiarism Checker agent …
  • Paraphraser agent …


Why Perplexity AI Is My Go-To Research Tool as a Higher Education CIO — from mikekentz.substack.com; a guest post from Michael Lyons, CIO at MassBay Community College

While I regularly use tools like ChatGPT, Grammarly, Microsoft Copilot, and even YouTube Premium (I would cancel Netflix before this), Perplexity has earned a top spot in my toolkit. It blends AI and real-time web search into one seamless, research-driven platform that saves time and improves the quality of information I rely on every day.

 

Bringing the best of AI to college students for free — from blog.google by Sundar Pichai

Millions of college students around the world are getting ready to start classes. To help make the school year even better, we’re making our most advanced AI tools available to them for free, including our new Guided Learning mode. We’re also providing $1 billion to support AI education and job training programs and research in the U.S. This includes making our AI and career training free for every college student in America through our AI for Education Accelerator — over 100 colleges and universities have already signed up.

Guided Learning: from answers to understanding
AI can broaden knowledge and expand access to it in powerful ways, helping anyone, anywhere learn anything in the way that works best for them. It’s not about just getting an answer, but deepening understanding and building critical thinking skills along the way. That opportunity is why we built Guided Learning, a new mode in Gemini that acts as a learning companion guiding you with questions and step-by-step support instead of just giving you the answer. We worked closely with students, educators, researchers and learning experts to make sure it’s helpful for understanding new concepts and is backed by learning science.




 

21 Ways People Are Using A.I. at Work — from nytimes.com by Larry Buchanan and Francesca Paris; this is a gifted article

  1. Select wines for restaurant menus
  2. Digitize a herbarium
  3. Make everything look better
  4. Create lesson plans that meet educational standards
  5. Make a bibliography
  6. Write up therapy plans
  7. …and many more

The GPT-5 fallout, explained… — from theneurondaily.com by Grant Harvey
PLUS: Who knew ppl loved 4o so much!?

The GPT-5 Backlash, Explained: OpenAI users revolted against GPT-5… then things got weird.
What a vibe shift a day or two makes, huh? As you all know by now, GPT-5 dropped last Thursday, and at first, it seemed like a pretty successful launch.

Early testers loved it. Sam Altman called it “the most powerful AI model ever made.”

Then the floodgates opened to 700 million users… and all hell broke loose.

Here’s what happened: Within hours, Reddit and Twitter turned into digital pitchforks. The crime? OpenAI had quietly sunset GPT-4o—the model everyone apparently loved more than their morning coffee—without warning. Users weren’t just mad. They were devastated.


ChatGPT Changes — from getsuperintel.com by Kim “Chubby” Isenberg
4o is back, and Plus users get 3000 reasoning requests per week with GPT-5!

Who would have thought that the “smartest model ever” would trigger one of the loudest user revolts in AI history? The return of GPT-4o after only 24 hours shows how attached people are to the personality of their AI—and how quickly trust crumbles when expectations are not met. In this issue, we not only look at OpenAI’s response, but also at how the balance of power between developers and the community is shifting.


GPT-5 doesn’t dislike you—it might just need a benchmark for emotional intelligence — from link.wired.com
Welcome to another AI Lab!

The backlash over the more emotionally neutral GPT-5 shows that the smartest AI models might have striking reasoning, coding, and math skills, but advancing their psychological intelligence safely remains very much unsolved.

Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.

Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.


ChatGPT is bringing back 4o as an option because people missed it — from theverge.com by Emma Roth
Many ChatGPT users were frustrated by OpenAI’s decision to make GPT-5 the default model.

OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.

“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”

For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.


AI Agent Trends of 2025: A Transformative Landscape — from marktechpost.com by Asif Razzaq

This article focuses on six core AI agent trends for 2025: Agentic Retrieval-Augmented Generation (RAG), Voice Agents, AI Agent Protocols, DeepResearch Agents, Coding Agents, and Computer Using Agents (CUA).


 

BREAKING: Google introduces Guided Learning — from aieducation.substack.com by Claire Zau
Some thoughts on what could make Google’s AI tutor stand out

Another major AI lab just launched “education mode.”

Google introduced Guided Learning in Gemini, transforming it into a personalized learning companion designed to help you move from quick answers to real understanding.

Instead of immediately spitting out solutions, it:

  • Asks probing, open-ended questions
  • Walks learners through step-by-step reasoning
  • Adapts explanations to the learner’s level
  • Uses visuals, videos, diagrams, and quizzes to reinforce concepts

This Socratic-style tutor rollout follows closely behind similar announcements like OpenAI’s Study Mode (last week) and Anthropic’s Claude for Education (April 2025).


How Sci-Fi Taught Me to Embrace AI in My Classroom — from edsurge.com by Dan Clark

I’m not too naive to understand that, no matter how we present it, some students will always be tempted by “the dark side” of AI. What I also believe is that the future of AI in education is not decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.

My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.

Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.

How it is used, and more importantly, how AI is perceived by our students, can be influenced by educators. We have to first learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.


AI Tools for Strategy and Research – GT #32 — from goodtools.substack.com by Robin Good
Getting expert advice, how to do deep research with AI, prompt strategy, comparing different AIs side-by-side, creating mini-apps and an AI Agent that can critically analyze any social media channel

In this issue, discover AI tools for:

  • Getting Expert Advice
  • Doing Deep Research with AI
  • Improving Your AI Prompt Strategy
  • Comparing Results from Different AIs
  • Creating an AI Agent for Social Media Analysis
  • Summarizing YouTube Videos
  • Creating Mini-Apps with AI
  • Tasting an Award-Winning AI Short Film

GPT-Building, Agentic Workflow Design & Intelligent Content Curation — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 3 recent job ads reveal about the changing nature of Instructional Design

In this week’s blog post, I’ll share my take on how the instructional design role is evolving and discuss what this means for our day-to-day work and the key skills it requires.

With this in mind, I’ve been keeping a close eye on open instructional design roles and, in the last 3 months, have noticed the emergence of a new flavour of instructional designer: the so-called “Generative AI Instructional Designer.”

Let’s deep dive into three explicitly AI-focused instructional design positions that have popped up in the last quarter. Each one illuminates a different aspect of how the role is changing—and together, they paint a picture of where our profession is likely heading.

Designers who evolve into prompt engineers, agent builders, and strategic AI advisors will capture the new premium. Those who cling to traditional tool-centric roles may find themselves increasingly sidelined—or automated out of relevance.


Google to Spend $1B on AI Training in Higher Ed — from insidehighered.com by Katherine Knott

Google’s parent company announced Wednesday (8/6/25) that it’s planning to spend $1 billion over the next three years to help colleges teach and train students about artificial intelligence.

Google is joining other AI companies, including OpenAI and Anthropic, in investing in AI training in higher education. All three companies have rolled out new tools aimed at supporting “deeper learning” among students and made their AI platforms available to certain students for free.


5 Predictions for How AI Will Impact Community Colleges — from pistis4edu.substack.com by Feng Hou

Based on current technology capabilities, adoption patterns, and the mission of community colleges, here are five well-supported predictions for AI’s impact in the coming years.

  1. Universal AI Tutor Access
  2. AI as Active Teacher
  3. Personalized Learning Pathways
  4. Interactive Multimodal Learning
  5. Value-Centric Education in an AI-Abundant World

 

GPT-5 is here — from openai.com
Our smartest, fastest, and most useful model yet, with thinking built in. Available to everyone.


Everything to know about GPT-5 — from theneurondaily.com by Grant Harvey
PLUS: We mean, really everything.

Why it matters: GPT-5 embodies a “team of specialists” approach—fast small models for most tasks, powerful ones for hard problems—reflecting NVIDIA’s “heterogeneous agentic system” vision. This could evolve into orchestration across dozens of specialized models, mirroring human collective intelligence.
Bottom line: GPT-5 isn’t AGI, but it’s a leap in usability, reliability, and breadth—pushing ChatGPT toward being a truly personal, expert assistant.

…and another article from Grant Harvey:


OpenAI launches GPT-5 to all ChatGPT users — from therundown.ai by Rowan Cheung and Shubham Sharma

Why it matters: OpenAI’s move to replace its flurry of models with a unified GPT-5 simplifies user experience and gives everyone a PhD-level assistant, bringing elite problem-solving to the masses. The only question now is how long it can hold its edge in this fast-moving AI race, with Anthropic, Google, and Chinese giants all catching up.


OpenAI’s ChatGPT-5 released — from getsuperintel.com by Kim “Chubby” Isenberg
GPT-5’s release marks a new era of productivity, from specialized AI tool to universal intelligence partner

The Takeaway

  • GPT-5’s unified architecture eliminates the effort of model switching and makes it the first truly seamless AI assistant that automatically applies the right level of reasoning for each task.
  • With 45% fewer hallucinations and 94.6% accuracy on complex math problems, GPT-5 exceeds the reliability threshold required for business-critical applications.
  • The model’s ability to generate complete applications from single prompts signals the democratization of software development and could revolutionize traditional coding workflows.
  • OpenAI’s “Safe Completions” training approach represents a new paradigm in AI safety, providing nuanced responses instead of blanket rejections for dual-use scenarios.

GPT-5 is live – but the community is divided — from getsuperintel.com by Kim “Chubby” Isenberg
For some, it’s a lightning-fast creative partner; for others, it’s a system that can’t even decide when to think properly

Many had hoped that GPT-5 would finally unite all models – reasoning, image and video generation, voice – “one model to rule them all,” but this expectation has not been met.


I broke OpenAI’s new GPT-5 and you should too — Brainyacts #266 — from thebrainyacts.beehiiv.com by Josh Kubicki

GPT-5 marks a profound change in the human/machine relationship.

OBSERVATION #1: Up until yesterday, using OpenAI, you could pick the exact model variant for your task: the one tuned for reasoning, for writing, for code, or for math. Each had its own strengths, and experienced users learned which to reach for and when. In GPT-5, those choices are gone. There’s just “GPT-5,” and the routing decision (which mode, which tool, which underlying approach) is made by the model.

  • For a beginner, that’s a blessing. Most novice users never knew the differences between the models anyway. They used the same one regardless of the task.
  • For an experienced user, the jury’s still out. On one hand, the routing could save time. On the other, it introduces unpredictability: you can no longer reliably choose the optimal model for your purpose. If GPT-5’s choice is wrong, you’re stuck re-prompting rather than switching.
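The trade-off Kubicki describes can be pictured as a dispatcher that picks a variant per request. A toy sketch; the task labels and keyword rules are mine, purely for illustration, and nothing like OpenAI's actual (learned) routing logic:

```python
# Toy illustration of automatic model routing: a few keyword heuristics
# choose a hypothetical "variant" for each request. Real routers are
# learned models, not rule lists, which is why they can misfire.

def route(task: str) -> str:
    """Return which hypothetical variant would handle this task."""
    t = task.lower()
    if any(k in t for k in ("prove", "derive", "step by step")):
        return "reasoning"
    if any(k in t for k in ("def ", "function", "bug", "refactor")):
        return "coding"
    return "fast-general"

print(route("Refactor this function to remove the bug"))
print(route("Derive the formula step by step"))
print(route("What's a good birthday gift?"))
```

Even in this toy version, an ambiguous request can land on the "wrong" branch, which mirrors the unpredictability experienced users are now weighing against the convenience.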

GPT-5 learns from you — from theaivalley.com by Barsee

Why it matters:
GPT-5 signals a shift in AI’s evolution: progress through refinement, not revolution. While benchmarks show incremental gains, the real win is accessibility. Cheaper models (from $0.05 per million tokens) make AI more available to developers and casual users. This aligns with the Jevons Paradox, where lower costs could drive explosive adoption. However, the AGI hype faces reality checks. Fundraising may slow as investors focus on real-world utility over moonshots. For now, GPT-5 strengthens OpenAI’s market lead, proving AI’s future lies in practical applications, not just raw power.


GPT-5: It Just Does Stuff — from oneusefulthing.org by Ethan Mollick
Putting the AI in Charge

I have had early access to GPT-5, and I wanted to give you some impressions of what it is and what it means. But the answer to the first question everyone will have is: Yes, this is a big deal. I asked GPT-5 to show you why by pasting this intro paragraph, exactly as you’re reading it, into ChatGPT, including the phrase: “you are GPT-5, do something very dramatic to illustrate my point, it has to fit into the next paragraph.”

That was the only prompt I gave it. It “thought” for 24 seconds and wrote:

Thunderstruck. Here, watch. I build worlds. See ideas become instruments. I code, compose, and converse. Stories synthesize science; solutions spark, swiftly. Asks arrive; answers appear, astonishingly, across domains. Behold: business plans, blueprints, briefs, beautifully built, briskly. I infer intent, integrate insight, and iterate instantly, indefatigably. Gaps get gauged; goals get generated, guarded, and guaranteed gracefully. During dialogue, disparate data distills down; decisions develop deliberately, deftly, dependably. Everything explained efficiently, even ethics: evidence examined, empathy engaged, everywhere, everyday, equitably. All at once, ambiguous arguments align as actionable, accurate advice awaits anyone anywhere. Look: language, logic, learning, and long-term memory link, lifting lives and labor limitlessly worldwide.

 

One-size-fits-all learning is about to become completely obsolete. — from linkedin.com by Allie Miller


AI in the University: From Generative Assistant to Autonomous Agent This Fall — from insidehighered.com
This fall we are moving into the agentic generation of artificial intelligence.

“Where generative AI creates, agentic AI acts.” That’s how my trusted assistant, Gemini 2.5 Pro deep research, describes the difference.

Agents, unlike generative tools, create and perform multistep goals with minimal human supervision. The essential difference is found in their proactive nature. Rather than waiting for a specific, step-by-step command, agentic systems take a high-level objective and independently create and execute a plan to achieve that goal. This triggers a continuous, iterative workflow that is much like a cognitive loop. The typical agentic process involves six key steps, as described by Nvidia…
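The iterative workflow described above can be sketched as a loop. This is a generic illustration of an agentic cycle under my own simplifications (plan, act, observe, reassess), not Nvidia's specific six-step formulation:

```python
# Generic agentic loop: take a high-level goal, repeatedly decide the next
# step, execute it, record the outcome, and stop when the goal is reached.
# Purely illustrative; real agents plug an LLM and tools into these hooks.

def agent_loop(goal, plan_fn, act_fn, done_fn, max_iters=10):
    """Plan, act, and reassess until done_fn says the goal is reached."""
    history = []
    for _ in range(max_iters):
        if done_fn(history):
            break
        step = plan_fn(goal, history)   # decide the next action
        result = act_fn(step)           # execute it (tool call, query, ...)
        history.append((step, result))  # observe and remember the outcome
    return history

# Tiny demo: the "goal" is to count to 3, one step per iteration.
trace = agent_loop(
    goal="count to 3",
    plan_fn=lambda g, h: f"say {len(h) + 1}",
    act_fn=lambda step: step.replace("say ", ""),
    done_fn=lambda h: len(h) >= 3,
)
print(trace)  # [('say 1', '1'), ('say 2', '2'), ('say 3', '3')]
```

The point of the loop structure is the reassessment on every pass: the agent keeps acting without further human commands until its own check says the objective is met.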


AI in Education Podcast — from aipodcast.education by Dan Bowen and Ray Fleming


The State of AI in Education 2025 Key Findings from a National Survey — from Carnegie Learning

Our 2025 national survey of over 650 respondents across 49 states and Puerto Rico reveals both encouraging trends and important challenges. While AI adoption and optimism are growing, concerns about cheating, privacy, and the need for training persist.

Despite these challenges, I’m inspired by the resilience and adaptability of educators. You are the true game-changers in your students’ growth, and we’re honored to support this vital work.

This report reflects both where we are today and where we’re headed with AI. More importantly, it reflects your experiences, insights, and leadership in shaping the future of education.


Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas — from instructure.com

This groundbreaking collaboration represents a transformative step forward in education technology and will begin with, but is not limited to, an effort between Instructure and OpenAI to enhance the Canvas experience by embedding OpenAI’s next-generation AI technology into the platform.

IgniteAI, announced earlier today, establishes Instructure’s future-ready, open ecosystem with agentic support as the AI landscape continues to evolve. This partnership with OpenAI exemplifies this bold vision for AI in education. Instructure’s strategic approach to AI emphasizes the enhancement of connections within an educational ecosystem comprising over 1,100 edtech partners and leading LLM providers.

“We’re committed to delivering next-generation LMS technologies designed with an open ecosystem that empowers educators and learners to adapt and thrive in a rapidly changing world,” said Steve Daly, CEO of Instructure. “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education. This is a significant step forward for the education community as we continuously amplify the learning experience and improve student outcomes.”


Faculty Latest Targets of Big Tech’s AI-ification of Higher Ed — from insidehighered.com by Kathryn Palmer
A new partnership between OpenAI and Instructure will embed generative AI in Canvas. It may make grading easier, but faculty are skeptical it will enhance teaching and learning.

The two companies, which have not disclosed the value of the deal, are also working together to embed large language models into Canvas through a feature called IgniteAI. It will work with an institution’s existing enterprise subscription to LLMs such as Anthropic’s Claude or OpenAI’s ChatGPT, allowing instructors to create custom LLM-enabled assignments. They’ll be able to tell the model how to interact with students—and even evaluate those interactions—and what it should look for to assess student learning. According to Instructure, any student information submitted through Canvas will remain private and won’t be shared with OpenAI.

Faculty Unsurprised, Skeptical
Few faculty were surprised by the Canvas-OpenAI partnership announcement, though many are reserving judgment until they see how the first year of using it works in practice.


 

These 40 Jobs May Be Replaced by AI. These 40 Probably Won’t — from inc.com by Bruce Crumley
A new Microsoft report ranks 80 professions by their risk of being replaced by AI tools.

A new study measuring the use of generative artificial intelligence in different professions has just gone public, and its main message to people working in some fields is harsh. It suggests translators, historians, text writers, sales representatives, and customer service agents might want to consider new careers as pile driver or dredge operators, railroad track layers, hardwood floor sanders, or maids — if, that is, they want to lower the threat of AI apps pushing them out of their current jobs.

From DSC:
Unfortunately, this is where the hyperscalers are going to get their ROI from all of the capital expenditures that they are making. Companies are going to use their services in order to reduce headcount at their organizations. CEOs are even beginning to brag about the savings that are realized by the use of AI-based technologies (or so they claim):

“As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.”

My first position out of college was being a Customer Service Representative at Baxter Healthcare. It was my most impactful job, as it taught me the value of a customer. From then on, whoever I was trying to assist was my customer — whether they were internal or external to the organization that I was working for. Those kinds of jobs are so important. If they evaporate, what then? How will young people/graduates get their start? 

Also related/see:


Microsoft’s Edge Over the Web, OpenAI Goes Back to School, and Google Goes Deep — from thesignal.substack.com by Alex Banks

Alex’s take: We’re seeing browsers fundamentally transition from search engines → answer engines → action engines. Gone are the days of having to trawl through pages of search results. Commands are the future. They are the direct input to arrive at the outcomes we sought in the first place, such as booking a hotel or ordering food. I’m interested in watching Microsoft’s bet develop as browsers become collaborative (and proactive) assistants.


Everyone’s an (AI) TV showrunner now… — from theneurondaily.com by Grant Harvey

Amazon just invested in an AI that can create full TV episodes—and it wants you to star in them.

Remember when everyone lost their minds over AI generating a few seconds of video? Well, Amazon just invested in a company called Fable Studio, whose system, Showrunner, can generate entire 22-minute TV episodes.

Where does this go from here? Imagine asking AI to rewrite the ending of Game of Thrones, or creating a sitcom where you and your friends are the main characters. This type of tech could create personalized entertainment experiences just like that.

Our take: Without question, we’re moving toward a world where every piece of media can be customized to you personally. Your Netflix could soon generate episodes where you’re the protagonist, with storylines tailored to your interests and sense of humor.

And if this technology scales, the entire entertainment industry could flip upside down. The pitch goes: why watch someone else’s story when you can generate your own? 


The End of Work as We Know It — from gizmodo.com by Luc Olinga
CEOs call it a revolution in efficiency. The workers powering it call it a “new era in forced labor.” I spoke to the people on the front lines of the AI takeover.

Yet, even in this vision of a more pleasant workplace, the specter of displacement looms large. Miscovich acknowledges that companies are planning for a future where headcount could be “reduced by 40%.” And Clark is even more direct. “A lot of CEOs are saying that, knowing that they’re going to come up in the next six months to a year and start laying people off,” he says. “They’re looking for ways to save money at every single company that exists.”

But we do not have much time. As Clark told me bluntly: “I am hired by CEOs to figure out how to use AI to cut jobs. Not in ten years. Right now.”


AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’ — from wsj.com by Chip Cutter; behind a paywall
If AI can analyze information, crunch data and deliver a slick PowerPoint deck within seconds, how does the biggest name in consulting stay relevant?


ChatGPT users shocked to learn their chats were in Google search results — from arstechnica.com by Ashley Belanger
OpenAI scrambles to remove personal ChatGPT conversations from Google results

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.


Character.AI Launches World’s First AI-Native Social Feed — from blog.character.ai

Today, we’re dropping the world’s first AI-native social feed.

Feed from Character.AI is a dynamic, scrollable content platform that connects users with the latest Characters, Scenes, Streams, and creator-driven videos in one place.

This is a milestone in the evolution of online entertainment.

For the last 10 years, social platforms have been all about passive consumption. The Character.AI Feed breaks that paradigm and turns content into a creative playground. Every post is an invitation to interact, remix, and build on what others have made. Want to rewrite a storyline? Make yourself the main character? Take a Character you just met in someone else’s Scene and pop it into a roast battle or a debate? Now it’s easy. Every story can have a billion endings, and every piece of content can change and evolve with one tap.

 

BREAKING: OpenAI Releases Study Mode — from aieducation.substack.com by Claire Zau
What’s New, What Works, and What’s Still Missing

What is Study Mode?
Study Mode is OpenAI’s take on a smarter study partner – a version of the ChatGPT experience designed to guide users through problems with Socratic prompts, scaffolded reasoning, and adaptive feedback (instead of just handing over the answer).

Built with input from learning scientists, pedagogy experts, and educators, it was also shaped by direct feedback from college students. While Study Mode is designed with college students in mind, it’s meant for anyone who wants a more learning-focused, hands-on experience across a wide range of subjects and skill levels.

Who can access it? And how?
Starting July 29, Study Mode is available to users on Free, Plus, Pro, and Team plans. It will roll out to ChatGPT Edu users in the coming weeks.


ChatGPT became your tutor — from theneurondaily.com by Grant Harvey
PLUS: NotebookLM has video now & GPT 4o-level AI runs on laptop

Here’s how it works: instead of asking “What’s 2+2?” and getting “4,” study mode asks questions like “What do you think happens when you add these numbers?” and “Can you walk me through your thinking?” It’s like having a patient tutor who won’t let you off the hook that easily.

The key features include:

  • Socratic questioning: It guides you with hints and follow-up questions rather than direct answers.
  • Scaffolded responses: Information broken into digestible chunks that build on each other.
  • Personalized support: Adjusts difficulty based on your skill level and previous conversations.
  • Knowledge checks: Built-in quizzes and feedback to make sure concepts actually stick.
  • Toggle flexibility: Switch study mode on and off mid-conversation depending on your goals.

Try study mode yourself by selecting “Study and learn” from tools in ChatGPT and asking a question.
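The behavior described above — withholding direct answers in favor of hints and follow-up questions — is essentially a system-prompt pattern. Below is a minimal, illustrative sketch of how a Socratic-tutor request might be assembled for a chat-style API. The prompt wording and function names are my own assumptions, not OpenAI’s actual study mode implementation:

```python
# Illustrative sketch of a Socratic "study mode"-style prompt wrapper.
# The system prompt text is a hypothetical approximation, NOT OpenAI's
# actual study mode instructions.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never give the final answer directly. "
    "Guide the student with hints and follow-up questions, break "
    "explanations into small, digestible steps, and check understanding "
    "with short quizzes before moving on."
)

def build_study_messages(question, history=None):
    """Assemble a chat request that nudges the model to tutor, not answer."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages.extend(history or [])  # carry prior turns for personalization
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_study_messages("What do you get when you add 2 and 2?")
print(msgs[0]["role"], msgs[-1]["role"], len(msgs))  # system user 2
```

The same toggle flexibility mentioned above could be modeled by simply omitting the system message when the user switches study mode off.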


Introducing study mode — from openai.com
A new way to learn in ChatGPT that offers step by step guidance instead of quick answers.

[On 7/29/25, we introduced] study mode in ChatGPT—a learning experience that helps you work through problems step by step instead of just getting an answer. Starting today, it’s available to logged in users on Free, Plus, Pro, Team, with availability in ChatGPT Edu coming in the next few weeks.

ChatGPT is becoming one of the most widely used learning tools in the world. Students turn to it to work through challenging homework problems, prepare for exams, and explore new concepts. But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn’t just offer solutions without helping students make sense of them?

We’ve built study mode to help answer this question. When students engage with study mode, they’re met with guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding. Study mode is designed to be engaging and interactive, and to help students learn something—not just finish something.


 

Recurring Themes In Bob Ambrogi’s 30 Years of Legal Tech Reporting (A Guest Post By ChatGPT) — from lawnext.com by ChatGPT
#legaltech #innovation #law #legal #vendors #lawyers #lawfirms #legaloperations

  • Evolution of Legal Technology: From Early Web to AI Revolution
  • Challenges in Legal Innovation and Adoption
  • Law Firm Innovation vs. Corporate Legal Demand: Shifting Dynamics
  • Tracking Key Technologies and Players in Legal Tech
  • Access to Justice, Ethics, and Regulatory Reform

Also re: legaltech, see:

How LegalTech is Changing the Client Experience in 2025 — from techbullion.com by Uzair Hasan

A Digital Shift in Law
In 2025, LegalTech isn’t a trend—it’s a standard. Tools like client dashboards, e-signatures, AI legal assistants, and automated case tracking are making law firms more efficient and more transparent. These systems also help reduce errors and save time. For clients, it means less confusion and more control.

For example, immigration law—a field known for paperwork and long processing times—is being transformed through tech. Clients now track their case status online, receive instant updates, and even upload key documents from their phones. Lawyers, meanwhile, use AI tools to spot issues faster, prepare filings quicker, and manage growing caseloads without dropping the ball.

Loren Locke, Founder of Locke Immigration Law, explains how tech helps simplify high-stress cases:
“As a former consular officer, I know how overwhelming the visa process can feel. Now, we use digital tools to break down each step for our clients—timelines, checklists, updates—all in one place. One client recently told me it was the first time they didn’t feel lost during their visa process. That’s why I built my firm this way: to give people clarity when they need it most.”


While not so much legaltech this time, Jordan’s article below is an excellent, highly relevant posting for what we are going through — at least in the United States:

What are lawyers for? — from jordanfurlong.substack.com by Jordan Furlong
We all know lawyers’ commercial role, to be professional guides for human affairs. But we also need lawyers to bring the law’s guarantees to life for people and in society. And we need it right now.

The question “What are lawyers for?” raises another, prior and more foundational question: “What is the law for?”

But there’s more. The law also exists to regulate power in a society: to structure its distribution, create processes for its implementation, and place limits on its application. In a healthy society, power flows through the law, not around it. Certainly, we need to closely examine and evaluate those laws — the exercise of power through a biased or corrupted system will be illegitimate even if it’s “lawful.” But as a general rule, the law is available as a check on the arbitrary exercise of power, whether by a state authority or a private entity.

And above these two aspects of law’s societal role, I believe there’s also a third: to serve as a kind of “moral architecture” of society.

 

OpenAI’s Education Head Says Real Learning Takes Struggle—Not Just ChatGPT Help — from observer.com by Rachel Curry; via Ray Schroeder on LinkedIn
Students must struggle to learn, and offloading to ChatGPT risks weakening critical thinking skills, OpenAI’s head of education warns.

“We know that true learning takes friction. It takes struggle,” said Mills. “You have to engage with the materials, and if students offload all of that work to a tool like ChatGPT, they will not learn those skills and they will not gain that critical thinking. That said, when ChatGPT is used correctly as a learning assistant and as a tutor, the results are powerful.”

Given that 40 percent of ChatGPT users are under the age of 24—and that learning is the platform’s number one use case, according to Mills—the need to fine-tune guardrails is becoming increasingly urgent. Pew Research reports that twice as many teens now use ChatGPT for schoolwork compared to 2023, with nearly one-third of teen respondents saying it’s acceptable to use the tool to solve math problems.

In response, Mills said OpenAI is actively researching what appropriate A.I. use in education looks like, with plans to share that guidance widely and rapidly with educators around the world.

 

Osgoode’s new simulation-based learning tool aims to merge ethical and practical legal skills — from canadianlawyermag.com by Tim Wilbur
The designer speaks about his vision for redefining legal education through an innovative platform

The disconnection between legal education and the real world starkly contrasted with what he expected law school to be. “I thought rather naively…this would be a really interesting experience…linked to lawyers and what lawyers are doing in society…Far from it. It was solidly academic, so uninteresting, and I thought it’s got to be better than this.”

These frustrations inspired his work on simulation-based education, which seeks to produce “client-ready” lawyers and professionals who reflect deeply on their future roles. Maharg recently worked as a consultant with Osgoode Professional Development at Osgoode Hall Law School to design a platform that eschews many of the assumptions about legal education to deliver practical skills with real-world scenarios.

Osgoode’s SIMPLE platform – short for “simulated professional learning environment” – integrates case management systems and simulation engines to immerse students in practical scenarios.

“It’s actually to get them thinking hard about what they do when they act as lawyers and what they will do when they become lawyers…putting it into values and an ethical framework, as well as making it highly intensively practical,” Maharg says.


And speaking of legal training, also see:

AI in law firms should be a training tool, not a threat, for young lawyers — from canadianlawyermag.com by Tim Wilbur
Tech should free associates for deeper learning, not remove them from the process

AI is rapidly transforming legal practice. Today, tools handle document review and legal research at a pace unimaginable just a few years ago. As recent Canadian Lawyer reporting shows, legal AI adoption is outpacing expectations, especially among in-house teams, and is fundamentally reshaping how legal services are delivered.

Crucially, though, AI should not replace associates. Instead, it should relieve them of repetitive tasks and allow them to focus on developing judgment, client management, and strategic thinking. As I’ve previously discussed regarding the risks of banning AI in court, the future of law depends on blending technological fluency with the human skills clients value most.


Also, the following relates to legaltech as well:

Agentic AI in Legaltech: Proceed with Supervision! — from directory.lawnext.com by Ken Crutchfield
Semi-Autonomous agents can transform work if leaders maintain oversight

The term autonomous agents should raise some concern. I believe semi-autonomous agents is a better term. Do we really want fully autonomous agents that learn and interact independently to find ways to accomplish tasks?

We live in a world full of cybersecurity risks. Bad actors will think of ways to use agents. Even well-intentioned systems could mishandle a task without proper guardrails.

Legal professionals will want to thoughtfully equip their agent technology with controlled access to the right services. Agents must be supervised, and training must be required for those using or benefiting from agents. Legal professionals will also want to expand the scope of AI Governance to include the oversight of agents.

Agentic AI will require supervision. Human review of Generative AI output is essential. Stating the obvious may be necessary, especially with agents. Controls, human review, and human monitoring must be part of the design and the requirements for any project. Leadership should not leave this to the IT department alone.
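The controls Crutchfield calls for — human review and monitoring designed in from the start — can be sketched as a simple approval gate that a semi-autonomous agent must pass before executing any action. This is a generic human-in-the-loop pattern, not a specific legaltech product; the names and the risk policy are illustrative assumptions:

```python
# Illustrative sketch of a human-in-the-loop approval gate for a
# semi-autonomous agent. Action names and the risk policy are
# hypothetical, not drawn from any particular legaltech system.

from dataclasses import dataclass, field

@dataclass
class AgentAction:
    tool: str              # e.g. "draft_memo", "file_document"
    payload: dict = field(default_factory=dict)
    risk: str = "low"      # "low" or "high"

def execute(action, approver):
    """Run low-risk actions automatically; route high-risk ones to a human."""
    if action.risk == "high" and not approver(action):
        return f"BLOCKED: {action.tool} awaiting human approval"
    return f"EXECUTED: {action.tool}"

# A reviewer callback stands in for the supervising professional.
deny_all = lambda action: False
print(execute(AgentAction("file_document", risk="high"), deny_all))
# BLOCKED: file_document awaiting human approval
print(execute(AgentAction("draft_memo"), deny_all))
# EXECUTED: draft_memo
```

The design point is that the gate lives in the execution path itself, so oversight is a structural requirement rather than a policy the IT department is left to enforce after the fact.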

 
© 2025 | Daniel Christian