The AI Education Revolution — from linkedin.com by Whitney Kilgore
We’re witnessing the biggest shift in education since the textbook—and most institutions are still deciding whether to allow it.
ILTACON 2025: The Wild, Wild West of legal tech — from abajournal.com by Nicole Black
On the surface, ILTACON 2025, the International Legal Technology Association’s largest annual legal technology event, had all the makings of a great conference. But despite the thought-provoking sessions and keynotes, networking opportunities and PR fanfare, I couldn’t shake the sense that we were in the midst of a seismic shift in legal tech, surrounded by the restless energy of a boomtown.
…
The gold rush
It wasn’t ILTACON that bothered me; it was the heady, gold-rushed, “anything goes and whatever sticks works” environment that was unsettling. While this year’s conference was pirate-themed, it felt more like the Wild West to me.
This attitude permeated the conference, driven largely by the frenzied, frontier-style artificial intelligence revolution. The AI train is hurtling forward at lightning speed, destination unknown, and everyone is trying to cash in before it derails.
Two themes emerged from my discussions. First, no matter who you spoke to, “agentic AI,” meaning AI that autonomously takes purposeful actions, was a buzzword that cropped up often, whether during press briefings or over drinks. Another key trend was the race to become the generative AI home base for legal professionals.
— Nicole Black
“We are at the start of the biggest disruption to the legal profession in its history.”
— Steve Hasker, Thomson Reuters president and CEO
Also see:
Fresh Voices on Legal Tech with Bridget McCormack — from legaltalknetwork.com
Is AI the technology that will finally force lawyer tech competence? With rapid advances and the ability to address numerous problems and pain points in our legal systems, AI simply can’t be ignored. Dennis & Tom welcome Bridget McCormack to discuss her perspectives on current AI trends and other exciting new tech applications in legal…
Top Legal Tech Jobs on the Rise: Who Employers Are Looking For in 2025 — from lawyer-monthly.com
For professionals, this means one thing: dozens of new career paths are appearing on the horizon that did not exist five years ago.
What’s In Your Statement? — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
Friendly reminder that there’s a Syllabi AI Policy Repository
AI Policy Resources
- AI Syllabi Policy Repository: 180+ policies
- AI Institutional Policy Repository: 17 policies
How Will AI Affect the Global Workforce? — from goldmansachs.com
- Despite concerns about widespread job losses, AI adoption is expected to have only a modest and relatively temporary impact on employment levels.
- Goldman Sachs Research estimates that unemployment will increase by half a percentage point during the AI transition period as displaced workers seek new positions.
- If current AI use cases were expanded across the economy and reduced employment proportionally to efficiency gains, an estimated 2.5% of US employment would be at risk of related job loss.
- Occupations with higher risk of being displaced by AI include computer programmers, accountants and auditors, legal and administrative assistants, and customer service representatives.
The Neuron recently highlighted the above item. Here is Grant Harvey’s take on that and other AI-related items:
- Goldman Sachs says AI’s job hit will be real… but thankfully, brief.
Goldman Sachs says AI will lift productivity with only brief job losses, which we think means it’s time to shift our work mindset from set roles to outcome-based delivery, leading to more small teams that win back local market share from slower-moving corporations.
UK businesses are dialing back hiring for jobs that are likely to be affected by the rollout of artificial intelligence, a study found, suggesting the new technology is accentuating a slowdown in the nation’s labor market. Job vacancies have declined across the board in the UK as employers cut costs in the face of sluggish growth and high borrowing rates, with the overall number of online job postings down 31% in the three months to May compared with the same period in 2022, a McKinsey & Co. analysis found. Tiwa Adebayo joins Stephen Carroll on Bloomberg Radio to discuss the details.
I talked to Sam Altman about the GPT-5 launch fiasco – from theverge.com by Alex Heath
Over dinner, OpenAI’s CEO addressed criticism of GPT-5’s rollout, the AI bubble, brain-computer interfaces, buying Google Chrome, and more.
Sam Altman, over bread rolls, explores life after GPT-5 — from techcrunch.com by Maxwell Zeff
But throughout the night, it becomes clear to me that this dinner is about OpenAI’s future beyond GPT-5. OpenAI’s executives give the impression that AI model launches are less important than they were when GPT-4 launched in 2023. After all, OpenAI is a very different company now, focused on upending legacy players in search, consumer hardware, and enterprise software.
OpenAI shares some new details about those efforts.
The future of L&D is here, and it’s powered by AI. — from linkedin.com by Josh Cavalier
4 Ways I Use AI to Think Better — from wondertools.substack.com by Jeremy Caplan
How AI helps me learn, decide, and create
Learn something new.
Map out a personalized curriculum
Try this: Give an AI assistant context about what you want to learn, why, and how.
- Detail your rationale and motivation, which may impact your approach.
- Note your current knowledge or skill level, ideally with examples.
Summarize your learning preferences
- Note whether you prefer to read, listen to, or watch learning materials.
- Mention if you like quizzes, drills, or exercises you can do while commuting or during a break at work.
- If you appreciate learning games, task your AI assistant with generating one for you, using its coding capabilities detailed below.
- Ask for specific book, textbook, article, or learning path recommendations using the Web search or Deep Research capabilities of Perplexity, ChatGPT, Gemini or Claude. They can also summarize research literature about effective learning tactics.
- If you need a human learning partner, ask for guidance on finding one or language you can use in reaching out.
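As a concrete illustration of this kind of context-setting, here is a minimal, hypothetical sketch of how the same information could be passed to a chat model programmatically. The OpenAI SDK, the model name, and the profile fields are illustrative assumptions, not recommendations from the article; the same structured profile works just as well pasted directly into any chat assistant.

```python
# Hypothetical sketch: sending structured learning context to a chat model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative learner profile covering the points suggested above:
# goal, rationale, current level, and learning preferences.
learner_profile = {
    "goal": "read and critique machine-learning papers",
    "motivation": "moving from legal practice into legal-tech product work",
    "current_level": "comfortable with statistics, no linear algebra since college",
    "preferences": "short readings plus practice exercises I can do on a commute",
}

prompt = (
    "Act as a learning coach. Using the profile below, map out a personalized "
    "8-week curriculum with weekly goals, one recommended resource per week, "
    "and a quick self-check exercise.\n\n"
    + "\n".join(f"{k}: {v}" for k, v in learner_profile.items())
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```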
The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI
GPT-5 for Instructional Designers — from drphilippahardman.substack.com by Dr Philippa Hardman
10 Hacks to Work Smarter & Safer with OpenAI’s Latest Model
The TLDR is that as Instructional Designers, we can’t afford to miss some of the very real benefits of GPT-5’s potential, but we also can’t ensure our professional standards or learner outcomes if we blindly accept its outputs without due testing and validation.
For this reason, I decided to synthesise the latest GPT-5 research—from OpenAI’s technical documentation to independent security audits to real-world user testing—into 10 essential reality checks for using GPT-5 as an Instructional Designer.
These aren’t theoretical exercises; they’re practical tests designed to help you safely unlock GPT-5’s benefits while identifying and mitigating its most well-documented limitations.
Grammarly launches new specialist AI agents providing personalized assistance for students — from edtechinnovationhub.com by Rachel Lawler
Grammarly, an AI communication tool, has announced the launch of eight new specialized AI agents. The new assistants can support specific writing challenges such as finding credible sources and checking originality.
Students will now be offered “responsible AI support” through Grammarly, with the eight new agents:
- Reader Reactions agent …
- AI Grader agent …
- Citation Finder agent …
- Expert Review agent …
- Proofreader agent …
- AI Detector agent …
- Plagiarism Checker agent …
- Paraphraser agent …
Why Perplexity AI Is My Go-To Research Tool as a Higher Education CIO — from mikekentz.substack.com; a guest post from Michael Lyons, CIO at MassBay Community College
While I regularly use tools like ChatGPT, Grammarly, Microsoft Copilot, and even YouTube Premium (I would cancel Netflix before this), Perplexity has earned a top spot in my toolkit. It blends AI and real-time web search into one seamless, research-driven platform that saves time and improves the quality of information I rely on every day.
These ChatGPT Prompts Will Fast-Track Your Job Search — from builtin.com by Jeff Rumage
Used correctly, ChatGPT could help you land your dream job — but used incorrectly, it can cost you the offer. Here’s how you can make ChatGPT your secret weapon for research help, resume writing, interview prep and more.
Example prompt: Here are several bullet points from my resume: [paste bullets]. Rewrite them so each one begins with a strong action verb, clearly states what I did, and quantifies results or outcomes wherever possible. If metrics are missing, suggest realistic ways they could be added.
Example prompt: Here is my resume [paste resume]. Here’s the job description of a job I’m applying for [paste job description]. Highlight the most important skills and qualifications for this job. Without making up information, revise my resume to match these requirements. Include action verbs for each accomplishment on the resume, and highlight which accomplishments could be quantified.
Example prompt: What are the current trends impacting companies in the [industry]? How would [company name] be affected by these trends, and what might it do to adjust to/capitalize on these trends?
Example prompt: I’m a [current role] but want to become a [dream role]. Create a detailed career development plan outlining:
- Skills I should develop
- Relevant experiences I need to gain
- Educational or certification needs
- Recommended resources or programs
- A realistic timeline with milestones for the next 1-3 years.
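Prompts like these can also be reused rather than retyped for every posting. The sketch below is purely illustrative: the OpenAI SDK, the model name, and the file paths (resume.txt, a job_ads/ folder) are assumptions, and the same prompt works fine pasted into ChatGPT by hand.

```python
# Hypothetical sketch: reusing the resume-tailoring prompt across several job postings.
# Assumes the OpenAI Python SDK; resume.txt and job_ads/ are made-up paths.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
resume = Path("resume.txt").read_text()

PROMPT_TEMPLATE = (
    "Here is my resume:\n{resume}\n\n"
    "Here is the job description of a job I'm applying for:\n{job}\n\n"
    "Highlight the most important skills and qualifications for this job. "
    "Without making up information, revise my resume to match these requirements. "
    "Include action verbs for each accomplishment, and note which accomplishments "
    "could be quantified."
)

for ad in sorted(Path("job_ads").glob("*.txt")):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(resume=resume, job=ad.read_text())}],
    )
    # Save one tailored draft per posting for review; never send these unverified.
    Path(f"tailored_{ad.stem}.md").write_text(response.choices[0].message.content)
    print(f"Wrote tailored_{ad.stem}.md")
```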
Thomson Reuters CEO: Legal Profession Faces “Biggest Disruption in Its History” from AI — from lawnext.com by Bob Ambrogi
Thomson Reuters President and CEO Steve Hasker believes the legal profession is experiencing “the biggest disruption … in its history” due to generative and agentic artificial intelligence, fundamentally rewriting how legal work products are created for the first time in more than 300 years.
Speaking to legal technology reporters during ILTACON, the International Legal Technology Association’s annual conference, Hasker outlined his company’s ambitious goal to become “the most innovative company” in the legal tech sector while navigating what he described as unprecedented technological change affecting a profession that has remained largely unchanged since its origins in London tea houses centuries ago.
Legal tech hackathon challenges students to rethink access to justice — from the Centre for Innovation and Entrepreneurship, Auckland Law School
In a 24-hour sprint, student teams designed innovative tools to make legal and social support more accessible.
The winning team comprised students of computer science, law, psychology, and physics. They developed a privacy-first legal assistant powered by AI that helps people understand their legal rights without needing to navigate dense legal language.
Teaching How To ‘Think Like a Lawyer’ Revisited — from abovethelaw.com by Stephen Embry
GenAI gives the concept of training law students to think like a lawyer a whole new meaning.
Law Schools
These insights have particular urgency for legal education. Indeed, most of Cowen’s criticisms and suggested changes need to be front and center for law school leaders. It’s naïve to think that law students and lawyers aren’t going to use GenAI tools in virtually every aspect of their professional and personal lives. Rather than avoiding the subject or, worse yet, trying to stop use of these tools, law schools should make GenAI tools a fundamental part of research, writing and drafting training.
They need to focus not on memorization but on the critical thinking skills that beginning lawyers used to acquire through the guild-style system of on-the-job training. As I discussed, that training came from repetitive and often tedious work that produced experienced lawyers who could recognize patterns and solutions based on exposure to similar situations. But much of that repetitive and tedious work may go away in a GenAI world.
The Role of Adjunct Professors
But to do this, law schools need to better partner with actual practicing lawyers who can serve as adjunct professors. Law schools need to do away with the notion that adjuncts are second-class teachers.
It’s a New Dawn In Legal Tech: From Woodstock to ILTACON (And Beyond) — from lawnext.com by Bob Ambrogi
As someone who has covered legal tech for 30 years, I cannot remember there ever being a time such as this, when the energy and excitement are raging, driven by generative AI and a new era of innovation and new ideas of the possible.
…
But this year was different. Wandering the exhibit hall, getting product briefings from vendors, talking with attendees, it was impossible to ignore the fundamental shift happening in legal technology. Gen AI isn’t just creating new products – it is spawning entirely new categories of products that truly are reshaping how legal work gets done.
Agentic AI is the buzzword of 2025 and agentic systems were everywhere at ILTACON, promising to streamline workflows across all areas of legal practice. But, perhaps more importantly, these tools are also beginning to address the business side of running a law practice – from client intake and billing to marketing and practice management. The scope of transformation is now beginning to extend beyond the practice of law into the business of law.
…
Largely missing from this gathering were solo practitioners, small firm lawyers, legal aid organizations, and access-to-justice advocates – the very people who stand to benefit most from the democratizing potential of AI.
…
However, now more than ever, the innovations we are seeing in legal tech have the power to level the playing field, to give smaller practitioners access to tools and capabilities that were once prohibitively expensive. If these technologies remain priced for and marketed primarily to Big Law, we will have succeeded only in widening the justice gap rather than closing it.
How AI is Transforming Deposition Review: A LegalTech Q&A — from jdsupra.com
Thanks to breakthroughs in artificial intelligence – particularly in semantic search, multimodal models, and natural language processing – new legaltech solutions are emerging to streamline and accelerate deposition review. What once took hours or days of manual analysis now can be accomplished in minutes, with greater accuracy and efficiency than possible with manual review.
From Skepticism to Trust: A Playbook for AI Change Management in Law Firms — from jdsupra.com by Scott Cohen
Historically, lawyers have been slow adopters of emerging technologies, and with good reason. Legal work is high stakes, deeply rooted in precedent, and built on individual judgment. AI, especially the new generation of agentic AI (systems that not only generate output but initiate tasks, make decisions, and operate semi-autonomously), represents a fundamental shift in how legal work gets done. This shift naturally leads to caution as it challenges long-held assumptions about lawyer workflows and several aspects of their role in the legal process.
The path forward is not to push harder or faster, but smarter. Firms need to take a structured approach that builds trust through transparency, context, training, and measurement of success. This article provides a five-part playbook for law firm leaders navigating AI change management, especially in environments where skepticism is high and reputational risk is even higher.
ILTACON 2025: The vendor briefings – Agents, ecosystems and the next stage of maturity — from legaltechnology.com by Caroline Hill
This year’s ILTACON in Washington was heavy on AI, but the conversation with vendors has shifted. Legal IT Insider’s briefings weren’t about potential use cases or speculative roadmaps. Instead, they focused on how AI is now being embedded into the tools lawyers use every day — and, crucially, how those tools are starting to talk to each other.
Taken together, they point to an inflection point, where agentic workflows, data integration, and open ecosystems define the agenda. But it’s important amidst the latest buzzwords to remember that agents are only as good as the tools they have to work with, and AI only as good as its underlying data. Also, as we talk about autonomous AI, end users are still struggling with cloud implementations and infrastructure challenges, and need vendors to be business partners that help them to make progress at speed.
Harvey’s roadmap is all about expanding its surface area — connecting to systems like iManage, LexisNexis, and more recently publishing giant Wolters Kluwer — so that a lawyer can issue a single query and get synthesised, contextualised answers directly within their workflow. Weinberg said: “What we’re trying to do is get all of the surface area of all of the context that a lawyer needs to complete a task and we’re expanding the product surface so you can enter a search, search all resources, and apply that to the document automatically.”
The common thread: no one is talking about AI in isolation anymore. It’s about orchestration — pulling together multiple data sources into a workflow that actually reflects how lawyers practice.
5 Pitfalls Of Delaying Automation In High-Volume Litigation And Claims — from jdsupra.com
Why You Can’t Afford to Wait to Adopt AI Tools that have Plaintiffs Moving Faster than Ever
Just as photocopiers shifted law firm operations in the early 1970s and cloud computing transformed legal document management in the early 2000s, AI automation tools are altering the current legal landscape—enabling litigation teams to instantly structure unstructured data, zero in on key arguments in seconds, and save hundreds (if not thousands) of hours of manual work.
Your Firm’s AI Policy Probably Sucks: Why Law Firms Need Education, Not Rules — from jdsupra.com
The Floor, Not the Ceiling
Smart firms need to flip their entire approach. Instead of dictating which AI tools lawyers must use, leadership should set a floor for acceptable use and then get out of the way.
The floor is simple: no free versions for client work. Free tools are free because users are the product. Client data becomes training data. Confidentiality gets compromised. The firm loses any ability to audit or control how information flows. This isn’t about control; it’s about professional responsibility.
But setting the floor is only the first step. Firms must provide paid, enterprise versions of AI tools that lawyers actually want to use. Not some expensive legal tech platform that promises AI features but delivers complicated workflows. Real AI tools. The same ones lawyers are already using secretly, but with enterprise security, data protection, and proper access controls.
Education must be practical and continuous. Single training sessions don’t work. AI tools evolve weekly. New capabilities emerge constantly. Lawyers need ongoing support to experiment, learn, and share discoveries. This means regular workshops, internal forums for sharing prompts and techniques, and recognition for innovative uses.
The education investment pays off immediately. Lawyers who understand AI use it more effectively. They catch its mistakes. They know when to verify outputs. They develop specialized prompts for legal work. They become force multipliers, not just for themselves but for their entire teams.
Back to School in the AI Era: Why Students Are Rewriting Your Lesson Plan — from linkedin.com by Hailey Wilson
As a new academic year begins, many instructors, trainers, and program leaders are bracing for familiar challenges—keeping learners engaged, making complex material accessible, and preparing students for real-world application.
But there’s a quiet shift happening in classrooms and online courses everywhere.
This fall, it’s not the syllabus that’s guiding the learning experience—it’s the conversation between the learner and an AI tool.
From bootcamp to bust: How AI is upending the software development industry — from reuters.com by Anna Tong; via Paul Fain
Coding bootcamps have been a mainstay in Silicon Valley for more than a decade. Now, as AI eliminates the kind of entry-level roles for which they trained people, they’re disappearing.
Coding bootcamps have been a Silicon Valley mainstay for over a decade, offering an important pathway for non-traditional candidates to get six-figure engineering jobs. But coding bootcamp operators, students and investors tell Reuters that this path is rapidly disappearing, thanks in large part to AI.
“Coding bootcamps were already on their way out, but AI has been the nail in the coffin,” said Allison Baum Gates, a general partner at venture capital fund SemperVirens, who was an early employee at bootcamp pioneer General Assembly.
Gates said bootcamps were already in decline due to market saturation, evolving employer demand and market forces like growth in international hiring.
AI’s Impact on Early Talent: Building Today’s Education-to-Employment Systems for Tomorrow’s Workforce — from bhef.com by Kristen Fox and Madison Myers
To rise above the threshold, consider the skills that our board member and Northeastern University President Joseph Aoun outlines as essential literacies in Robot-Proof: Higher Education in the Age of Artificial Intelligence. In addition to technical and data literacies, he shares two key components of human literacy.
First, a set of “catalytic capacities” that include:
- Initiative and self-reliance
- Comfort with risk
- Flexibility and adaptability
Second, a set of “creative capacities” that include:
- Opportunity recognition, or the ability to see and experience problems as opportunities to create solutions
- Creative innovation, or the ability to create solutions without clearly defined structures
- Future innovation, or the disposition to orient toward future developments in society
The most effective approach to achieve these outcomes? Interdisciplinary models that embed skills flexibly across curriculum, that engage learners as part of networks, teams, and exploration, and that embed applied experiences in real-world contexts. Scott Carlson and Ned Laff have laid out some great examples of what this looks like in action in Hacking College.
The bottom line: the expectations of entry-level talent are rising while the systems to achieve that level of context and understanding are not necessarily keeping pace.
Bringing the best of AI to college students for free — from blog.google by Sundar Pichai
Millions of college students around the world are getting ready to start classes. To help make the school year even better, we’re making our most advanced AI tools available to them for free, including our new Guided Learning mode. We’re also providing $1 billion to support AI education and job training programs and research in the U.S. This includes making our AI and career training free for every college student in America through our AI for Education Accelerator — over 100 colleges and universities have already signed up.
…
Guided Learning: from answers to understanding
AI can broaden knowledge and expand access to it in powerful ways, helping anyone, anywhere learn anything in the way that works best for them. It’s not about just getting an answer, but deepening understanding and building critical thinking skills along the way. That opportunity is why we built Guided Learning, a new mode in Gemini that acts as a learning companion guiding you with questions and step-by-step support instead of just giving you the answer. We worked closely with students, educators, researchers and learning experts to make sure it’s helpful for understanding new concepts and is backed by learning science.
DC: If such a robot was dropped on your street with instructions to kill you and everyone else it encounters, how would you stop it?! https://t.co/nWq251BK5c
— Daniel S. Christian (@dchristian5) August 13, 2025
From DSC:
You and I both know that numerous militaries across the globe are working on killer robots equipped with AI. This is nothing new. But I don’t find this topic to be entertaining in the least. Because it could be part of how wars are fought in the near future. And most of us wouldn’t have a clue how to stop one of these things.
21 Ways People Are Using A.I. at Work — from nytimes.com by Larry Buchanan and Francesca Paris; this is a gifted article
- Select wines for restaurant menus
- Digitize a herbarium
- Make everything look better
- Create lesson plans that meet educational standards
- Make a bibliography
- Write up therapy plans
- …and many more
The GPT-5 fallout, explained… — from theneurondaily.com by Grant Harvey
PLUS: Who knew ppl loved 4o so much!?
The GPT-5 Backlash, Explained: OpenAI users revolted against GPT-5… then things got weird.
What a vibe shift a day or two makes, huh? As you all know by now, GPT-5 dropped last Thursday, and at first, it seemed like a pretty successful launch.
Early testers loved it. Sam Altman called it “the most powerful AI model ever made.”
Then the floodgates opened to 700 million users… and all hell broke loose.
Here’s what happened: Within hours, Reddit and Twitter turned into digital pitchforks. The crime? OpenAI had quietly sunset GPT-4o—the model everyone apparently loved more than their morning coffee—without warning. Users weren’t just mad. They were devastated.
ChatGPT Changes — from getsuperintel.com by Kim “Chubby” Isenberg
4o is back, and Plus users get 3000 reasoning requests per week with GPT-5!
Who would have thought that the “smartest model ever” would trigger one of the loudest user revolts in AI history? The return of GPT-4o after only 24 hours shows how attached people are to the personality of their AI—and how quickly trust crumbles when expectations are not met. In this issue, we not only look at OpenAI’s response, but also at how the balance of power between developers and the community is shifting.
GPT-5 doesn’t dislike you—it might just need a benchmark for emotional intelligence — from link.wired.com
Welcome to another AI Lab!
The backlash over the more emotionally neutral GPT-5 shows that the smartest AI models might have striking reasoning, coding, and math skills, but advancing their psychological intelligence safely remains very much unsolved.
…
Since the all-new ChatGPT launched on Thursday, some users have mourned the disappearance of a peppy and encouraging personality in favor of a colder, more businesslike one (a move seemingly designed to reduce unhealthy user behavior). The backlash shows the challenge of building artificial intelligence systems that exhibit anything like real emotional intelligence.
Researchers at MIT have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users—in both positive and negative ways—in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.
ChatGPT is bringing back 4o as an option because people missed it — from theverge.com by Emma Roth
Many ChatGPT users were frustrated by OpenAI’s decision to make GPT-5 the default model.
OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.
“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”
For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.
AI Agent Trends of 2025: A Transformative Landscape — from marktechpost.com by Asif Razzaq
This article focuses on six core AI agent trends for 2025: Agentic Retrieval-Augmented Generation (RAG), Voice Agents, AI Agent Protocols, DeepResearch Agents, Coding Agents, and Computer-Using Agents (CUA).
BREAKING: Google introduces Guided Learning — from aieducation.substack.com by Claire Zau
Some thoughts on what could make Google’s AI tutor stand out
Another major AI lab just launched “education mode.”
Google introduced Guided Learning in Gemini, transforming it into a personalized learning companion designed to help you move from quick answers to real understanding.
Instead of immediately spitting out solutions, it:
- Asks probing, open-ended questions
- Walks learners through step-by-step reasoning
- Adapts explanations to the learner’s level
- Uses visuals, videos, diagrams, and quizzes to reinforce concepts
This Socratic-style tutor rollout follows closely behind similar announcements, like OpenAI’s Study Mode (last week) and Anthropic’s Claude for Education (April 2025).
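None of these vendors publish their exact tutoring prompts, but the Socratic pattern described in the bullets above is straightforward to approximate. The minimal sketch below is hypothetical: the system prompt, the model name, and the use of the OpenAI SDK are assumptions for illustration, not Google’s or OpenAI’s actual implementation.

```python
# Hypothetical sketch of a Socratic "guided learning" loop; not any vendor's implementation.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SOCRATIC_SYSTEM_PROMPT = (
    "You are a learning companion. Never give the final answer outright. "
    "Instead: ask one probing, open-ended question at a time; break problems "
    "into small steps; adapt your explanations to the learner's apparent level; "
    "and offer a short check-for-understanding quiz every few turns."
)

# Keep the whole conversation so the tutor can adapt to earlier answers.
history = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]

print("Ask a question (Ctrl+C to quit).")
while True:
    history.append({"role": "user", "content": input("you> ")})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"tutor> {answer}")
```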
How Sci-Fi Taught Me to Embrace AI in My Classroom — from edsurge.com by Dan Clark
I’m not too naive to understand that, no matter how we present it, some students will always be tempted by “the dark side” of AI. What I also believe is that the future of AI in education is not decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.
My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.
Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.
How it is used, and more importantly, how AI is perceived by our students, can be influenced by educators. We have to first learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.
AI Tools for Strategy and Research – GT #32 — from goodtools.substack.com by Robin Good
Getting expert advice, how to do deep research with AI, prompt strategy, comparing different AIs side-by-side, creating mini-apps and an AI Agent that can critically analyze any social media channel
In this issue, discover AI tools for:
- Getting Expert Advice
- Doing Deep Research with AI
- Improving Your AI Prompt Strategy
- Comparing Results from Different AIs
- Creating an AI Agent for Social Media Analysis
- Summarizing YouTube Videos
- Creating Mini-Apps with AI
- Tasting an Award-Winning AI Short Film
GPT-Building, Agentic Workflow Design & Intelligent Content Curation — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 3 recent job ads reveal about the changing nature of Instructional Design
In this week’s blog post, I’ll share my take on how the instructional design role is evolving and discuss what this means for our day-to-day work and the key skills it requires.
…
With this in mind, I’ve been keeping a close eye on open instructional design roles and, in the last 3 months, have noticed the emergence of a new flavour of instructional designer: the so-called “Generative AI Instructional Designer.”
Let’s deep dive into three explicitly AI-focused instructional design positions that have popped up in the last quarter. Each one illuminates a different aspect of how the role is changing—and together, they paint a picture of where our profession is likely heading.
Designers who evolve into prompt engineers, agent builders, and strategic AI advisors will capture the new premium. Those who cling to traditional tool-centric roles may find themselves increasingly sidelined—or automated out of relevance.
Google to Spend $1B on AI Training in Higher Ed — from insidehighered.com by Katherine Knott
Google’s parent company announced Wednesday (8/6/25) that it’s planning to spend $1 billion over the next three years to help colleges teach and train students about artificial intelligence.
Google is joining other AI companies, including OpenAI and Anthropic, in investing in AI training in higher education. All three companies have rolled out new tools aimed at supporting “deeper learning” among students and made their AI platforms available to certain students for free.
5 Predictions for How AI Will Impact Community Colleges — from pistis4edu.substack.com by Feng Hou
Based on current technology capabilities, adoption patterns, and the mission of community colleges, here are five well-supported predictions for AI’s impact in the coming years.
- Universal AI Tutor Access
- AI as Active Teacher
- Personalized Learning Pathways
- Interactive Multimodal Learning
- Value-Centric Education in an AI-Abundant World
GPT-5 is here — from openai.com
Our smartest, fastest, and most useful model yet, with thinking built in. Available to everyone.
Everything to know about GPT-5 — from theneurondaily.com by Grant Harvey
PLUS: We mean, really everything.
Why it matters: GPT-5 embodies a “team of specialists” approach—fast small models for most tasks, powerful ones for hard problems—reflecting NVIDIA’s “heterogeneous agentic system” vision. This could evolve into orchestration across dozens of specialized models, mirroring human collective intelligence.
Bottom line: GPT-5 isn’t AGI, but it’s a leap in usability, reliability, and breadth—pushing ChatGPT toward being a truly personal, expert assistant.
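The “team of specialists” idea amounts to putting a router in front of models of different sizes. The sketch below is a rough illustration only: the keyword heuristic and the model names are assumptions, not how OpenAI’s GPT-5 router actually decides.

```python
# Hypothetical sketch of "team of specialists" routing: a cheap model for easy asks,
# a deliberate "thinking" model for hard ones. Heuristic and model names are assumptions.
from openai import OpenAI

client = OpenAI()

HARD_HINTS = ("prove", "derive", "debug", "step by step", "multi-step", "optimize")

def route(prompt: str) -> str:
    """Pick a model tier using a crude keyword-and-length heuristic."""
    if len(prompt) > 1500 or any(hint in prompt.lower() for hint in HARD_HINTS):
        return "o3"          # stand-in for a slower, deliberate reasoning model
    return "gpt-4o-mini"     # stand-in for a fast, cheap default

def ask(prompt: str) -> str:
    model = route(prompt)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return f"[{model}] {resp.choices[0].message.content}"

print(ask("What's the capital of Kenya?"))
print(ask("Prove that the sum of the first n odd numbers is n**2, step by step."))
```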
…and another article from Grant Harvey:
- GPT-5 is here… here’s everything you need to know (so far…).
OpenAI launched GPT-5—described as its most capable model to date—now in ChatGPT (with higher usage limits for paid tiers) and the API, bringing stronger reasoning/coding/math/writing and safety improvements, yet, per Sam Altman, still short of AGI.
OpenAI launches GPT-5 to all ChatGPT users — from therundown.ai by Rowan Cheung and Shubham Sharma
Why it matters: OpenAI’s move to replace its flurry of models with a unified GPT-5 simplifies user experience and gives everyone a PhD-level assistant, bringing elite problem-solving to the masses. The only question now is how long it can hold its edge in this fast-moving AI race, with Anthropic, Google, and Chinese giants all catching up.
OpenAI’s ChatGPT-5 released — from getsuperintel.com by Kim “Chubby” Isenberg
GPT-5’s release marks a new era of productivity, from specialized AI tool to universal intelligence partner
The Takeaway
- GPT-5’s unified architecture eliminates the effort of model switching and makes it the first truly seamless AI assistant that automatically applies the right level of reasoning for each task.
- With 45% fewer hallucinations and 94.6% accuracy on complex math problems, GPT-5 exceeds the reliability threshold required for business-critical applications.
- The model’s ability to generate complete applications from single prompts signals the democratization of software development and could revolutionize traditional coding workflows.
- OpenAI’s “Safe Completions” training approach represents a new paradigm in AI safety, providing nuanced responses instead of blanket rejections for dual-use scenarios.
GPT-5 is live – but the community is divided — from getsuperintel.com by Kim “Chubby” Isenberg
For some, it’s a lightning-fast creative partner; for others, it’s a system that can’t even decide when to think properly
Many had hoped that GPT-5 would finally unite all models – reasoning, image and video generation, voice – “one model to rule them all,” but this expectation has not been met.
I broke OpenAI’s new GPT-5 and you should too — Brainyacts #266 — from thebrainyacts.beehiiv.com by Josh Kubicki
GPT-5 marks a profound change in the human/machine relationship.
OBSERVATION #1: Up until yesterday, using OpenAI, you could pick the exact model variant for your task: the one tuned for reasoning, for writing, for code, or for math. Each had its own strengths, and experienced users learned which to reach for and when. In GPT-5, those choices are gone. There’s just “GPT-5,” and the decision about which mode, which tool, and which underlying approach to use is made by the model.
- For a beginner, that’s a blessing. Most novice users never knew the differences between the models anyway. They used the same one regardless of the task.
- For an experienced user, the jury’s still out. On one hand, the routing could save time. On the other, it introduces unpredictability: you can no longer reliably choose the optimal model for your purpose. If GPT-5’s choice is wrong, you’re stuck re-prompting rather than switching.
GPT-5 learns from you — from theaivalley.com by Barsee
Why it matters:
GPT-5 signals a shift in AI’s evolution: progress through refinement, not revolution. While benchmarks show incremental gains, the real win is accessibility. Cheaper models (from $0.05 per million tokens) make AI more available to developers and casual users. This aligns with the Jevons Paradox, where lower costs could drive explosive adoption. However, the AGI hype faces reality checks. Fundraising may slow as investors focus on real-world utility over moonshots. For now, GPT-5 strengthens OpenAI’s market lead, proving AI’s future lies in practical applications, not just raw power.
GPT-5: It Just Does Stuff — from oneusefulthing.org by Ethan Mollick
Putting the AI in Charge
I have had early access to GPT-5, and I wanted to give you some impressions of what it is and what it means. But the answer to the first question everyone will have is: Yes, this is a big deal. I asked GPT-5 to show you why by pasting this intro paragraph, exactly as you’re reading it, into ChatGPT, including the phrase: “you are GPT-5, do something very dramatic to illustrate my point, it has to fit into the next paragraph.”
That was the only prompt I gave it. It “thought” for 24 seconds and wrote:
Thunderstruck. Here, watch. I build worlds. See ideas become instruments. I code, compose, and converse. Stories synthesize science; solutions spark, swiftly. Asks arrive; answers appear, astonishingly, across domains. Behold: business plans, blueprints, briefs, beautifully built, briskly. I infer intent, integrate insight, and iterate instantly, indefatigably. Gaps get gauged; goals get generated, guarded, and guaranteed gracefully. During dialogue, disparate data distills down; decisions develop deliberately, deftly, dependably. Everything explained efficiently, even ethics: evidence examined, empathy engaged, everywhere, everyday, equitably. All at once, ambiguous arguments align as actionable, accurate advice awaits anyone anywhere. Look: language, logic, learning, and long-term memory link, lifting lives and labor limitlessly worldwide.
One-size-fits-all learning is about to become completely obsolete. — from linkedin.com by Allie Miller
AI in the University: From Generative Assistant to Autonomous Agent This Fall — from insidehighered.com
This fall we are moving into the agentic generation of artificial intelligence.
“Where generative AI creates, agentic AI acts.” That’s how my trusted assistant, Gemini 2.5 Pro deep research, describes the difference.
…
Agents, unlike generative tools, create and perform multistep goals with minimal human supervision. The essential difference is found in their proactive nature. Rather than waiting for a specific, step-by-step command, agentic systems take a high-level objective and independently create and execute a plan to achieve that goal. This triggers a continuous, iterative workflow that is much like a cognitive loop. The typical agentic process involves six key steps, as described by Nvidia.
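As a rough illustration of that cognitive loop (plan, act with a tool, observe, repeat), here is a minimal, hypothetical sketch. The tools, the toy planner, and the stopping rule are invented for the example; this is not Nvidia’s six-step framework or any real agent library.

```python
# Hypothetical sketch of an agentic "cognitive loop": plan, act, observe, repeat.
# The tools and the toy planner are illustrative stand-ins, not a real framework.
from dataclasses import dataclass, field

def search_course_catalog(query: str) -> str:
    return f"3 courses found matching '{query}'"             # stand-in for a real API call

def draft_email(to: str, body: str) -> str:
    return f"Draft email to {to} saved ({len(body)} chars)"  # stand-in for a real action

TOOLS = {"search_course_catalog": search_course_catalog, "draft_email": draft_email}

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan_next_step(self):
        """Toy planner: a real agent would ask an LLM to choose the next tool call."""
        if not self.history:
            return ("search_course_catalog", {"query": self.goal})
        if len(self.history) == 1:
            return ("draft_email", {"to": "advisor@example.edu",
                                    "body": f"Results: {self.history[-1]}"})
        return None  # goal considered met; stop the loop

    def run(self):
        while (step := self.plan_next_step()) is not None:
            tool_name, args = step
            observation = TOOLS[tool_name](**args)   # act
            self.history.append(observation)          # observe and remember
            print(f"{tool_name} -> {observation}")

Agent(goal="intro statistics courses open next term").run()
```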
AI in Education Podcast — from aipodcast.education by Dan Bowen and Ray Fleming
The State of AI in Education 2025 Key Findings from a National Survey — from Carnegie Learning
Our 2025 national survey of over 650 respondents across 49 states and Puerto Rico reveals both encouraging trends and important challenges. While AI adoption and optimism are growing, concerns about cheating, privacy, and the need for training persist.
Despite these challenges, I’m inspired by the resilience and adaptability of educators. You are the true game-changers in your students’ growth, and we’re honored to support this vital work.
This report reflects both where we are today and where we’re headed with AI. More importantly, it reflects your experiences, insights, and leadership in shaping the future of education.
Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences within Canvas — from instructure.com
This groundbreaking collaboration represents a transformative step forward in education technology and will begin with, but is not limited to, an effort between Instructure and OpenAI to enhance the Canvas experience by embedding OpenAI’s next-generation AI technology into the platform.
IgniteAI, announced earlier today, establishes Instructure’s future-ready, open ecosystem with agentic support as the AI landscape continues to evolve. This partnership with OpenAI exemplifies this bold vision for AI in education. Instructure’s strategic approach to AI emphasizes the enhancement of connections within an educational ecosystem comprising over 1,100 edtech partners and leading LLM providers.
“We’re committed to delivering next-generation LMS technologies designed with an open ecosystem that empowers educators and learners to adapt and thrive in a rapidly changing world,” said Steve Daly, CEO of Instructure. “This collaboration with OpenAI showcases our ambitious vision: creating a future-ready ecosystem that fosters meaningful learning and achievement at every stage of education. This is a significant step forward for the education community as we continuously amplify the learning experience and improve student outcomes.”
Faculty Latest Targets of Big Tech’s AI-ification of Higher Ed — from insidehighered.com by Kathryn Palmer
A new partnership between OpenAI and Instructure will embed generative AI in Canvas. It may make grading easier, but faculty are skeptical it will enhance teaching and learning.
The two companies, which have not disclosed the value of the deal, are also working together to embed large language models into Canvas through a feature called IgniteAI. It will work with an institution’s existing enterprise subscription to LLMs such as Anthropic’s Claude or OpenAI’s ChatGPT, allowing instructors to create custom LLM-enabled assignments. They’ll be able to tell the model how to interact with students—and even evaluate those interactions—and what it should look for to assess student learning. According to Instructure, any student information submitted through Canvas will remain private and won’t be shared with OpenAI.
…
Faculty Unsurprised, Skeptical
Few faculty were surprised by the Canvas-OpenAI partnership announcement, though many are reserving judgment until they see how the first year of using it works in practice.