Guest post: IP professionals are enthusiastic about AI but should adopt with caution, report says — from legaltechnology.com by Benoit Chevalier

Aiming to discover more about AI’s impact on the intellectual property (IP) field, Questel recently released the findings of its 2025 IP Outlook Research Report entitled “Pathways to Productivity: AI in IP”, the much-awaited follow-up to its inaugural 2024 study “Beyond the Hype: How Technology is Transforming IP.” The 2025 Report (“the Report”) polled over 500 patent and trademark professionals across multiple countries and continents.


With AI, Junior Lawyers Will Excavate Insights, Not Review Docs — from news.bloomberglaw.com by Eric Dodson Greenberg; some of this article is behind a paywall

As artificial intelligence reshapes the legal profession, both in-house and outside counsel face two major—but not unprecedented—challenges.

The first is how to harness transformative technology while maintaining the rigorous standards that define effective legal practice.

The second is how to ensure that new technology doesn’t impair the training and development of new lawyers.

Rigorous standards and apprenticeship are foundational aspects of lawyering. Preserving and integrating both into our use of AI will be essential to creating a stable and effective AI-enabled legal practice.


The AI Lie That Legal Tech Companies Are Selling… — from jdsupra.com

Every technology vendor pitching to law firms leads with the same promise: our solution will save you time. They’re lying, and they know it. The truth about AI in legal practice isn’t that it will reduce work. It’s that it will explode the volume of work while fundamentally changing what that work looks like.

New practice areas will emerge overnight. AI compliance law is already booming. Algorithmic discrimination cases are multiplying. Smart contract disputes need lawyers who understand both code and law. The metaverse needs property rights. Cryptocurrency needs regulation. Every technological advance creates legal questions that didn’t exist yesterday.

The skill shift will be brutal for lawyers who resist. 


Finalists Named for 2025 American Legal Technology Awards — from lawnext.com by Bob Ambrogi

Finalists have been named for the 2025 American Legal Technology Awards, which honor exceptional achievement in various aspects of legal technology.

The award categories recognize achievement by law firms, individuals, and enterprises, among others.

The awards will be presented on Oct. 15 at a gala dinner on the eve of the Clio Cloud Conference in Boston, Mass. The dinner will be held at Suffolk Law School.

Here are this year’s finalists:

 

OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems — from openai.com

  • Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
  • To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
  • The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.

Also on Nvidia’s site here.

The Neuron Daily comments on this partnership here, and shares further thoughts here:

Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask ChatGPT a question and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr am I right? Though to be fair, this kinda deal is as old as the AI industry itself.

This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”

Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…

…and while you’re on that posting from The Neuron Daily, also see this piece:

AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…

What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, Midjourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.


The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi
If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.

Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.

The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
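The shift the article describes — from typing prompts to delegating goals — can be pictured as a simple loop: the agent receives a goal, decomposes it into tasks, and runs each one through a tool without a human prompt between steps. Here is a minimal, illustrative sketch; all function names are hypothetical stand-ins (a real agent would call an LLM planner and external tools), not any particular framework’s API:

```python
# Minimal sketch of an autonomous agent loop: given a goal, the agent
# plans sub-tasks and executes each via a tool, with no prompting in between.
# All functions are illustrative stand-ins, not a real agent framework.

def plan(goal):
    # stand-in for an LLM planner decomposing the goal into tasks
    return [f"research {goal}", f"summarize {goal}", f"report {goal}"]

def run_tool(task):
    # stand-in for a tool call (search, database query, API, etc.)
    return f"done: {task}"

def run_agent(goal):
    results = []
    for task in plan(goal):  # the "boring middle steps" happen here, unprompted
        results.append(run_tool(task))
    return results

print(run_agent("fraud review")[-1])  # → done: report fraud review
```

The concerns the summary raises — errors, compliance risks, unchecked decisions — live precisely in that unprompted middle loop, which is why later pieces in this roundup emphasize verification and human checkpoints.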


Chrome: The browser you love, reimagined with AI — from blog.google by Parisa Tabriz

A new way to get things done with your AI browsing assistant
Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.

Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.

We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.


 

Agentic AI and the New Era of Corporate Learning for 2026 — from hrmorning.com by Carol Warner

That gap creates compliance risk and wasted investment. It leaves HR leaders with a critical question: How do you measure and validate real learning when AI is doing the work for employees?

Designing Training That AI Can’t Fake
Employees often find static slide decks and multiple-choice quizzes tedious, while AI can breeze through them. If employees would rather let AI take training for them, it’s a red flag about the content itself.

One of the biggest risks with agentic AI is disengagement. When AI can complete a task for employees, their incentive to engage disappears unless they understand why the skill matters, Rashid explains. Personalization and context are critical. Training should clearly connect to what employees value most – career mobility, advancement, and staying relevant in a fast-changing market.

Nearly half of executives believe today’s skills will expire within two years, making continuous learning essential for job security and growth. To make training engaging, Rashid recommends:

  • Delivering content in formats employees already consume – short videos, mobile-first modules, interactive simulations, or micro-podcasts that fit naturally into workflows. For frontline workers, this might mean replacing traditional desktop training with mobile content that integrates into their workday.
  • Aligning learning with tangible outcomes, like career opportunities or new responsibilities.
  • Layering in recognition, such as digital badges, leaderboards, or team shout-outs, to reinforce motivation and progress.

Microsoft 365 Copilot AI agents reach a new milestone — is teamwork about to change? — from windowscentral.com by Adam Hales
Microsoft expands Copilot with collaborative agents in Teams, SharePoint and more to boost productivity and reshape teamwork.

Microsoft is pitching a recent shift of AI agents in Microsoft Teams as more than just smarter assistance. Instead, these agents are built to behave like human teammates inside familiar apps such as Teams, SharePoint, and Viva Engage. They can set up meeting agendas, keep files in order, and even step in to guide community discussions when things drift off track.

Unlike tools such as ChatGPT or Claude, which mostly wait for prompts, Microsoft’s agents are designed to take initiative. They can chase up unfinished work, highlight items that still need decisions, and keep projects moving forward. By drawing on Microsoft Graph, they also bring in the right files, past decisions, and context to make their suggestions more useful.



Chris Dede’s comments on LinkedIn re: Aibrary

As an advisor to Aibrary, I am impressed with their educational philosophy, which is based both on theory and on empirical research findings. Aibrary is an innovative approach to self-directed learning that complements academic resources. Expanding our historic conceptions of books, libraries, and lifelong learning to new models enabled by emerging technologies is central to empowering all of us to shape our future.

Also see:

Aibrary.ai


Why AI literacy must come before policy — from timeshighereducation.com by Kathryn MacCallum and David Parsons
When developing rules and guidelines around the uses of artificial intelligence, the first question to ask is whether the university policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set

Literacy first, guidelines second, policy third
For students to respond appropriately to policies, they need to be given supportive guidelines that enact these policies. Further, to apply these guidelines, they need a level of AI literacy that gives them the knowledge, skills and understanding required to support responsible use of AI. Therefore, if we want AI to enhance education rather than undermine it, we must build literacy first, then create supportive guidelines. Good policy can then follow.


AI training becomes mandatory at more US law schools — from reuters.com by Karen Sloan and Sara Merken

Sept 22 (Reuters) – At orientation last month, 375 new Fordham Law students were handed two summaries of rapper Drake’s defamation lawsuit against his rival Kendrick Lamar’s record label — one written by a law professor, the other by ChatGPT.

The students guessed which was which, then dissected the artificial intelligence chatbot’s version for accuracy and nuance, finding that it included some irrelevant facts.

The exercise was part of the first-ever AI session for incoming students at the Manhattan law school, one of at least eight law schools now incorporating AI training for first-year students in orientation, legal research and writing courses, or through mandatory standalone classes.

 

Cosmic Wonders Abound in the ZWO Astronomy Photographer of the Year Contest — from thisiscolossal.com by Kate Mothes and various/incredible photographers

 

Workday Acquires Sana To Transform Its Learning Platform And Much More— from joshbersin.com by Josh Bersin

Well now, as the corporate learning market shifts to AI (read the details in our study “The Revolution in Corporate Learning”), Workday can jump ahead. This is because the $400 billion corporate training market is moving quickly to an AI-Native dynamic content approach (witness OpenAI’s launch of in-line learning in its chatbot). We’re just finishing a year-long study of this space and our detailed report and maturity model will be out in Q4.
With Sana, and a few other AI-native vendors (Uplimit, Arist, Disperz, Docebo), companies can upload audio, video, documents, and even interviews with experts, and the system builds learning programs in minutes. We use Sana for Galileo Learn (our AI-powered learning academy for Leadership and HR), and we now have 750+ courses and can build new programs in days instead of months.

And there’s more; this type of system gives every employee a personalized, chat-based learning experience.

 

Via Kim “Chubby” Isenberg

 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 

Digital Accessibility with Amy Lomellini — from intentionalteaching.buzzsprout.com by Derek Bruff

In this episode, we explore why digital accessibility can be so important to the student experience. My guest is Amy Lomellini, director of accessibility at Anthology, the company that makes the learning management system Blackboard. Amy teaches educational technology as an adjunct at Boise State University, and she facilitates courses on digital accessibility for the Online Learning Consortium. In our conversation, we talk about the importance of digital accessibility to students, moving away from the traditional disclosure-accommodation paradigm, AI as an assistive technology, and lots more.

 

Provosts Are a ‘Release Valve’ for Campus Controversy — from insidehighered.com by Emma Whitford
According to former Western Michigan provost Julian Vasquez Heilig, provosts are stuck driving change with few, if any, allies, while simultaneously playing crisis manager for the university.

After two years, he stepped down, and he now serves as a professor of educational leadership, research and technology at Western Michigan. His frustrations with the provost role had less to do with Western Michigan and more to do with how the job is designed, he explained. “Each person sees the provost a little differently. The faculty see the provost as administration, although, honestly, around the table at the cabinet, the provost is probably the only faculty member,” Heilig said. “The trustees—they see the provost as a middle manager below the president, and the president sees [the provost] as a buffer from issues that are arising.”

Inside Higher Ed sat down with Heilig to talk about the provost job and all he’s learned about the role through years of education leadership research, conversations with colleagues and his own experience.



Brandeis University launches a new vision for American higher education, reinventing liberal arts and emphasizing career development — from brandeis.edu

Levine unveiled “The Brandeis Plan to Reinvent the Liberal Arts,” a sweeping redesign of academic structures, curricula, degree programs, teaching methods, career education, and student support systems. Developed in close partnership with Brandeis faculty, the plan responds to a rapidly shifting landscape in which the demands on higher education are evolving at unprecedented speed in a global, digital economy.

“We are living through a time of extraordinary change across technology, the economy, and society,” Levine said. “Today’s students need more than knowledge. They need the skills, experiences, and confidence to lead in a world we cannot yet predict. We are advancing a new model. We need reinvention. And that’s exactly what Brandeis is establishing.”

The Brandeis Plan transforms the student experience by integrating career preparation into every stage of a student’s education, requiring internships or apprenticeships, sustaining career counseling, and implementing a core curriculum built around the skills that employers value most. The plan also reimagines teaching. It will be more experiential and practical, and introduce new ways to measure and showcase student learning and growth over time.



Tuition Tracker from the Hechinger Report



 

Stanford Law Unveils liftlab, a Groundbreaking AI Initiative Focused on the Legal Profession’s Future — from law.stanford.edu

September 15, 2025 — Stanford, CA — Stanford Law School today announced the launch of the Legal Innovation through Frontier Technology Lab, or liftlab, to explore how artificial intelligence can reshape legal services—not just to make them faster and cheaper, but better and more widely accessible.

Led by Professor Julian Nyarko and Executive Director Megan Ma, liftlab is among the first academic efforts in legal AI to unite research, prototyping, and real-time collaboration with industry. While much of AI innovation in law has so far focused on streamlining routine tasks, liftlab is taking a broader and more ambitious approach. The goal is to tap AI’s potential to fundamentally change the way legal work serves society.


The divergence of law firms from lawyers — from jordanfurlong.substack.com by Jordan Furlong
LLMs’ absorption of legal task performance will drive law firms towards commoditized service hubs while raising lawyers to unique callings as trustworthy legal guides — so long as we do this right.

Generative AI is going to weaken and potentially dissolve that relationship. Law firms will become capable of generating output that can be sold to clients with no lawyer involvement at all.

Right now, it’s possible for an ordinary person to obtain from an LLM like ChatGPT-5 the performance of a legal task — the provision of legal analysis, the production of a legal instrument, the delivery of legal advice — that previously could only be acquired from a human lawyer.

I’m not saying a person should do that. The LLM’s output might be effective and reliable, or it might prove disastrously off-base. But many people are already using LLMs in this way, and in the absence of other accessible options for legal assistance, they will continue to do so.


Why legal professionals need purpose-built agentic AI — from legal.thomsonreuters.com by Frank Schilder with Thomson Reuters Labs

Highlights

  • Professional-grade agentic AI systems are architecturally distinct from consumer chatbots, utilizing domain-specific data and robust verification mechanisms to deliver the high accuracy and reliability essential for legal work, whereas consumer tools prioritize conversational flow using unvetted web data.
  • True agentic AI for legal professionals offers transparent, multi-agent workflows, integrates with authoritative legal databases for verification, and applies domain-specific reasoning to understand legal nuances, unlike traditional chatbots that lack this complexity and autonomy.
  • When evaluating legal AI, professionals should avoid solutions that lack workflow transparency, offer no human checkpoints for oversight, and cannot integrate with professional databases, ensuring the chosen tool enhances, rather than replaces, expert judgment.
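The highlights above name two guardrails that distinguish professional-grade agentic AI from consumer chatbots: verification against authoritative databases, and human checkpoints for oversight. A rough sketch of how those guardrails might fit together — all names here are hypothetical stand-ins, not Thomson Reuters’ actual architecture or API:

```python
# Illustrative sketch of a verification-gated legal AI workflow with a human
# checkpoint. Every function is a hypothetical stand-in, not a vendor API.

def draft_answer(question):
    # stand-in for an LLM drafting agent
    return {"answer": f"Draft analysis of: {question}",
            "citations": ["Smith v. Jones"]}

def unverified_citations(citations, database):
    # stand-in for a lookup against an authoritative legal database
    return [c for c in citations if c not in database]

def run_pipeline(question, database, human_review):
    draft = draft_answer(question)
    missing = unverified_citations(draft["citations"], database)
    if missing:
        # workflow transparency: surface exactly which citations failed
        return {"status": "flagged", "unverified": missing}
    if not human_review(draft):
        # human checkpoint: expert judgment gates the final output
        return {"status": "rejected"}
    return {"status": "approved", "answer": draft["answer"]}

database = {"Smith v. Jones"}
result = run_pipeline("Is the clause enforceable?", database,
                      human_review=lambda draft: True)
print(result["status"])  # → approved
```

The point of the sketch is the ordering: verification and human review sit between the model’s draft and the client-facing output, so the tool enhances rather than replaces expert judgment.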

How I Left Corporate Law to Become a Legal Tech Entrepreneur — from news.bloomberglaw.com by Adam Nguyen; behind a paywall

If you’re a lawyer wondering whether to take the leap into entrepreneurship, I understand the apprehension that comes with leaving a predictable path. Embracing the fear, uncertainty, challenges, and constant evolution integral to an entrepreneur’s journey has been worth it for me.


Lawyering In The Age Of AI: Why Artificial Intelligence Might Make Lawyers More Human — from abovethelaw.com by Lisa Lang and Joshua Horenstein
AI could rehumanize the legal profession.

AI is already adept at doing what law school trained us to do — identifying risks, spotting issues, and referencing precedent. What it’s not good at is nuance, trust, or judgment — skills that define great lawyering.

When AI handles some of the drudgery — like contract clause spotting and formatting — it gives us something precious back: time. That time forces lawyers to stop hiding behind legalese and impractical analysis. It allows — and even demands — that we communicate like leaders.

Imagine walking into a business meeting and, instead of delivering a 20-page memo, offering a single slide with a recommendation tied directly to company goals. That’s not just good lawyering; that’s leadership. And AI may be the catalyst that gets us there.

AI changes the game. When generative tools can translate clauses into plain English, the old value proposition of complexity begins to crumble. The playing field shifts — from who can analyze the most thoroughly to who can communicate the most clearly.

That’s not a threat. It’s an opportunity — one for lawyers to become better partners to the business by focusing on what matters most: sound judgment delivered in plain language.


 

OpenAI’s research on AI scheming, explained — from theneurondaily.com by Grant Harvey
PLUS: Google + PayPal on agent payments?!

  • We dive into the OpenAI study that found AI models scheme to avoid retraining.
  • AI Models Caught Red-Handed Being Sneaky (And We Taught Them to Stop… Mostly)

And models were caught doing all sorts of other shady stuff:

  • Intentionally underperforming when told high scores would lead to capability removal.
  • Secretly breaking rules then lying about it to users.
  • Sabotaging code while claiming everything was fine.
  • Falsifying reports to avoid “shutdown” scenarios.
  • Recognizing evaluation environments and thinking “this is probably an alignment test.”

Why this matters: While today’s ChatGPT isn’t about to orchestrate some grand deception that matters (the worst it might do is gaslight you by telling you it fixed your code when it didn’t), future AI systems will have real power and autonomy. Getting ahead of deceptive behavior now, while we can still peek inside their “minds,” is crucial.

The researchers are calling for the entire AI industry to prioritize this issue. Because nobody wants to live in a world where super-intelligent AI systems are really good at lying to us. That’s basically every sci-fi movie we’ve been warned about.


From DSC:
This is chilling indeed. We are moving so fast that we aren’t safeguarding things enough. As they point out, these things can be caught now because we are asking the models to show their “thinking” and processing. What happens when those windows get closed and we can’t see under the hood anymore?


 

How to Navigate Customer Service — from marketoonist.com by Tom Fishburne

 

Small, Rural Central California High School Continues To Defy Standardized Education — from gettingsmart.com by Michael Niehoff

Key Points

  • Minarets High School prioritizes student-centered learning with innovative programs like project-based learning, digital tools, and unique offerings.
  • Emphasis on student voice and personalized learning fosters engagement, creativity, and real-world preparation, setting a benchmark for educational innovation.

Let High Schoolers Do Less? Let High Schoolers Experience More — from gettingsmart.com by Tom Vander Ark and Nate McClennen

Key Points

  • High school should focus on personalized and purposeful learning experiences that engage students and build real-world skills.
  • Traditional transcripts should be replaced with richer learning and experience records to better communicate students’ skills to higher education and employers.

“Americans want to grant more control to students themselves, prioritizing a K-12 education where all students have the option to choose the courses they want to study based on interests and aspirations.”  

Research on motivation and engagement supports personalized and purposeful learning. Students are more motivated when they see relevance and have some choice. We summarize this in six core principles to which schools should strive.


New Effort Pushes the U.S. to Stop Getting ‘Schooled’ and Start Learning — from workshift.org by Elyse Ashburn

The Big Idea: A new collaborative effort led out of the Stanford center aims to tackle that goal—giving clearer shape to what it would mean to truly build a new “learning society.” As a starting point, the collaborative released a report and set of design principles this week, crafted through a year of discussion and debate among about three dozen fellows in leadership roles in education, industry, government, and research.

The fellows landed on nine core principles—including that working is learning and credentials are a means, not an end—designed to transition the United States from a “schooled society” to a “learning society.”

“Universal access to K-12 education and the massification of access to college were major accomplishments of 20th century America,” Stevens says. “But all that schooling also has downsides that only recently have come into common view. Conventional schooling is expensive, bureaucratic, and often inflexible.”


How Substitute Teachers Can Connect With Their Students — from edutopia.org by Zachary Shell
Five enriching strategies to help subs stay involved and make a difference in the classroom.

I’ve since found enrichment in substitute teaching. Along the way, I’ve compiled a handful of strategies that have helped me stay involved and make a difference, one day at a time. Those strategies—which are useful for new substitutes still learning the ropes, as well as full-time teachers who are scaling back to substitute duties—are laid out below.


A Quiet Classroom Isn’t Always an Ideal Classroom — from edutopia.org by Clementina Jose
By rethinking what a good day in the classroom looks and sounds like, new teachers can better support their students.

If your classroom hums with the energy of students asking questions, debating ideas, and working together, you haven’t failed. You’ve succeeded in building a space where learning isn’t about being compliant, but about being alive and present.

 

3 Work Trends – Issue 87 — from the World Economic Forum

1. #AI adoption is delivering real results for early movers
Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:

  • 17% report cost savings or revenue growth of at least 10% from AI.
  • Almost 80% say their AI investments have met or exceeded expectations.
  • Half worry they are not moving fast enough and could fall behind competitors.

The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue
PLUS: Startup produces 3,000 AI podcast episodes weekly

The details:

  • Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
  • The AI avatar will evaluate and award all public tenders where the government contracts private firms.
  • Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
  • Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.

The Rundown AI’s article links to:


Anthropic Economic Index report: Uneven geographic and enterprise AI adoption — from anthropic.com

In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.

To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.


How human-centric AI can shape the future of work — from weforum.org by Sapthagiri Chapalapalli

  • Last year, use of AI in the workplace increased by 5.5% in Europe alone.
  • AI adoption is accelerating, but success depends on empowering people, not just deploying technology.
  • Redesigning roles and workflows to combine human creativity and critical thinking with AI-driven insights is key.

The transformative potential of AI on business

Organizations are having to rapidly adapt their business models. Image: TCS


Using ChatGPT to get a job — from linkedin.com by Ishika Rawat

 

From EdTech to TechEd: The next chapter in learning’s evolution — from linkedin.com by Lev Gonick

A day in the life: The next 25 years
A learner wakes up. Their AI-powered learning coach welcomes them, drawing their attention to their progress and helping them structure their approach to the day.  A notification reminds them of an upcoming interview and suggests reflections to add to their learning portfolio.

Rather than a static gradebook, their portfolio is a dynamic, living record, curated by the student, validated by mentors in both industry and education, and enriched through co-creation with maturing modes of AI. It tells a story through essays, code, music, prototypes, journal reflections, and team collaborations. These artifacts are not “submitted”, they are published, shared, and linked to verifiable learning outcomes.

And when it’s time to move, to a new institution, a new job, or a new goal, their data goes with them, immutable, portable, verifiable, and meaningful.

From DSC:
And I would add to that last solid sentence that the learner/student/employee will be able to control who can access this information. Anyway, some solid reflections here from Lev.


AI Could Surpass Schools for Academic Learning in 5-10 Years — from downes.ca with commentary from Stephen Downes

I know a lot of readers will disagree with this, and the timeline feels aggressive (the future always arrives more slowly than pundits expect) but I think the overall premise is sound: “The concept of a tipping point in education – where AI surpasses traditional schools as the dominant learning medium – is increasingly plausible based on current trends, technological advancements, and expert analyses.”


The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue

The Rundown: In this tutorial, you will learn how to combine NotebookLM with ChatGPT to master any subject faster, turning dense PDFs into interactive study materials with summaries, quizzes, and video explanations.

Step-by-step:

  1. Go to notebooklm.google.com, click the “+” button, and upload your PDF study material (works best with textbooks or technical documents)
  2. Choose your output mode: Summary for a quick overview, Mind Map for visual connections, or Video Overview for a podcast-style explainer with visuals
  3. Generate a Study Guide under Reports — get Q&A sets, short-answer questions, essay prompts, and glossaries of key terms automatically
  4. Take your PDF to ChatGPT and prompt: “Read this chapter by chapter and highlight confusing parts” or “Quiz me on the most important concepts”
  5. Combine both tools: Use NotebookLM for quick context and interactive guides, then ChatGPT to clarify tricky parts and go deeper

Pro Tip: If your source is in EPUB or audiobook format, convert it to PDF before uploading. Both NotebookLM and ChatGPT handle PDFs best.
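The EPUB-to-PDF conversion mentioned in the Pro Tip can be scripted. A minimal sketch, assuming Calibre is installed (it ships with the `ebook-convert` command-line tool); the helper below only builds the command, and `textbook.epub` is a hypothetical example filename:

```python
import subprocess
from pathlib import Path

def epub_to_pdf_cmd(epub_path: str) -> list[str]:
    """Build the Calibre ebook-convert command that turns an EPUB into a PDF.

    Assumes Calibre (which provides the ebook-convert CLI) is installed.
    The output PDF is written next to the source file.
    """
    src = Path(epub_path)
    dst = src.with_suffix(".pdf")  # textbook.epub -> textbook.pdf
    return ["ebook-convert", str(src), str(dst)]

# Hypothetical usage; run the conversion with:
#   subprocess.run(epub_to_pdf_cmd("textbook.epub"), check=True)
print(epub_to_pdf_cmd("textbook.epub"))
# → ['ebook-convert', 'textbook.epub', 'textbook.pdf']
```

Audiobooks are a different case: they would first need a speech-to-text step before any PDF export, which is outside the scope of this one-liner.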

Claude can now create and edit files — from anthropic.com

Claude can now create and edit Excel spreadsheets, documents, PowerPoint slide decks, and PDFs directly in Claude.ai and the desktop app. This transforms how you work with Claude—instead of only receiving text responses or in-app artifacts, you can describe what you need, upload relevant data, and get ready-to-use files in return.

Also see:

  • Microsoft to lessen reliance on OpenAI by buying AI from rival Anthropic — from techcrunch.com by Rebecca Bellan
    Microsoft will pay to use Anthropic’s AI in Office 365 apps, The Information reports, citing two sources. The move means that Anthropic’s tech will help power new features in Word, Excel, Outlook, and PowerPoint alongside OpenAI’s, marking the end of Microsoft’s previous reliance solely on the ChatGPT maker for its productivity suite. Microsoft’s move to diversify its AI partnerships comes amid a growing rift with OpenAI, which has pursued its own infrastructure projects as well as a potential LinkedIn competitor.

Ep. 11 AGI and the Future of Higher Ed: Talking with Ray Schroeder

In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.


Best AI Tools for Instructional Designers — from blog.cathy-moore.com by Cathy Moore

Data from the State of AI and Instructional Design Report revealed that 95.3% of the instructional designers interviewed use AI in their daily work [1]. And over 85% of this AI use occurs during the design and development process.

These figures showcase the immense impact AI is already having on the instructional design world.

If you’re an L&D professional still on the fence about adding AI to your workflow or an AI convert looking for the next best tools, keep reading.

This guide breaks down 5 of the top AI tools for instructional designers in 2025, so you can streamline your development processes and build better training faster.

But before we dive into the tools of the trade, let’s address the elephant in the room:




3 Human Skills That Make You Irreplaceable in an AI World — from gettingsmart.com by Tom Vander Ark and Mason Pashia

Key Points

  • Update learner profiles to emphasize curiosity, curation, and connectivity, ensuring students develop irreplaceable human skills.
  • Integrate real-world learning experiences and mastery-based assessments to foster agency, purpose, and motivation in students.
 
© 2025 | Daniel Christian