You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand.
Could Your Next Side Hustle Be Training AI? — from builtin.com by Jeff Rumage As automation continues to reshape the labor market, some white-collar professionals are cashing in by teaching AI models to do their jobs.
Summary: Artificial intelligence may be replacing jobs, but it’s also creating some new ones. Professionals in fields like medicine, law and engineering can earn big money training AI models, teaching them human skills and expertise that may one day make those same jobs obsolete.
Here’s the thing: voice is finally good enough to replace typing. And I mean actually good enough, not “Siri, play Despacito” good enough.
To paraphrase Andrej Karpathy’s famous line that “the hottest new programming language is English”: in this case, the hottest new user interface is talking.
The Great Convergence: Why Voice Is Having Its Moment Three massive shifts just collided to make voice interfaces inevitable.
First, speech recognition stopped being terrible. …
Second, our devices got ears everywhere. …
Third, and most importantly: LLMs made voice assistants smart enough to be worth talking to. …
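To make that third shift concrete, here is a minimal sketch of the talk-to-the-machine loop being described: record speech, transcribe it, hand the text to an LLM, and speak the answer back. It assumes the OpenAI Python SDK with an API key in the environment; the model names, voice, and file names are illustrative choices of mine, not anything from the original post.

```python
# Minimal voice-interface loop: speech -> LLM -> speech.
# Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment;
# model names, voice, and file names are illustrative only.
from openai import OpenAI

client = OpenAI()

# 1) Speech recognition "stopped being terrible": transcribe a recording.
with open("question.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    )

# 2) The LLM is what makes the assistant worth talking to.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3) Text-to-speech: read the answer back aloud.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")
print(answer)
```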
Update on November 20, 2025: Early feedback from the pilot has been positive, so we’re expanding group chats to all logged-in users on ChatGPT Free, Go, Plus and Pro plans globally over the coming days. We will continue refining the experience as more people start using it.
Today, we’re beginning to pilot a new experience in a few regions that makes it easy for people to collaborate with each other—and with ChatGPT—in the same conversation. With group chats, you can bring friends, family, or coworkers into a shared space to plan, make decisions, or work through ideas together.
Whether you’re organizing a group dinner or drafting an outline with coworkers, ChatGPT can help. Group chats are separate from your private conversations, and your personal ChatGPT memory is never shared with anyone in the chat.
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD On Custom Instructions with GenAI Tools….
Today I’m sharing about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and invite readers to share in the comments some of the custom instructions they find helpful.
I’ve been in a few conversations lately that reminded me that not everyone knows about custom instructions (even some of the seasoned folks around GenAI), or how you might set them up to better support your work. And, of course, like all things GenAI, they are highly imperfect!
I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.
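From DSC: For readers who haven’t tried custom instructions yet, here is a hypothetical example of the kind of text you might paste into the custom-instructions settings of ChatGPT, Gemini, or Claude. It illustrates the format only; for Lance’s actual instructions, see his post.

```text
Hypothetical custom instructions (an illustration, not Lance's actual text):

About me: I work in higher education and write about teaching, learning,
and AI. Assume an audience of faculty and staff.

How to respond:
- Be concise by default; expand only when I ask for depth.
- Flag uncertainty rather than guessing, and say when you might be wrong.
- Ask one clarifying question before starting long or ambiguous tasks.
- When I paste a draft, critique it before suggesting any rewrites.
```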
Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real-world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.
In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.
This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.
From DSC: The above item was from The Rundown AI, which wrote the following:
The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.
The details:
The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
The threat was assessed with “high confidence” to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.
Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.
We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.
Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.
This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, vulnerability scanning at a more elevated level. The companies that don’t adapt will be sitting ducks, waiting to be overwhelmed by similar attacks.
If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…
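From DSC: To make the defensive side concrete, below is a minimal sketch of LLM-assisted log triage, the simplest version of the “threat detection” idea above. It assumes the Anthropic Python SDK with an API key in the environment; the model name, prompt, and sample log line are illustrative, and a real deployment would add batching, rate limits, and human review of anything flagged.

```python
# Minimal sketch of LLM-assisted log triage.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set in the
# environment; model name, prompt, and log format are illustrative only.
import anthropic

client = anthropic.Anthropic()

def triage(log_line: str) -> str:
    """Ask the model to rate a log line as benign, suspicious, or malicious."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; use a current model
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": (
                "Classify this authentication log line as benign, suspicious, "
                "or malicious, with one sentence of reasoning:\n" + log_line
            ),
        }],
    )
    return msg.content[0].text

# 203.0.113.0/24 is a reserved documentation range, used here as sample data.
print(triage("Failed password for root from 203.0.113.7 port 52044 ssh2"))
```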
Free Music Discovery Tools — from wondertools.substack.com by Jeremy Caplan and Chris Dalla Riva Travel through time and around the world with sound
I love apps like Metronaut and Tomplay, which let me carry a collection of classical (sheet) music on my phone. They also provide piano or orchestral accompaniment for any violin piece I want to play.
Today’s post shares 10 other recommended tools for music lovers from my fellow writer and friend, Chris Dalla Riva, who writes Can’t Get Much Higher, a popular Substack focused on the intersection of music and data. I invited Chris to share with you his favorite resources for discovering, learning, and creating music.
Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.
Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.
Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.
Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”
Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.
The details:
OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.
Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.
Which linked to:
AI progress and recommendations — from openai.com AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.
From DSC: I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be misused.
Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive potential and downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).
Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.
Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.
Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.
Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:
Prompt share – ASMR draw a living animal with oil paint
Prompt: close-up shot of a hand holding a paintbrush, painting on a white sheet of paper placed on a wooden desk. As the brush glides, vivid color paint flows smoothly then suddenly transforms into living [ANIMAL] [COLOR]… pic.twitter.com/rRu6oTwzlP
From DSC: One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.
Nvidia has officially become the first company in history to cross the $5 trillion market cap, cementing its position as the undisputed leader of the AI era. Just three months ago, the chipmaker hit $4 trillion; it’s already added another trillion since.
Nvidia market cap milestones:
Jan 2020: $144 billion
May 2023: $1 trillion
Feb 2024: $2 trillion
Jun 2024: $3 trillion
Jul 2025: $4 trillion
Oct 2025: $5 trillion
The above posting linked to:
Nvidia becomes first public company worth $5 trillion — from techcrunch.com by Ivan Mehta The biggest beneficiary of the ongoing AI boom, Nvidia has become the first public company to pass the $5 trillion market cap milestone.
At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, these new features aren’t about replacing creators; they’re about empowering them with superpowers they can actually control.
Adobe’s new plan is to put an AI co-pilot in every single app.
For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.
Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.
On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.
From DSC: As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.
The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.
My take is this: in all of the anxiety lies a crucial and long-overdue opportunity to deliver better learning experiences. Precisely because Atlas perceives the same context in the same moment as you, it can transform learning into a process aligned with core neuro-scientific principles—including active retrieval, guided attention, adaptive feedback and context-dependent memory formation.
Perhaps in Atlas we have a browser that for the first time isn’t just a portal to information, but one which can become a co-participant in active cognitive engagement—enabling iterative practice, reflective thinking, and real-time scaffolding as you move through challenges and ideas online.
With this in mind, I put together 10 use cases for Atlas for you to try for yourself.
…
6. Retrieval Practice
What: Pulling information from memory drives retention better than re-reading. Why: Practice testing delivers medium-to-large effects (Adesope et al., 2017). Try: Open a document with your previous notes. Ask Atlas for a mixed activity set: “Quiz me on the Krebs cycle—give me a near-miss, high-stretch MCQ, then a fill-in-the-blank, then ask me to explain it to a teen.” Atlas uses its browser memory to generate targeted questions from your actual study materials, supporting spaced, varied retrieval.
From DSC: A quick comment. I appreciate these ideas and approaches from Katarzyna and Rita. I do think someone will need to make sure the AI models/platforms/tools are given up-to-date information and updated instructions — i.e., any new procedures, steps to take, etc. Perhaps I’m missing the boat here, but an internal AI platform is only as good as its access to current information and instructions.
Edtech firm Chegg confirmed Monday it is reducing its workforce by 45%, or 388 employees globally, and its chief executive officer is stepping down. Current CEO Nathan Schultz will be replaced effective immediately by executive chairman (and former CEO) Dan Rosensweig. The rise of AI-powered tools has dealt a massive blow to the online homework helper and led to “substantial” declines in revenue and traffic. Company shares have slipped over 10% this year. Chegg recently explored a possible sale, but ultimately decided to keep the company intact.
At the most recent NVIDIA GTC conference, held in Washington, D.C. in October 2025, CEO Jensen Huang announced major developments emphasizing the use of AI to “reindustrialize America”. This included new partnerships, expansion of the Blackwell architecture, and advancements in AI factories for robotics and science. The spring 2024 GTC conference, meanwhile, was headlined by the launch of the Blackwell GPU and significant updates to the Omniverse and robotics platforms.
During the keynote in D.C., Jensen Huang focused on American AI leadership and announced several key initiatives.
Massive Blackwell GPU deployments: The company announced an expansion of its Blackwell GPU architecture, which first launched in March 2024. Reportedly, the company has already shipped 6 million Blackwell chips, with orders for 14 million more by the end of 2025.
AI supercomputers for science: In partnership with the Department of Energy and Oracle, NVIDIA is building new AI supercomputers at Argonne National Laboratory. The largest, named “Solstice,” will deploy 100,000 Blackwell GPUs.
6G infrastructure: NVIDIA announced a partnership with Nokia to develop a U.S.-based, AI-native 6G technology stack.
AI factories for robotics: A new AI Factory Research Center in Virginia will use NVIDIA’s technology for building massive-scale data centers for AI.
Autonomous robotaxis: The company’s self-driving technology, already adopted by several carmakers, will be used by Uber for an autonomous fleet of 100,000 robotaxis starting in 2027.
Nvidia (NVDA) and Uber (UBER) on Tuesday revealed that they’re working to put together what they say will be the world’s largest network of Level 4-ready autonomous cars.
The duo will build out 100,000 vehicles beginning in 2027 using Nvidia’s Drive AGX Hyperion 10 platform and Drive AV software.
Nvidia (NVDA) stock on Tuesday rose 5% to close at a record high after the company announced a slew of product updates, partnerships, and investment initiatives at its GTC event in Washington, D.C., putting it on the doorstep of becoming the first company in history with a market value above $5 trillion.
The AI chip giant is approaching the threshold — settling at a market cap of $4.89 trillion on Tuesday — just months after becoming the first to close above $4 trillion in July.
The Bull and Bear Case For the AI Bubble, Explained — from theneuron.ai by Grant Harvey AI is both a genuine technological revolution and a massive financial bubble, and the defining question is whether miraculous progress can outrun the catastrophic, multi-trillion-dollar cost required to achieve it.
This sets the stage for the defining conflict of our technological era. The narrative has split into two irreconcilable realities. In one, championed by bulls like venture capitalist Marc Andreessen and NVIDIA CEO Jensen Huang, we are at the dawn of “computer industry V2”—a platform shift so profound it will unlock unprecedented productivity and reshape civilization.
In the other, detailed by macro investors like Julien Garran and forensic bears like writer Ed Zitron, AI is a historically massive, circular, debt-fueled mania built on hype, propped up by a handful of insiders, and destined for a collapse that will make past busts look quaint.
This is a multi-layered conflict playing out across public stock markets, the private venture ecosystem, and the fundamental unit economics of the technology itself. To understand the future, and whether it holds a revolution, a ruinous crash, or a complex mixture of both, we must dissect every layer of the argument, from the historical parallels to the hard financial data and the technological critiques that question the very foundation of the boom.
From DSC:
I second what Grant said at the beginning of his analysis:
The following is shared for educational purposes and is not intended to be financial advice; do your own research!
But I post this because Grant provides both sides of the argument very well.
From DSC: Stephen has some solid reflections and asks some excellent questions in this posting, including:
The question is: how do we optimize an AI to support learning? Will one model be enough? Or do we need different models for different learners in different scenarios?
A More Human University: The Role of AI in Learning — from er.educause.edu by Robert Placido Far from heralding the collapse of higher education, artificial intelligence offers a transformative opportunity to scale meaningful, individualized learning experiences across diverse classrooms.
The narrative surrounding artificial intelligence (AI) in higher education is often grim. We hear dire predictions of an “impending collapse,” fueled by fears of rampant cheating, the erosion of critical thinking, and the obsolescence of the human educator. This dystopian view, however, is a failure of imagination. It mistakes the death rattle of an outdated pedagogical model for the death of learning itself. The truth is far more hopeful: AI is not an asteroid coming for higher education. It is a catalyst that can finally empower us to solve our oldest, most intractable problem: the inability to scale deep, engaged, and truly personalized learning.
Increasing the rate of scientific progress is a core part of Anthropic’s public benefit mission.
We are focused on building the tools to allow researchers to make new discoveries – and eventually, to allow AI models to make these discoveries autonomously.
Until recently, scientists typically used Claude for individual tasks, like writing code for statistical analysis or summarizing papers. Pharmaceutical companies and others in industry also use it for tasks across the rest of their business, like sales, to fund new research. Now, our goal is to make Claude capable of supporting the entire process, from early discovery through to translation and commercialization.
To do this, we’re rolling out several improvements that aim to make Claude a better partner for those who work in the life sciences, including researchers, clinical coordinators, and regulatory affairs managers.
AI as an access tool for neurodiverse and international staff — from timeshighereducation.com by Vanessa Mar-Molinero Used transparently and ethically, GenAI can level the playing field and lower the cognitive load of repetitive tasks for admin staff, student support and teachers
Where AI helps without cutting academic corners
When framed as accessibility and quality enhancement, AI can support staff to complete standard tasks with less friction. However, while it supports clarity, consistency and inclusion, generative AI (GenAI) does not replace disciplinary expertise, ethical judgement or the teacher–student relationship. These are ways it can be put to effective use:
The Sleep of Liberal Arts Produces AI — from aiedusimplified.substack.com by Lance Eaton, Ph.D. A keynote at the AI and the Liberal Arts Symposium Conference
This past weekend, I had the honor of being the keynote speaker at a really fantastic conference, the AI and the Liberal Arts Symposium at Connecticut College. I had shared a bit about this before in my interview with Lori Looney. It was an incredible conference, thoughtfully composed, with a lot of things to chew on and think about.
It was also an entirely new talk, in a slightly different context from many of my other talks and workshops, and something I had to build from the ground up. It reminded me in some ways of last year’s “What If GenAI Is a Nothingburger”.
It was a real challenge, one I worked on, off and on, for months, trying to figure out the right balance. It’s a work I feel proud of because of the balancing act I try to navigate. So, as always, it’s here for others to read and engage with. And, of course, here is the slide deck as well (with CC license).