Free Music Discovery Tools — from wondertools.substack.com by Jeremy Caplan and Chris Dalla Riva
Travel through time and around the world with sound
I love apps like Metronaut and Tomplay, which let me carry a collection of classical (sheet) music on my phone. They also provide piano or orchestral accompaniment for any violin piece I want to play.
Today’s post shares 10 other recommended tools for music lovers from my fellow writer and friend, Chris Dalla Riva, who writes Can’t Get Much Higher, a popular Substack focused on the intersection of music and data. I invited Chris to share with you his favorite resources for discovering, learning, and creating music.
Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.
Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.
Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.
Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”
Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.
The details:
OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.
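As a back-of-the-envelope illustration of the cost claim in the bullets above, here is a sketch of what a compounding 40x-per-year decline would mean for a hypothetical $1,000 task (the starting cost is an assumption for illustration, not OpenAI's figure):

```python
# Illustrative only: what a 40x-per-year drop in the cost of
# "intelligence" implies for a task that costs $1,000 today.
initial_cost = 1000.0  # hypothetical cost today, in dollars
decline_factor = 40    # the claimed annual cost reduction

for year in range(3):
    cost = initial_cost / decline_factor ** year
    print(f"year {year}: ${cost:,.2f}")
```

At that rate, the $1,000 task would cost $25 after one year and about 62 cents after two — if the claimed rate actually held.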
Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.
Which linked to:
AI progress and recommendations — from openai.com
AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.
From DSC: I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.
Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).
Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.
Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.
Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.
Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:
At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, these new features aren’t about replacing creators; they’re about empowering them with superpowers they can actually control.
Adobe’s new plan is to put an AI co-pilot in every single app.
For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.
Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.
On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.
From DSC: As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.
The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.
From DSC: I posted an excerpt of this in another posting, but I wanted to highlight these two powerful, extremely well-done video series for those who might be interested in them.
The House of David is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions of the time. Both series are an answer to prayer for me and many others, as they are professionally done. Both series match anything that comes out of Hollywood in terms of the acting, the scriptwriting, the music, the sets, etc.
A sampling of others who cover The Chosen includes:
A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.
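Pew reports these figures as cross-country medians: the survey is fielded separately in each of the 25 countries, and the headline number is the median of the 25 country-level percentages, not a share of all respondents pooled together. A minimal sketch of that calculation (the per-country values below are illustrative placeholders, not Pew’s data):

```python
import statistics

# Hypothetical per-country shares of adults who are "more concerned
# than excited" about AI (illustrative values only, not Pew's data).
more_concerned = [28, 30, 31, 33, 34, 36, 38, 40, 45]

# Pew-style headline figure: the median across countries,
# not a population-weighted average of all respondents.
print(statistics.median(more_concerned))  # -> 34
```

That is why a country far above or below the pack shifts the headline number much less than it would shift a pooled average.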
Veo 3.1 brings richer audio and object-level editing to Google Flow
Sora 2 is here with Cameo self-insertion and collaborative Remix features
Ray3 brings world-first reasoning and HDR to video generation
Kling 2.5 Turbo delivers faster, cheaper, more consistent results
WAN 2.5 revolutionizes talking head creation with perfect audio sync
House of David Season 2 Trailer
HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
Image & Video Prompts
From DSC: By the way, the House of David (which Heather referred to) is very well done! See my comments about it and The Chosen above.
[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.
AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.
With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.
ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed
OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.
Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.
The latest entry in AI browsers is Atlas, a new browser from OpenAI. Atlas will feel familiar if you’ve used Dia or Comet. It has an “Ask ChatGPT” sidebar with the context of your page, and you can choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.
One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.
Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.
The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.
Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.
In short, it’s been a monumental 12 months for AI. Our eighth annual report is the most comprehensive it’s ever been, covering what you need to know about research, industry, politics, and safety – along with our first State of AI Usage Survey of 1,200 practitioners.
The 4 Rs framework
Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
Reskilling should focus on learning future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.
… Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.
OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from theneuron.ai by Grant Harvey
OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.
What Sora 2 can do
Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
Follow intricate multi-shot instructions while maintaining world state across scenes.
Create realistic background soundscapes, dialogue, and sound effects automatically.
Insert YOU into any video after a quick one-time recording (they call this “cameos”).
The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, depicting the behind-the-scenes of Sora 2’s launch day…
Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg
OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips
Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.
Also along the lines of creating digital video, see:
What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.
The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.
For creators, this means the skill ceiling is no longer defined by technical know-how, it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.
Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product. With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.
Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.
Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask ChatGPT a question and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr, am I right? Though to be fair, this kind of deal is as old as the AI industry itself.
This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”
… Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…
AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…
SKYLAND | AI Short Film Fantasy
Skyland is an AI-powered fantasy short film that takes you on a breathtaking journey with Aeryn Solveth and his loyal flying fish. From soaring above the futuristic city of Cybryne to returning to his homeland of Eryndor, Aeryn’s adventure is… https://t.co/Lz6UUxQvEx pic.twitter.com/cYXs9nwTX3
What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, MidJourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.
The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi
If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.
Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.
The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
A new way to get things done with your AI browsing assistant
Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.
Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.
We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.
Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between.
Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.
Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.
Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.
A new study measuring the use of generative artificial intelligence in different professions has just gone public, and its main message to people working in some fields is harsh. It suggests translators, historians, text writers, sales representatives, and customer service agents might want to consider new careers as pile driver or dredge operators, railroad track layers, hardwood floor sanders, or maids — if, that is, they want to lower the threat of AI apps pushing them out of their current jobs.
From DSC: Unfortunately, this is where the hyperscalers are going to get their ROI from all of the capital expenditures that they are making. Companies are going to use their services in order to reduce headcount at their organizations. CEOs are even beginning to brag about the savings realized by the use of AI-based technologies (or so they claim):
“As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.”
My first position out of college was being a Customer Service Representative at Baxter Healthcare. It was my most impactful job, as it taught me the value of a customer. From then on, whoever I was trying to assist was my customer — whether they were internal or external to the organization that I was working for. Those kinds of jobs are so important. If they evaporate, what then? How will young people/graduates get their start?
Alex’s take: We’re seeing browsers fundamentally transition from search engines → answer engines → action engines. Gone are the days of having to trawl through pages of search results. Commands are the future. They are the direct input to arrive at the outcomes we sought in the first place, such as booking a hotel or ordering food. I’m interested in watching Microsoft’s bet develop as browsers become collaborative (and proactive) assistants.
Amazon just invested in an AI that can create full TV episodes—and it wants you to star in them.
Remember when everyone lost their minds over AI generating a few seconds of video? Well, Amazon just invested in a company called Fable Studio, whose Showrunner system can generate entire 22-minute TV episodes.
… Where does this go from here? Imagine asking AI to rewrite the ending of Game of Thrones, or creating a sitcom where you and your friends are the main characters. This type of tech could create personalized entertainment experiences just like that.
Our take: Without question, we’re moving toward a world where every piece of media can be customized to you personally. Your Netflix could soon generate episodes where you’re the protagonist, with storylines tailored to your interests and sense of humor.
And if this technology scales, the entire entertainment industry could flip upside down. The pitch goes: why watch someone else’s story when you can generate your own?
The End of Work as We Know It — from gizmodo.com by Luc Olinga CEOs call it a revolution in efficiency. The workers powering it call it a “new era in forced labor.” I spoke to the people on the front lines of the AI takeover.
Yet, even in this vision of a more pleasant workplace, the specter of displacement looms large. Miscovich acknowledges that companies are planning for a future where headcount could be “reduced by 40%.” And Clark is even more direct. “A lot of CEOs are saying that, knowing that they’re going to come up in the next six months to a year and start laying people off,” he says. “They’re looking for ways to save money at every single company that exists.”
But we do not have much time. As Clark told me bluntly: “I am hired by CEOs to figure out how to use AI to cut jobs. Not in ten years. Right now.”
Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.
Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.
Today, we’re dropping the world’s first AI-native social feed.
Feed from Character.AI is a dynamic, scrollable content platform that connects users with the latest Characters, Scenes, Streams, and creator-driven videos in one place.
This is a milestone in the evolution of online entertainment.
For the last 10 years, social platforms have been all about passive consumption. The Character.AI Feed breaks that paradigm and turns content into a creative playground. Every post is an invitation to interact, remix, and build on what others have made. Want to rewrite a storyline? Make yourself the main character? Take a Character you just met in someone else’s Scene and pop it into a roast battle or a debate? Now it’s easy. Every story can have a billion endings, and every piece of content can change and evolve with one tap.
Today, we’re introducing powerful enhancements to our Firefly Video Model, including improved motion fidelity and advanced video controls that will accelerate your workflows and provide the precision and style you need to elevate your storytelling. We are also adding new generative AI partner models within Generate Video on Firefly, giving you the power to choose which model works best for your creative needs across image, video and sound.
Plus, our new workflow tools put you in control of your video’s composition and style. You can now layer in custom-generated sound effects right inside the Firefly web app — and start experimenting with AI-powered avatar-led videos.
… Generate Sound Effects (beta)
Sound is a powerful storytelling tool that adds emotion and depth to your videos. Generate Sound Effects (beta) makes it easy to create custom sounds, like a lion’s roar or ambient nature sounds, that enhance your visuals. And like our other Firefly generative AI models, Generate Sound Effects (beta) is commercially safe, so you can create with confidence.
Just type a simple text prompt to generate the sound effect you need. Want even more control? Use your voice to guide the timing and intensity of the sound. Firefly listens to the energy and rhythm of your voice to place sound effects precisely where they belong — matching the action in your video with cinematic timing.
NAMLE 2025 Conference
Join us for the largest professional development conference dedicated to media literacy education in the U.S. on July 11-12, 2025.
From Pre-K to Higher Education, Community Education and Libraries, the conference provides valuable resources, technology, teacher practice and pedagogy, assessments, and core concepts of media literacy education.
According to a new report from Enkrypt AI, multimodal models have opened the door to sneakier attacks (like Ocean’s Eleven, but with fewer suits and more prompt injections).
Naturally, Enkrypt decided to run a few experiments… and things escalated quickly.
They tested two of Mistral’s newest models—Pixtral-Large and Pixtral-12B, built to handle words and visuals.
What they found? Yikes:
The models are 40x more likely to generate dangerous chemical / biological / nuclear info.
And 60x more likely to produce child sexual exploitation material compared to top models like OpenAI’s GPT-4o or Anthropic’s Claude 3.7 Sonnet.
Get the 2025 Student Guide to Artificial Intelligence — from studentguidetoai.org
This guide is made available under a Creative Commons license by Elon University and the American Association of Colleges and Universities (AAC&U).
Agentic AI is taking these already huge strides even further. Rather than simply asking a question and receiving an answer, an AI agent can assess your current level of understanding and tailor a reply to help you learn. It can also help you come up with a timetable and personalized lesson plan to make you feel as though you have a one-on-one instructor walking you through the process. If your goal is to learn to speak a new language, for example, an agent might map out a plan starting with basic vocabulary and pronunciation exercises, then progress to simple conversations, grammar rules and finally, real-world listening and speaking practice.
…
For instance, if you’re an entrepreneur looking to sharpen your leadership skills, an AI agent might suggest a mix of foundational books, insightful TED Talks and case studies on high-performing executives. If you’re aiming to master data analysis, it might point you toward hands-on coding exercises, interactive tutorials and real-world datasets to practice with.
The beauty of AI-driven learning is that it’s adaptive. As you gain proficiency, your AI coach can shift its recommendations, challenge you with new concepts and even simulate real-world scenarios to deepen your understanding.
Ironically, the very technology feared by workers can also be leveraged to help them. Rather than requiring expensive external training programs or lengthy in-person workshops, AI agents can deliver personalized, on-demand learning paths tailored to each employee’s role, skill level, and career aspirations. Given that 68% of employees find today’s workplace training to be overly “one-size-fits-all,” an AI-driven approach will not only cut costs and save time but will be more effective.
This is one reason why I don't see AI-embedded classrooms and AI-free classrooms as opposite poles. The bone of contention here is not whether we can cultivate AI-free moments in the classroom, but how long those moments are actually sustainable.
Can we sustain those AI-free moments for an hour? A class session? Longer?
…
Here’s what I think will happen. As AI becomes embedded in society at large, the sustainability of imposed AI-free learning spaces will get tested. Hard. I think it’ll become more and more difficult (though maybe not impossible) to impose AI-free learning spaces on students.
However, consensual and hybrid AI-free learning spaces will continue to have a lot of value. I can imagine classes where students opt into an AI-free space. Or they’ll even create and maintain those spaces.
Duolingo’s AI Revolution — from drphilippahardman.substack.com by Dr. Philippa Hardman What 148 AI-Generated Courses Tell Us About the Future of Instructional Design & Human Learning
Last week, Duolingo announced an unprecedented expansion: 148 new language courses created using generative AI, effectively doubling their content library in just one year. This represents a seismic shift in how learning content is created — a process that previously took the company 12 years for their first 100 courses.
As CEO Luis von Ahn stated in the announcement, “This is a great example of how generative AI can directly benefit our learners… allowing us to scale at unprecedented speed and quality.”
In this week’s blog, I’ll dissect exactly how Duolingo has reimagined instructional design through AI, what this means for the learner experience, and most importantly, what it tells us about the future of our profession.
Medical education is experiencing a quiet revolution—one that’s not taking place in lecture theatres or textbooks, but with headsets and holograms. At the heart of this revolution are Mixed Reality (MR) AI Agents, a new generation of devices that combine the immersive depth of mixed reality with the flexibility of artificial intelligence. These technologies are not mere flashy gadgets; they’re revolutionising the way medical students interact with complicated content, rehearse clinical skills, and prepare for real-world situations. By combining digital simulations with the physical world, MR AI Agents are redefining what it means to learn medicine in the 21st century.
4 Reasons To Use Claude AI to Teach — from techlearning.com by Erik Ofgang Features that make Claude AI appealing to educators include a focus on privacy and conversational style.
After experimenting with Claude AI on various teaching exercises, from generating quizzes to tutoring to offering writing suggestions, I found that it's not perfect, but it compares favorably with other AI tools overall, with an easy-to-use interface and some unique features that make it particularly well suited to education.
powerless to fight the technology that we pioneered
nostalgic for a world that moved on without us
after decades of paying our dues for a payday that never came
…so yeah, not exactly fine.
The Gen X Career Meltdown — from nytimes.com by Steven Kurutz (DSC: This is a gifted article for you) Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.
If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.
“I am having conversations every day with people whose careers are sort of over,” said Chris Wilcha, a 53-year-old film and TV director in Los Angeles.
Talk with people in their late 40s and 50s who once imagined they would be able to achieve great heights — or at least a solid career while flexing their creative muscles — and you are likely to hear about the photographer whose work dried up, the designer who can’t get hired or the magazine journalist who isn’t doing much of anything.
In the wake of the influencers comes another threat, artificial intelligence, which seems likely to replace many of the remaining Gen X copywriters, photographers and designers. By 2030, ad agencies in the United States will lose 32,000 jobs, or 7.5 percent of the industry’s work force, to the technology, according to the research firm Forrester.
From DSC: This article reminds me of how tough it is to navigate change in our lives. For me, that was often because I was working with technologies. Being a technologist can be difficult, especially as one gets older and faces age discrimination in a variety of industries. You need to pick the right technologies and the directions that will last (for me it was email, videoconferencing, the Internet, online-based education/training, discovering/implementing instructional technologies, and becoming a futurist).
For you younger folks out there — especially students within K-16 — aim to develop a perspective and a skillset that is all about adapting to change. You will likely need to reinvent yourself and/or pick up new skills over your working years. You are most assuredly required to be a lifelong learner now. That’s why I have been pushing for school systems to be more concerned with providing more choice and control to students — so that students actually like school and enjoy learning about new things.