Free Music Discovery Tools — from wondertools.substack.com by Jeremy Caplan and Chris Dalla Riva
Travel through time and around the world with sound

I love apps like Metronaut and Tomplay, which let me carry a collection of classical sheet music on my phone. They also provide piano or orchestral accompaniment for any violin piece I want to play.

Today’s post shares 10 other recommended tools for music lovers from my fellow writer and friend, Chris Dalla Riva, who writes Can’t Get Much Higher, a popular Substack focused on the intersection of music and data. I invited Chris to share with you his favorite resources for discovering, learning, and creating music.

Sections include:

  • Learn about Music
  • Discover New Music
  • Learn an Instrument
  • Tools for Artists
 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from The Washington Post by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems there is growing concern among those who pushed very hard to release as much AI as possible — they are NOW worried. Only NOW are they stepping back and seeing that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive potential and downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities.

 

Adobe Reinvents its Entire Creative Suite with AI Co-Pilots, Custom Models, and a New Open Platform — from theneuron.ai by Grant Harvey
Adobe just put an AI co-pilot in every one of its apps, letting you chat with Photoshop, train models on your own style, and generate entire videos with a single subscription that now includes top models from Google, Runway, and Pika.

Adobe came to play, y’all.

At Adobe MAX 2025 in Los Angeles, the company dropped an entire creative AI ecosystem that touches every single part of the creative workflow. In our opinion, these new features aren’t about replacing creators; they’re about empowering them with superpowers they can actually control.

Adobe’s new plan is to put an AI co-pilot in every single app.

  • For professionals, the game-changer is Firefly Custom Models. Start training one now to create a consistent, on-brand look for all your assets.
  • For everyday creators, the AI Assistants in Photoshop and Express will drastically speed up your workflow.
  • The best place to start is the Photoshop AI Assistant (currently in private beta), which offers a powerful glimpse into the future of creative software—a future where you’re less of a button-pusher and more of a creative director.

Adobe MAX Day 2: The Storyteller Is Still King, But AI Is Their New Superpower — from theneuron.ai by Grant Harvey
Adobe’s Day 2 keynote showcased a suite of AI-powered creative tools designed to accelerate workflows, but the real message from creators like Mark Rober and James Gunn was clear: technology serves the story, not the other way around.

On the second day of its annual MAX conference, Adobe drove home a message that has been echoing through the creative industry for the past year: AI is not a replacement, but a partner. The keynote stage featured a powerful trio of modern storytellers—YouTube creator Brandon Baum, science educator and viral video wizard Mark Rober, and Hollywood director James Gunn—who each offered a unique perspective on a shared theme: technology is a powerful tool, but human instinct, hard work, and the timeless art of storytelling remain paramount.

From DSC:
As Grant mentioned, the demos dealt with ideation, image generation, video generation, audio generation, and editing.


Adobe Max 2025: all the latest creative tools and AI announcements — from theverge.com by Jess Weatherbed

The creative software giant is launching new generative AI tools that make digital voiceovers and custom soundtracks for videos, and adding AI assistants to Express and Photoshop for web that edit entire projects using descriptive prompts. And that’s just the start, because Adobe is planning to eventually bring AI assistants to all of its design apps.


Also see Adobe Delivers New AI Innovations, Assistants and Models Across Creative Cloud to Empower Creative Professionals, plus other items from Adobe’s News section.


 

 

“A new L&D operating system for the AI Era?” [Hardman] + other items re: AI in our learning ecosystems

From 70/20/10 to 90/10 — from drphilippahardman.substack.com by Dr Philippa Hardman
A new L&D operating system for the AI Era?

This week I want to share a hypothesis I’m increasingly convinced of: that we are entering an age of the 90/10 model of L&D.

90/10 is a model where roughly 90% of “training” is delivered by AI coaches as daily performance support, and 10% of training is dedicated to developing complex and critical skills via high-touch, human-led learning experiences.

Proponents of 90/10 argue that the model isn’t about learning less, but about learning smarter by defining all jobs to be done as one of the following:

  • Delegate (the dead skills): Tasks that can be offloaded to AI.
  • Co-Create (the 90%): Tasks which well-defined AI agents can augment and help humans to perform optimally.
  • Facilitate (the 10%): Tasks which require high-touch, human-led learning to develop.

So if AI at work is now both real and material, the natural question for L&D is: how do we design for it? The short answer is to stop treating learning as an event and start treating it as a system.



My daughter’s generation expects to learn with AI, not pretend it doesn’t exist, because they know employers expect AI fluency and because AI will be ever-present in their adult lives.

— Jenny Maxell

The above quote was taken from this posting.


Unlocking Young Minds: How Gamified AI Learning Tools Inspire Fun, Personalized, and Powerful Education for Children in 2025 — from techgenyz.com by Sreyashi Bhattacharya

Highlights

  • Gamified AI Learning Tools personalize education by adapting the difficulty and content to each child’s pace, fostering confidence and mastery.
  • Engaging & Fun: Gamified elements like quests, badges, and stories keep children motivated and enthusiastic.
  • Safe & Inclusive: Attention to equity, privacy, and cultural context ensures responsible and accessible learning.

How to test GenAI’s impact on learning — from timeshighereducation.com by Thibault Schrepel
Rather than speculate on GenAI’s promise or peril, Thibault Schrepel suggests simple teaching experiments to uncover its actual effects

Generative AI in higher education is a source of both fear and hype. Some predict the end of memory, others a revolution in personalised learning. My two-year classroom experiment points to a more modest reality: Artificial intelligence (AI) changes some skills, leaves others untouched and forces us to rethink the balance.

This indicates that the way forward is to test, not speculate. My results may not match yours, and that is precisely the point. Here are simple activities any teacher can use to see what AI really does in their own classroom.

4. Turn AI into a Socratic partner
Instead of being the sole interrogator, let AI play the role of tutor, client or judge. Have students use AI to question them, simulate cross-examination or push back on weak arguments. New “study modes” now built into several foundation models make this kind of tutoring easy to set up. Professors with more technical skills can go further: design their own GPTs or fine-tuned models trained on course content and let students interact directly with them. The point is the practice it creates. Students learn that questioning a machine is part of learning to think like a professional.
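
For readers who want to try the more technical route described above, here is a minimal sketch of a Socratic-tutor chat loop in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name, system prompt, and course topic are illustrative assumptions, not taken from the article.

```python
# A minimal Socratic-tutor sketch, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The prompt, model name, and topic are illustrative only.
from openai import OpenAI

client = OpenAI()

SOCRATIC_PROMPT = (
    "You are a Socratic tutor for an undergraduate course. "
    "Never give the answer directly. Ask one probing question at a time, "
    "challenge weak arguments, and ask the student to justify each claim."
)

def socratic_session(topic: str) -> None:
    # The conversation history starts with the tutoring instructions.
    messages = [
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": f"Question me on: {topic}"},
    ]
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model would do
            messages=messages,
        ).choices[0].message.content
        print(f"\nTutor: {reply}")
        messages.append({"role": "assistant", "content": reply})

        answer = input("\nYou (blank line to stop): ").strip()
        if not answer:
            break
        messages.append({"role": "user", "content": answer})

if __name__ == "__main__":
    socratic_session("the elements of a binding contract")
```

A custom GPT or a fine-tuned model trained on course content could slot into the same loop; only the model reference and system prompt would change.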


Assessment tasks that support human skills — from timeshighereducation.com by Amir Ghapanchi and Afrooz Purarjomandlangrudi
Assignments that focus on exploration, analysis and authenticity offer a road map for university assessment that incorporates AI while retaining its rigour and human elements

Rethinking traditional formats

1. From essay to exploration 
When ChatGPT can generate competent academic essays in seconds, the traditional format’s dominance looks less secure as an assessment task. The future lies in moving from essays as knowledge reproduction to assessments that emphasise exploration and curation. Instead of asking students to write about a topic, challenge them to use artificial intelligence to explore multiple perspectives, compare outputs and critically evaluate what emerges.

Example: A management student asks an AI tool to generate several risk plans, then critiques the AI’s assumptions and identifies missing risks.


What your students are thinking about artificial intelligence — from timeshighereducation.com by Florencia Moore and Agostina Arbia
GenAI has been quickly adopted by students, but the consequences of using it as a shortcut could be grave. A study into how students think about and use GenAI offers insights into how teaching might adapt

However, when asked how AI negatively impacts their academic development, 29 per cent noted a “weakening or deterioration of intellectual abilities due to AI overuse”. The main concern cited was the loss of “mental exercise” and soft skills such as writing, creativity and reasoning.

The boundary between the human and the artificial does not seem so easy to draw, but as the poet Antonio Machado once said: “Traveller, there is no path; the path is made by walking.”


Jelly Beans for Grapes: How AI Can Erode Students’ Creativity — from edsurge.com by Thomas David Moore

There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them, they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”


Deloitte will make Claude available to 470,000 people across its global network — from anthropic.com

As part of the collaboration, Deloitte will establish a Claude Center of Excellence with trained specialists who will develop implementation frameworks, share leading practices across deployments, and provide ongoing technical support to create the systems needed to move AI pilots to production at scale. The collaboration represents Anthropic’s largest enterprise AI deployment to date, available to more than 470,000 Deloitte people.

Deloitte and Anthropic are co-creating a formal certification program to train and certify 15,000 of its professionals on Claude. These practitioners will help support Claude implementations across Deloitte’s network and Deloitte’s internal AI transformation efforts.


How AI Agents are finally delivering on the promise of Everboarding: driving retention when it counts most — from premierconstructionnews.com

Everboarding flips this model. Rather than ending after orientation, everboarding provides ongoing, role-specific training and support throughout the employee journey. It adapts to evolving responsibilities, reinforces standards, and helps workers grow into new roles. For high-turnover, high-pressure environments like retail, it’s a practical solution to a persistent challenge.

AI agents will be instrumental in the success of everboarding initiatives; they can provide a much more tailored training and development process for each individual employee, keeping track of which training modules may need to be completed, or where staff members need or want to develop further. This personalisation not only helps staff feel more satisfied in their current role but also guides them on the right path to progress in their individual careers.
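
To make the tracking idea concrete, here is a small, hypothetical sketch of the module-tracking piece such an agent might build on. The roles, modules, and employee data below are illustrative assumptions, not taken from the article.

```python
# A hypothetical sketch of "everboarding" module tracking: given the modules
# required for each role and what an employee has completed, flag what is
# still outstanding. All names and data are illustrative.
from dataclasses import dataclass, field

ROLE_MODULES = {
    "store_associate": {"food_safety", "till_basics", "returns_policy"},
    "shift_lead": {"food_safety", "till_basics", "returns_policy",
                   "rota_planning", "incident_reporting"},
}

@dataclass
class Employee:
    name: str
    role: str
    completed: set = field(default_factory=set)

def outstanding_modules(emp: Employee) -> set:
    """Return the training modules the employee still needs for their role."""
    return ROLE_MODULES.get(emp.role, set()) - emp.completed

if __name__ == "__main__":
    maria = Employee("Maria", "shift_lead",
                     completed={"food_safety", "till_basics"})
    print(f"{maria.name} still needs: {sorted(outstanding_modules(maria))}")
    # An agent could pass this list to an LLM to draft a personalised nudge
    # or to schedule bite-sized modules during quiet moments on shift.
```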

Digital frontline apps are also ideal for everboarding. They offer bite-sized training that staff can complete anytime, whether during quiet moments on shift or in real time on the job, all accessible from their mobile devices.


TeachLM: insights from a new LLM fine-tuned for teaching & learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six key takeaways, including what the research tells us about how well AI performs as an instructional designer

As I and many others have pointed out in recent months, LLMs are great assistants but very ineffective teachers. Despite the rise of “educational LLMs” with specialised modes (e.g. Anthropic’s Learning Mode, OpenAI’s Study Mode, Google’s Guided Learning) AI typically eliminates the productive struggle, open exploration and natural dialogue that are fundamental to learning.

This week, Polygence, in collaboration with Stanford University researcher Prof Dora Demszky, published first-of-its-kind research on a new model — TeachLM — built to address this gap.

In this week’s blog post, I dive deep into what the research found and share the six key findings — including reflections on how well TeachLM performs on instructional design.


The Dangers of using AI to Grade — from marcwatkins.substack.com by Marc Watkins
Nobody Learns, Nobody Gains

AI as an assessment tool represents an existential threat to education because no matter how you try and establish guardrails or best practices around how it is employed, using the technology in place of an educator ultimately cedes human judgment to a machine-based process. It also devalues the entire enterprise of education and creates a situation where the only way universities can add value to education is by further eliminating costly human labor.

For me, the purpose of higher education is about human development, critical thinking, and the transformative experience of having your ideas taken seriously by another human being. That’s not something we should be in a rush to outsource to a machine.

 

Sam Altman kicks off DevDay 2025 with a keynote to explore ideas that will challenge how you think about building. Join us for announcements, live demos, and a vision of how developers are reshaping the future with AI.

Commentary from The Rundown AI:

Why it matters: OpenAI is turning ChatGPT into a do-it-all platform that might eventually act like a browser in itself, with users simply calling on the website/app they need and interacting directly within a conversation instead of navigating manually. AgentKit will also compete with and disrupt competitors like Zapier, n8n, Lindy, and others.


AMD and OpenAI announce strategic partnership to deploy 6 gigawatts of AMD GPUs — from openai.com

  • OpenAI to deploy 6 gigawatts of AMD GPUs based on a multi-year, multi-generation agreement
  • Initial 1 gigawatt OpenAI deployment of AMD Instinct™ MI450 Series GPUs starting in 2H 2026

Thoughts from OpenAI DevDay — from bensbites.com by Ben Tossell
When everyone becomes a developer

The event itself was phenomenal, great organisation. In terms of releases, there were two big themes:

  1. Add your apps to ChatGPT
  2. Add ChatGPT to your apps

Everything OpenAI announced at DevDay 2025 — from theaivalley.com by Barsee
PLUS: OpenAI has signed $1T in compute deals

Today’s climb through the Valley reveals:

  • Everything OpenAI announced at DevDay 2025
  • OpenAI has signed $1T in compute deals
  • Plus trending AI tools, posts, and resources




 

AI agents: Where are they now? From proof of concept to success stories — from hrexecutive.com by Jill Barth

The 4 Rs framework
Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:

  1. Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
  2. Reskilling should focus on learning future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
  3. Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
  4. Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”

Synthetic Reality Unleashed: AI’s powerful Impact on the Future of Journalism — from techgenyz.com by Sreyashi Bhattacharya

Table of Contents

  • Highlights
  • What is “synthetic news”?
  • Examples in action
  • Why are newsrooms experimenting with synthetic tools
  • Challenges and Risks
  • What does the research say
    • Transparency seems to matter
  • What is next: trends & future
  • Conclusion

The latest video generation tool from OpenAI –> Sora 2

Sora 2 is here — from openai.com

Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.

There’s also a video on this out at YouTube.

Per The Rundown AI:

The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.

Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.


OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from theneuron.ai by Grant Harvey
OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.

What Sora 2 can do

  • Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
  • Follow intricate multi-shot instructions while maintaining world state across scenes.
  • Create realistic background soundscapes, dialogue, and sound effects automatically.
  • Insert YOU into any video after a quick one-time recording (they call this “cameos”).

The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, which goes behind the scenes of Sora 2’s launch day…


Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg
OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips

Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.


Also along the lines of creating digital video, see:

What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.

The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.

For creators, this means the skill ceiling is no longer defined by technical know-how, it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.

Bilawal Sidhu


OpenAI DevDay 2025: everything you need to know — from getsuperintel.com by Kim “Chubby” Isenberg
Apps Inside ChatGPT, a New Era Unfolds

Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product. With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.

 
 

GRCC students to use AI to help businesses solve ‘real world’ challenges in new course — from mlive.com by Brian McVicar; via Patrick Bailey on LinkedIn

GRAND RAPIDS, MI — A new course at Grand Rapids Community College aims to help students learn about artificial intelligence by using the technology to solve real-world business problems.

In a release, the college said its grant application was supported by 20 local businesses, including Gentex, TwistThink and the Grand Rapids Public Museum. The businesses have pledged to work with students who will use business data to develop an AI project such as a chatbot that interacts with customers, or a program that automates social media posts or summarizes customer data.

“This rapidly emerging technology can transform the way businesses process data and information,” Kristi Haik, dean of GRCC’s School of Science, Technology, Engineering and Mathematics, said in a statement. “We want to help our local business partners understand and apply the technology. We also want to create real experiences for our students so they enter the workforce with demonstrated competence in AI applications.”

As Patrick Bailey said on LinkedIn about this article:

Nice to see a pedagogy that’s setting a forward movement rather than focusing on what could go wrong with AI in a curriculum.


Forecast for Learning and Earning in 2025-2026 report — from pages.asugsvsummit.com by Jennifer Lee and Claire Zau

In this look ahead at the future of learning and work, we aim to define:

  • Major thematic observations
  • What makes this moment an inflection point
  • Key predictions (and their precedent)
  • Short- and long-term projected impacts


The LMS at 30: From Course Management to Learning Management (At Last) — from onedtech.philhillaa.com; a guest post from Matthew Pittinsky, Ph.D.

As a 30 year observer and participant, it seems to me that previous technology platform shifts like SaaS and mobile did not fundamentally change the LMS. AI is different. We’re standing at the precipice of LMS 2.0, where the branding change from Course Management System to Learning Management System will finally live up to its name. Unlike SaaS or mobile, AI represents a technology platform shift that will transform the way participants interact with learning systems – and with it, the nature of the LMS itself.

Given the transformational potential of AI, it is useful to set the context and think about how we got here, especially on this 30th anniversary of the LMS.

LMS at 30 Part 2: Learning Management in the AI Era — from onedtech.philhillaa.com; a guest post from Matthew Pittinsky, Ph.D.

Where AI is disruptive is in its ability to introduce a whole new set of capabilities that are best described as personalized learning services. AI offers a new value proposition to the LMS, roughly the set of capabilities currently being developed in the AI Tutor / agentic TA segment. These new capabilities are so valuable given their impact on learning that I predict they will become the services with greatest engagement within a school or university’s “enterprise” instructional platform.

In this way, by LMS paradigm shift, I specifically mean a shift from buyers valuing the product on its course-centric and course management capabilities, to valuing it on its learner-centric and personalized learning capabilities.


AI and the future of education: disruptions, dilemmas and directions — from unesdoc.unesco.org

This anthology reveals how the integration of AI in education poses profound philosophical, pedagogical, ethical and political questions. As this global AI ecosystem evolves and becomes increasingly ubiquitous, UNESCO and its partners have a shared responsibility to lead the global discourse towards an equity- and justice-centred agenda. The volume highlights three areas in which UNESCO will continue to convene and lead a global commons for dialogue and action, particularly on AI futures, policy and practice innovation, and experimentation:

  1. As guardian of ethical, equitable human-centred AI in education.
  2. As thought leader in reimagining curriculum and pedagogy
  3. As a platform for engaging pluralistic and contested dialogues

AI, copyright and the classroom: what higher education needs to know — from timeshighereducation.com by Cayce Myers
As artificial intelligence reshapes teaching and research, one legal principle remains at the heart of our work: copyright. Understanding its implications isn’t just about compliance – it’s about protecting academic integrity, intellectual property and the future of knowledge creation. Cayce Myers explains


The School Year We Finally Notice “The Change” — from americanstogether.substack.com by Jason Palmer

Why It Matters
A decade from now, we won’t say “AI changed schools.” We’ll say: this was the year schools began to change what it means to be human, augmented by AI.

This transformation isn’t about efficiency alone. It’s about dignity, creativity, and discovery, and connecting education more directly to human flourishing. The industrial age gave us schools to produce cookie-cutter workers. The digital age gave us knowledge anywhere, anytime. The AI age—beginning now—gives us back what matters most: the chance for every learner to become infinitely capable.

This fall may look like any other—bells ringing, rows of desks—but beneath the surface, education has begun its greatest transformation since the one-room schoolhouse.


How should universities teach leadership now that teams include humans and autonomous AI agents? — from timeshighereducation.com by Alex Zarifis
Trust and leadership style are emerging as key aspects of teambuilding in the age of AI. Here are ways to integrate these considerations with technology in teaching

Transactional and transformational leaderships’ combined impact on AI and trust
Given the volatile times we live in, a leader may find themselves in a situation where they know how they will use AI, but they are not entirely clear on the goals and journey. In a teaching context, students can be given scenarios where they must lead a team, including autonomous AI agents, to achieve goals. They can then analyse the situations and decide what leadership styles to apply and how to build trust in their human team members. Educators can illustrate this decision-making process using a table (see above).

They may need to combine transactional leadership with transformational leadership, for example. Transactional leadership focuses on planning, communicating tasks clearly and an exchange of value. This works well with both humans and automated AI agents.

 

Introducing the 2025 State of the L&D Industry Report — from community.elearningacademy.io

What’s changing is not the foundation—it’s the ecosystem. Teams are looking to create more flexible, scalable, and diverse learning experiences that meet people where they are.

What Did We Explore?
Everyone seems to have a take on what’s happening in L&D these days, from bold claims about six-figure roles to debates over whether portfolios or degrees matter more. So, we wanted to get to the heart of it by exploring five of the biggest, most debated areas shaping our work today:

  • Salaries: Are compensation trends really keeping pace with the value we deliver?
  • Hiring: What skills are managers actually looking for—and are those ATS horror stories true?
  • Portfolios: Are portfolios helping candidates stand out, and what are hiring managers actually looking for?
  • Tools & Modalities: What types of training are teams building, and what tools are they using to build it?
  • Artificial Intelligence: Who’s using it, how, and what concerns still exist?

These five areas are shaping the future of instructional design—not just for job seekers, but for team leaders, hiring managers, and the entire ecosystem of L&D professionals.

The takeaway? A portfolio is more than a collection of projects—it’s a storytelling tool. The ones that stand out highlight process, decision-making, and results—not just pretty screens.

 

 

CrashCourse on YouTube — via Matt Tower’s The EdSheet Vol. 18

Description:
At Crash Course, we believe that high-quality educational videos should be available to everyone for free! Subscribe for weekly videos from our current courses! The Crash Course team has produced more than 50 courses on a wide variety of subjects, ranging from the humanities to sciences and so much more! We also recently teamed up with Arizona State University to bring you more courses on the Study Hall channel.



From DSC:
I wasn’t familiar with this “channel” — but I like their mission to help people learn… very inexpensively! Along these lines, I, too, pray for the world’s learning ecosystems — especially those belonging to children.


 

The Top 100 [Gen AI] Consumer Apps 5th edition — from a16z.com


And in an interesting move by Microsoft and Samsung:

A smarter way to talk to your TV: Microsoft Copilot launches on Samsung TVs and monitors — from microsoft.com

Voice-powered AI meets a visual companion for entertainment, everyday help, and everything in between. 

Redmond, Wash., August 27—Today, we’re announcing the launch of Copilot on select Samsung TVs and monitors, transforming the biggest screen in your home into your most personal and helpful companion—and it’s free to use.

Copilot makes your TV easier and more fun to use with its voice-powered interface, friendly on-screen character, and simple visual cards. Now you can quickly find what you’re looking for and discover new favorites right from your living room.

Because it lives on the biggest screen in the home, Copilot is a social experience—something you can use together with family and friends to spark conversations, help groups decide what to watch, and turn the TV into a shared space for curiosity and connection.

 

Firefly adds new video capabilities, industry leading AI models, and Generate Sound Effects feature — from blog.adobe.com

Today, we’re introducing powerful enhancements to our Firefly Video Model, including improved motion fidelity and advanced video controls that will accelerate your workflows and provide the precision and style you need to elevate your storytelling. We are also adding new generative AI partner models within Generate Video on Firefly, giving you the power to choose which model works best for your creative needs across image, video and sound.

Plus, our new workflow tools put you in control of your video’s composition and style. You can now layer in custom-generated sound effects right inside the Firefly web app — and start experimenting with AI-powered avatar-led videos.

Generate Sound Effects (beta)
Sound is a powerful storytelling tool that adds emotion and depth to your videos. Generate Sound Effects (beta) makes it easy to create custom sounds, like a lion’s roar or ambient nature sounds, that enhance your visuals. And like our other Firefly generative AI models, Generate Sound Effects (beta) is commercially safe, so you can create with confidence.

Just type a simple text prompt to generate the sound effect you need. Want even more control? Use your voice to guide the timing and intensity of the sound. Firefly listens to the energy and rhythm of your voice to place sound effects precisely where they belong — matching the action in your video with cinematic timing.

 

 

2025 EDUCAUSE Horizon Report | Teaching and Learning Edition — from library.educause.edu

Higher education is in a period of massive transformation and uncertainty. Not only are current events impacting how institutions operate, but technological advancements—particularly in AI and virtual reality—are reshaping how students engage with content, how cognition is understood, and how learning itself is documented and valued.

Our newly released 2025 EDUCAUSE Horizon Report | Teaching and Learning Edition captures the spirit of this transformation and how you can respond with confidence through the lens of emerging trends, key technologies and practices, and scenario-based foresight.

#teachingandlearning #highereducation #learningecosystems #learning #futurism #foresight #trends #emergingtechnologies #AI #VR #gamechangingenvironment #colleges #universities #communitycolleges #faculty #staff #IT

 






 
© 2025 | Daniel Christian