Kling 3.0 just launched. The best video model yet. — from heatherbcooper.substack.com by Heather Cooper
& workflows from Imagine Art 1.5 Pro, PixVerse Real-Time Video & Genspark

In today’s edition:

  • Kling 3.0: Everyone a Director
  • Character consistency, native audio, 15-second generations & first results
  • Image & Video Prompts
  • Imagine Art 1.5 Pro, Genspark AI Workspace 2.0 & PixVerse Real-Time Video Workflows

Kling 3.0: Everyone a Director
Kling just dropped version 3.0, and it’s a legitimate leap forward for AI video production (Kling is the GOAT). After spending early access time testing the new capabilities, I can confirm this is the most significant update to video generation tools I’ve seen in months.

Key highlights:

  • Character & Element Consistency:
  • Flexible Video Production:
  • Native Audio with Dialogue & Singing:
  • Enhanced Image Generation:
  • Professional Output:
 

Which AI Video Tool Is Most Powerful for L&D Teams? — by Dr. Philippa Hardman
Evaluating four popular AI video generation platforms through a learning-science lens

Happy new year! One of the biggest L&D stories of 2025 was the rise of AI video generator tools among L&D teams. As we head into 2026, platforms like Colossyan, Synthesia, HeyGen, and NotebookLM’s video creation feature are firmly embedded in most L&D tech stacks. These tools promise rapid production and multi-language output at significantly reduced costs, and they deliver on a lot of that.

But something has been playing on my mind: we rarely evaluate these tools on what matters most for learning design, namely whether they help us build instructional content that actually enables learning.

So, I spent some time over the holiday digging into this question: do the AI video tools we use most in L&D create content that supports substantive learning?

To answer it, I took two decades of learning science research and translated it into a scoring rubric. Then I scored the four most popular AI video generation platforms among L&D professionals against the rubric.


For an AI-based tool or two as they relate to higher ed, see:

5 new tools worth trying — from wondertools.substack.com by Jeremy Caplan

YouTube to NotebookLM: Import a Whole Playlist or Channel in One Click
YouTube to NotebookLM is a remarkably useful new Chrome extension that lets you bulk-add any YouTube playlists, channels, or search results into NotebookLM for AI-powered analysis.

What to try

  • Find or create YouTube playlists on topics of interest. Then use this extension to ingest those playlists into NotebookLM. The videos are automatically indexed, and within minutes you can create reports, slides, and infographics to enhance your learning.
  • Summarize a playlist or channel with an audio or video overview. Or create quizzes, flash cards, data tables, or mind maps to explore a batch of YouTube videos. Or have a chat in NotebookLM with your favorite video channel. Check my recent post for some YouTube channels to try.
 

What AI-Generated Voice Technology Means For Creators And Brands — from bitrebels.com by Ryan Mitchell

Voice has become one of the most influential elements in how digital content is experienced. From podcasts and videos to apps, ads, and interactive platforms, spoken audio shapes how messages are understood and remembered. In recent years, the rise of the AI voice generator has changed how creators and brands approach audio production, lowering barriers while expanding creative possibilities.

Rather than relying exclusively on traditional voice recording, many teams now use AI-generated voices as part of their content and brand strategies. This shift is not simply about efficiency; it reflects broader changes in how digital experiences are produced, scaled, and personalised.

The Future Role Of AI-Generated Voice
As AI voice technology continues to improve, its role in creative and brand workflows will likely expand. Future developments may include more adaptive voices that respond to context, audience behaviour, or emotional cues in real time. Rather than replacing traditional voice work, AI-generated voice is becoming another option in a broader creative toolkit, one that offers speed, flexibility, and accessibility.

 

6 Ed Tech Tools to Try in 2026 — from cultofpedagogy.com by Jennifer Gonzalez

It’s that time again ~ the annual round-up of tech tools we think are worth a look this year. This year I really feel like there’s something for everyone: history teachers, math and science teachers, people who run makerspaces, teachers interested in music or podcasting, writing teachers, special ed teachers, and anyone whose course content could be made clearer through graphic organizers.


Also somewhat relevant here, see:


 

 

Making the case for arts and humanities — from timeshighereducation.com by campus contributors, Eliza Compton
The arts and humanities are often dismissed as an unaffordable luxury, when these disciplines underpin vital human skills such as critical thinking, creativity and communication. This collection explores the many ways in which arts and humanities can be harnessed for the benefit of all: students, universities and wider society.

Yet, amid the threat of AI-driven automation in the workforce, fierce competition for entry-level jobs, and complex global problems such as climate change, the skills that humanities disciplines are built upon are vital. These skills – such as critical thinking, communication and creativity – are also key to universities’ capacity to share knowledge with industry, policymakers and the public. When it comes to understanding how high-tech solutions can best be applied in the real world, often the barriers are not technical but human, as low vaccine take-ups show.

These human skills are not unique to disciplines such as history, philosophy, literature, linguistics, performance and visual arts, of course. The need for deep thinking and analysis across all areas of academic enquiry is embedded in interdisciplinarity and STEAM initiatives, which integrate science, technology, mathematics and engineering with arts and humanities.

At their core, the arts and humanities interrogate what makes us human and how we understand and communicate with the world. In this collection, contributors from around the globe articulate the value that these disciplines bring to students, industry, government and society, when taught and designed effectively. The collection also considers how arts-based research can drive discovery, the role of interdisciplinarity in teaching and research, and how humanities-led expertise supports sustainability and inclusion.

 

 
 

12 Photographer Portfolios Packed With Ideas and Inspiration — from booooooom.com



Speaking of photography, also see:

Photographer Spotlight: Pelle Cass — from booooooom.com

 

 

Field Kallop Meditates on Universal Patterns Through Bold Chromatic Compositions — from thisiscolossal.com by Field Kallop and Grace Ebert

 

Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six evidence-based use cases to try in Google’s latest image-generating AI tool

While it’s true that Nano Banana generates better infographics than other AI models, the conversation has so far massively under-sold what’s actually different and valuable about this tool for those of us who design learning experiences.

What this means for our workflow:

Instead of the traditional “commission → wait → tweak → approve → repeat” cycle, Nano Banana enables an iterative, rapid-cycle design process where you can:

  • Sketch an idea and see it refined in minutes.
  • Test multiple visual metaphors for the same concept without re-briefing a designer.
  • Build 10-image storyboards with perfect consistency by specifying the constraints once, not manually editing each frame.
  • Implement evidence-based strategies (contrasting cases, worked examples, observational learning) that are usually too labour-intensive to produce at scale.

This shift—from “image generation as decoration” to “image generation as instructional scaffolding”—is what makes Nano Banana uniquely useful for the 10 evidence-based strategies below.

 


 


 



“Whither Rivers Flow” by Photographer Ximeng Tu — from booooooom.com by Ximeng Tu


Zaha Hadid Architects completes waterfront stadium and sports centre in Guangzhou — from dezeen.com by Amy Peacock

 

Simon Laveuve’s Scaled-Down Tableaux Reveal Post-Apocalyptic Lifestyles — from thisiscolossal.com by Simon Laveuve and Kate Mothes


Bringing High School Students and Kindergartners Together to Make Art — from edutopia.org by Cory Desmond
A look at how teachers can have students collaborate across grades on an art project that promotes creativity and teamwork.

What happens when high school students and kindergartners collaborate? Art. Innovation. Growth. And so much more.

Inspired by illustrator Mica Angela Hendricks’s collaborations with her 4-year-old daughter—in which Hendricks would begin by drawing a portrait and then have her daughter add to it—I formalized the concept into an inter-grade art lesson. It’s a replicable, three-stage project based on vertical collaboration. This model bridges the creative and social gap between students, weaving together technical skill and imagination through methods based in social and emotional learning (SEL).

It operates by passing a structured project back and forth, compelling older students to engage with empathy, relationship maintenance, and responsible decision-making. Simultaneously, it empowers younger students, giving them significant creative autonomy through their own responsible choices. By breaking down the separation between age groups, cross-grade collaborations cultivate essential skills in ways that isolated classrooms typically can’t.

In this article, I’ll provide a flexible framework for vertical collaboration—a blueprint that teachers can adapt for their own cross-grade collaborations.

 

Free Music Discovery Tools — from wondertools.substack.com by Jeremy Caplan and Chris Dalla Riva
Travel through time and around the world with sound

I love apps like Metronaut and Tomplay, which let me carry a collection of classical (sheet) music on my phone. They also provide piano or orchestral accompaniment for any violin piece I want to play.

Today’s post shares 10 other recommended tools for music lovers from my fellow writer and friend, Chris Dalla Riva, who writes Can’t Get Much Higher, a popular Substack focused on the intersection of music and data. I invited Chris to share with you his favorite resources for discovering, learning, and creating music.

Sections include:

  • Learn about Music
  • Discover New Music
  • Learn an Instrument
  • Tools for Artists
 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds; and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 




BIG unveils Suzhou Museum of Contemporary Art topped with ribbon-like roof — from dezeen.com by Christina Yao

Also from Dezeen:

MVRDV designs giant sphere for sports arena in Tirana — from dezeen.com by Starr Charles



 
© 2025 | Daniel Christian