What AI-Generated Voice Technology Means For Creators And Brands — from bitrebels.com by Ryan Mitchell

Voice has become one of the most influential elements in how digital content is experienced. From podcasts and videos to apps, ads, and interactive platforms, spoken audio shapes how messages are understood and remembered. In recent years, the rise of the AI voice generator has changed how creators and brands approach audio production, lowering barriers while expanding creative possibilities.

Rather than relying exclusively on traditional voice recording, many teams now use AI-generated voices as part of their content and brand strategies. This shift is not simply about efficiency; it reflects broader changes in how digital experiences are produced, scaled, and personalised.

The Future Role Of AI-Generated Voice
As AI voice technology continues to improve, its role in creative and brand workflows will likely expand. Future developments may include more adaptive voices that respond to context, audience behaviour, or emotional cues in real time. Rather than replacing traditional voice work, AI-generated voice is becoming another option in a broader creative toolkit, one that offers speed, flexibility, and accessibility.
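For creators who want to move from experimenting in a web interface to scripting their audio pipeline, text-to-speech APIs make this workflow programmable. Below is a minimal sketch, assuming the elevenlabs Python SDK; the API key, voice ID, and model name are placeholders you would replace with values from your own account.

    # Minimal text-to-speech sketch, assuming the elevenlabs Python SDK.
    # The api_key, voice_id, and model_id values are placeholders.
    from elevenlabs.client import ElevenLabs

    client = ElevenLabs(api_key="YOUR_API_KEY")

    audio = client.text_to_speech.convert(
        voice_id="YOUR_VOICE_ID",           # a licensed or cloned voice
        model_id="eleven_multilingual_v2",  # example model name
        text="Welcome back to the show. Today: what AI-generated voice means for creators.",
    )

    # The SDK returns audio in chunks; write them out as an MP3 file.
    with open("voiceover.mp3", "wb") as f:
        for chunk in audio:
            f.write(chunk)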

 

Shoppers will soon be able to make purchases directly through Google’s Gemini app and browser.



Google and Walmart Join Forces to Shape the Future of Retail — from adweek.com by Lauren Johnson
At NRF, Sundar Pichai and John Furner revealed how AI and drones will shape shopping in 2026 and beyond

One of the biggest reveals is that shoppers will be able to purchase Walmart and Sam’s Club products through Google’s AI chatbot Gemini.


 

How Your Learners *Actually* Learn with AI — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 37.5 million AI chats show us about how learners use AI at the end of 2025 — and what this means for how we design & deliver learning experiences in 2026

Last week, Microsoft released a similar analysis of a whopping 37.5 million Copilot conversations. These conversations took place on the platform from January to September 2025, providing a window into whether and how AI use in general, and among learners specifically, has evolved in 2025.

Microsoft’s mass behavioural data gives us a detailed, global glimpse into what learners are actually doing across devices, times of day and contexts. The picture that emerges is pretty clear and largely consistent with what OpenAI told us back in the summer:

AI isn’t functioning primarily as an “answers machine”: the majority of us use AI as a tool to personalise and differentiate generic learning experiences and – ultimately – to augment human learning.

Let’s dive in!

Learners don’t “decide” to use AI anymore. They assume it’s there, like search, like spellcheck, like calculators. The question has shifted from “should I use this?” to “how do I use this effectively?”


8 AI Agents Every HR Leader Needs To Know In 2026 — from forbes.com by Bernard Marr

So where do you start? There are many agentic tools and platforms for AI tasks on the market, and the most effective approach is to focus on practical, high-impact workflows. Here, I’ll look at some of the most compelling use cases and provide an overview of the tools that can help you deliver tangible wins quickly.

Some of the strongest opportunities in HR include:

  • Workforce management: administering job satisfaction surveys, monitoring and tracking performance targets, scheduling interventions, and managing staff benefits, medical leave, and holiday entitlement.
  • Recruitment screening: automatically generating and posting job descriptions, filtering candidates, ranking applicants against defined criteria, identifying the strongest matches, and scheduling interviews (a simplified ranking sketch follows this list).
  • Employee onboarding: issuing new hires with contracts and paperwork, guiding them to onboarding and training resources, tracking compliance and completion rates, answering routine enquiries, and escalating complex cases to human HR specialists.
  • Training and development: identifying skills gaps, providing self-service access to upskilling and reskilling opportunities, creating personalized learning pathways aligned with roles and career goals, and tracking progress toward completion.
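To make the “ranking applicants against defined criteria” step concrete, here is a deliberately simplified, hypothetical Python sketch; a real screening agent would weigh far more signals and keep a human reviewer in the loop.

    # Hypothetical illustration only: scoring applicants against a fixed
    # set of criteria, the kind of step a recruitment-screening agent
    # automates before a human reviews the shortlist.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        skills: set
        years_experience: int

    CRITERIA = {"python", "sql", "stakeholder management"}
    MIN_YEARS = 3

    def score(applicant: Applicant) -> float:
        """Criteria coverage plus a small bonus for meeting the experience bar."""
        coverage = len(applicant.skills & CRITERIA) / len(CRITERIA)
        bonus = 0.1 if applicant.years_experience >= MIN_YEARS else 0.0
        return coverage + bonus

    applicants = [
        Applicant("A. Rivera", {"python", "sql"}, 4),
        Applicant("B. Chen", {"python", "sql", "stakeholder management"}, 2),
    ]

    for a in sorted(applicants, key=score, reverse=True):
        print(f"{a.name}: {score(a):.2f}")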

 

 
 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem is fragmenting, and the concept of “user sovereignty” over AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.


In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.




 

7 Legal Tech Trends That Will Reshape Every Business In 2026 — from forbes.com by Bernard Marr

Here are the trends that will matter most.

  1. AI Agents As Legal Assistants
  2. AI As A Driver Of Business Strategy
  3. Automation In Judicial Administration
  4. Always-On Compliance Monitoring
  5. Cybersecurity As An Essential Survival Tool
  6. Predictive Litigation
  7. Compliance As Part Of The Everyday Automation Fabric

According to the Thomson Reuters Future Of Professionals report, most experts already expect AI to transform their work within five years, with many viewing it as a positive force. The challenge now is clear: legal and compliance leaders must understand the tools reshaping their field and prepare their teams for a very different way of working in 2026.


Addendum on 12/17/25:

 

Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six evidence-based use cases to try in Google’s latest image-generating AI tool

While it’s true that Nano Banana generates better infographics than other AI models, the conversation has so far massively under-sold what’s actually different and valuable about this tool for those of us who design learning experiences.

What this means for our workflow:

Instead of the traditional “commission → wait → tweak → approve → repeat” cycle, Nano Banana enables an iterative, rapid-cycle design process where you can:

  • Sketch an idea and see it refined in minutes.
  • Test multiple visual metaphors for the same concept without re-briefing a designer.
  • Build 10-image storyboards with perfect consistency by specifying the constraints once, not manually editing each frame.
  • Implement evidence-based strategies (contrasting cases, worked examples, observational learning) that are usually too labour-intensive to produce at scale.

This shift—from “image generation as decoration” to “image generation as instructional scaffolding”—is what makes Nano Banana uniquely useful for the 10 evidence-based strategies below.
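For teams that want to script this rapid-cycle loop rather than work only in a chat interface, image generation is also exposed through the Gemini API. The snippet below is a minimal sketch assuming the google-genai Python SDK; the model name is an assumption and may not match the exact Nano Banana release you have access to.

    # Minimal sketch: generate an instructional image via the Gemini API.
    # Assumes the google-genai Python SDK; the model name below is an
    # assumption and may differ from the exact "Nano Banana" release.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    prompt = (
        "A clean, labelled diagram showing two contrasting worked examples "
        "of long division, side by side, for a grade 5 learner."
    )

    response = client.models.generate_content(
        model="gemini-2.5-flash-image",  # assumed model name
        contents=prompt,
    )

    # Generated images come back as inline data parts; save the first one.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open("worked_examples.png", "wb") as f:
                f.write(part.inline_data.data)
            break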

 


 


 

Agents, robots, and us: Skill partnerships in the age of AI — from mckinsey.com by Lareina Yee, Anu Madgavkar, Sven Smit, Alexis Krivkovich, Michael Chui, María Jesús Ramírez, and Diego Castresana
AI is expanding the productivity frontier. Realizing its benefits requires new skills and rethinking how people work together with intelligent machines.

At a glance

  • Work in the future will be a partnership between people, agents, and robots—all powered by AI. …
  • Most human skills will endure, though they will be applied differently. …
  • Our new Skill Change Index shows which skills will be most and least exposed to automation in the next five years….
  • Demand for AI fluency—the ability to use and manage AI tools—has grown sevenfold in two years…
  • By 2030, about $2.9 trillion of economic value could be unlocked in the United States…

Also related/see:



State of AI: December 2025 newsletter — from nathanbenaich.substack.com by Nathan Benaich
What you’ve got to know in AI from the last 4 weeks.

Welcome to the latest issue of the State of AI, an editorialized newsletter that covers the key developments in AI policy, research, industry, and start-ups over the last month.


 

4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out: along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools does and how it can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: Google is working on a new vision for textbooks that can be easily differentiated, building on the beautiful success of NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free on Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically as memory, providing valuable context for relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC:
This should be important as we look at learning-related applications for AI.
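From a learning-design perspective, the underlying pattern is simple: persist a few facts about the learner and inject them as context on every request. The sketch below is a generic, hypothetical illustration of that pattern, not Perplexity’s actual implementation.

    # Hypothetical sketch of the general "assistant memory" pattern:
    # store a few user preferences and prepend them to each prompt as
    # context. This is not Perplexity's actual implementation.
    import json
    from pathlib import Path

    MEMORY_FILE = Path("memory.json")

    def load_memory() -> dict:
        """Load remembered preferences, or start with an empty store."""
        if MEMORY_FILE.exists():
            return json.loads(MEMORY_FILE.read_text())
        return {}

    def remember(key: str, value: str) -> None:
        """Persist a single preference, e.g. remember('tone', 'concise')."""
        memory = load_memory()
        memory[key] = value
        MEMORY_FILE.write_text(json.dumps(memory, indent=2))

    def build_prompt(user_message: str) -> str:
        """Prepend remembered preferences so the model sees them as context."""
        context = "\n".join(f"- {k}: {v}" for k, v in load_memory().items())
        return f"Known learner preferences:\n{context}\n\nUser: {user_message}"

    remember("learning goal", "prepare for an intro statistics exam")
    print(build_prompt("Explain confidence intervals simply."))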


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 

Law Firm 2.0: A Trillion-Dollar Market Begins To Move — from abovethelaw.com by Ken Crutchfield
The test cases for Law Firm 2.0 are arriving faster than many expected.

A move to separate legal advice from other legal services that don’t require advice is a big shift that would ripple through established firms and also test regulatory boundaries.

The LegalTech Fund (TLTF) sees a $1 trillion opportunity to reinvent legal services through the convergence of technology, regulatory changes, and innovation. TLTF calls this movement Law Firm 2.0, and the fund believes a reinvention will pave the way for entirely new, tech-enabled models of legal service delivery.


From Paper to Platform: How LegalTech Is Revolutionizing the Practice of Law — from markets.financialcontent.com by AB Newswire

For decades, practicing law has been a business built on paper — contracts, case files, court documents, and floor-to-ceiling piles of precedent. But as technology transforms all aspects of modern-day business, law firms and in-house legal teams are transforming along with it. The development of LegalTech has turned what was previously a paper-driven, manpower-intensive profession into a data-driven digital web of collaboration and automation.

Conclusion: Building the Future of Law
The practice of law has always been about accuracy, precedent, and human beings. Technology doesn’t alter that — it magnifies it. The shift from paper to platform is about liberating lawyers from back-office tasks so they can concentrate on strategy, advocacy, and creativity.

By coupling intelligent automation with moral obligation, today’s firms are positioning the legal profession to become a more intelligent, responsive industry. LegalTech isn’t about automation for its own sake; it’s about empowering attorneys to practice at the speed of today’s business.


What Legal Can Learn from Other Industries’ AI Transformations — from jdsupra.com

Artificial intelligence has already redefined how industries like finance, healthcare, and supply chain operate — transforming once-manual processes into predictive, data-driven engines of efficiency.

Yet the legal industry, while increasingly open to innovation, still lags behind its peers in adopting automation at scale. As corporate legal departments face mounting pressure to do more with less, they have an opportunity to learn from how other sectors successfully integrated AI into their operations.

The message is clear: AI transformation doesn’t just change workflows — it changes what’s possible.


7 Legal Tech Trends To Watch In 2026 — from lexology.com


Small Language Models Are Changing Legal Tech: What That Means for Lawyers and Law Firms — from community.nasscom.in

The legal profession is at a turning point. Artificial intelligence tools are moving from novelty to everyday utility, and small language models, or SLMs, are a major reason why. For law firms and in-house legal teams that are balancing client confidentiality, tight budgets, and the need to move faster, SLMs offer a practical, high impact way to bring legal AI into routine practice. This article explains what SLMs are, why they matter to lawyers, where they fit in legal workflows, and how to adopt them responsibly.
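To get a feel for what “small” means in practice, a compact open-weight model can run entirely on local hardware, so confidential client material never leaves the machine. The sketch below assumes the Hugging Face transformers library and uses Phi-3-mini purely as one example of an SLM; any comparably sized open-weight model would work.

    # Minimal sketch: run a small language model locally with the Hugging
    # Face transformers library. Phi-3-mini is used here only as one
    # example of an SLM; substitute any comparably sized open-weight model.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="microsoft/Phi-3-mini-4k-instruct",
        device_map="auto",  # uses a GPU if one is available
    )

    prompt = (
        "Summarize in two sentences the key obligations a vendor typically "
        "takes on under a standard confidentiality clause."
    )

    result = generator(prompt, max_new_tokens=120, do_sample=False)
    print(result[0]["generated_text"])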


Legal AI startup draws new $50 million Blackstone investment, opens law firm — from reuters.com by Sara Merken

NEW YORK, Nov 20 (Reuters) – Asset manager Blackstone (BX.N) has invested $50 million in Norm Ai, a legal and compliance technology startup that also said on Thursday that it is launching an independent law firm that will offer “AI-native legal services.”

Lawyers at the new New York-based firm, Norm Law LLP, will use Norm Ai’s artificial intelligence technology to do legal work for Blackstone and other financial services clients, said Norm Ai founder and CEO John Nay.


Law School Toolbox Podcast Episode 531: What Law Students Should Know About New Legal Tech (w/Gabe Teninbaum) — from jdsupra.com

Today, Alison and Gabe Teninbaum — law professor and creator of SpacedRepetition.com — discuss how technology is rapidly transforming the legal profession, emphasizing the importance for law students and lawyers of developing technological competence and adapting to new tools and roles.


New York is the San Francisco of legal tech — from businessinsider.com by Melia Russell

  • Legal tech loves NYC.
  • To win the market, startups say they need to be where the law firms and corporate legal chiefs are.
  • Legora and Harvey are expanding their footprints in New York, as Clio hunts for office space.

Legal Tech Startups Expand in New York to Access Law Firms — from indexbox.io

Several legal technology startups are expanding their physical presence in New York City, according to a report from Legal tech NYC. The companies state that to win market share, they need to be located where major law firms and corporate legal departments are based.


Linklaters unveils 20-strong ‘AI lawyer’ team — from legalcheek.com by Legal Cheek

Magic Circle giant Linklaters has launched a team of 20 ‘AI Lawyers’ (yes, that is their actual job title) as it ramps up its commitment to artificial intelligence across its global offices.

The new cohort is a mix of external tech specialists and Linklaters lawyers who have decided to boost their legal expertise with advanced AI know-how. They will be placed into practice groups around the world to help build prompts, workflows and other tech-driven processes that the firm hopes will sharpen client delivery.


I went to a closed-door retreat for top lawyers. The message was clear: Don’t fear AI — use it. — from businessinsider.com by Melia Russell

  • AI is making its mark on law firms and corporate legal teams.
  • Clients expect measurable savings, and firms are spending real money to deliver them.
  • At TLTF Summit, Big Law leaders and legal-tech builders explored the future of the industry.

From Cost Center to Command Center: The Future of Litigation is Being Built In-House — from law.stanford.edu by Adam Rouse,  Tamra Moore, Renee Meisel, Kassi Burns, & Olga Mack

Litigation isn’t going away, but who leads, drafts, and drives it is rapidly changing. Empirical research shows corporate legal departments have steadily expanded litigation management functions over the past decade. (Annual Litigation Trends Survey, Norton Rose Fulbright (2025)).

For decades, litigation lived squarely in the law firm domain. (Wald, Eli, Getting in and Out of the House: Career Trajectories of In-House Lawyers, Fordham Law Review, Vol. 88, No. 1765, 2020 (June 22, 2020)). Corporate legal departments played a responsive role: approving strategies, reviewing documents, and paying hourly rates. But through dozens of recent conversations with in-house legal leaders, legal operations professionals, and litigation specialists, a new reality is emerging. One in which in-house counsel increasingly owns the first draft, systematizes their litigation approach, and reshapes how outside counsel fits into the picture.

AI, analytics, exemplar libraries, playbooks, and modular document builders are not simply tools. They are catalysts for a structural shift. Litigation is becoming modular, data-informed, and orchestrated by in-house teams who increasingly want more than cost control. They want consistency, clarity, and leverage. This piece outlines five major trends from our qualitative research, predictions on their impact to the practice of law, and research questions that are worth considering to further understand these trends. A model is then introduced for understanding how litigation workflows and outside counsel relationships will evolve in the coming years.

 

AI’s Role in Online Learning > Take It or Leave It with Michelle Beavers, Leo Lo, and Sara McClellan — from intentionalteaching.buzzsprout.com by Derek Bruff

You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand.




 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

Today I’m sharing about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and invite readers to share in the comments any custom instructions they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about custom instructions, even some seasoned GenAI folks, or how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.
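For readers who reach these models through an API rather than the chat apps, the closest analogue to custom instructions is a standing system message attached to every request. Here is a minimal sketch assuming the openai Python SDK; the model name is only an example, and the instruction text is illustrative.

    # Minimal sketch: approximating "custom instructions" over the API by
    # sending the same system message with every request. Assumes the
    # openai Python SDK; the model name is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    CUSTOM_INSTRUCTIONS = (
        "I am an instructional designer in higher education. "
        "Prefer concise answers, state your assumptions explicitly, and "
        "offer one concrete example with every explanation."
    )

    def ask(question: str) -> str:
        """Send a question with the standing custom instructions attached."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model name
            messages=[
                {"role": "system", "content": CUSTOM_INSTRUCTIONS},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("Draft three learning objectives for an intro statistics unit."))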

 

Clio Completes Historic $1 Billion vLex Acquisition, Announces $500 Million Series G at $5 Billion Valuation, Plus Exclusive Interview with CEO and CFO — from lawnext.com

Legal technology company Clio has completed its $1 billion acquisition of vLex, concluding the largest deal in legal tech history. It has simultaneously closed a $500 million Series G funding round, along with a $350 million debt facility, valuing the combined company at $5 billion and clearing the way to create an unprecedented unified platform that spans both the business and practice of law.

With the deal now closed, Clio becomes a company with $400 million in annual recurring revenue and a customer base of 400,000 legal professionals, it says.

“This is a defining moment for Clio and for the legal industry,” said Jack Newton, Clio’s founder and CEO. “We founded Clio to transform the legal experience for all, and this milestone brings that mission to a new horizon.”

The transaction brings vLex’s 350-plus employees – including experts in law, data and technology – into Clio’s organization, creating what Newton calls “the world’s most powerful legal intelligence platform, a platform that will define how legal work is done for generations to come.”

By combining practice management, research, drafting, and firm operations into connected AI-powered workflows, the platform aims to enable legal professionals to move from insight to action with greater speed and precision.

 

ElevenLabs just launched a voice marketplace — from elevenlabs.io; via theaivalley.com

Via the AI Valley:

Why does it matter?
AI voice cloning has already flooded the internet with unauthorized imitations, blurring legal and ethical lines. By offering a dynamic, rights-secured platform, ElevenLabs aims to legitimize the booming AI voice industry and enable transparent, collaborative commercialization of iconic IP.


[GIFTED ARTICLE] How people really use ChatGPT, according to 47,000 conversations shared online — from washingtonpost.com by Gerrit De Vynck and Jeremy B. Merrill
What do people ask the popular chatbot? We analyzed thousands of chats to identify common topics discussed by users and patterns in ChatGPT’s responses.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work.

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his own research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”


Per The Rundown: OpenAI just shared its view on AI progress, predicting systems will soon become smart enough to make discoveries and calling for global coordination on safety, oversight, and resilience as the technology nears superintelligent territory.

The details:

  • OpenAI said current AI systems already outperform top humans in complex intellectual tasks and are “80% of the way to an AI researcher.”
  • The company expects AI will make small scientific discoveries by 2026 and more significant breakthroughs by 2028, as intelligence costs fall 40x per year.
  • For superintelligent AI, OAI said work with governments and safety agencies will be essential to mitigate risks like bioterrorism or runaway self-improvement.
  • It also called for safety standards among top labs, a resilience ecosystem like cybersecurity, and ongoing tracking of AI’s real impact to inform public policy.

Why it matters: While the timeline remains unclear, OAI’s message shows that the world should start bracing for superintelligent AI with coordinated safety. The company is betting that collective safeguards will be the only way to manage risk from the next era of intelligence, which may diffuse in ways humanity has never seen before.

Which linked to:

  • AI progress and recommendations — from openai.com
    AI is unlocking new knowledge and capabilities. Our responsibility is to guide that power toward broad, lasting benefit.

From DSC:
I hate to say this, but it seems like there is growing concern amongst those who have pushed very hard to release as much AI as possible — they are NOW worried. They NOW step back and see that there are many reasons to worry about how these technologies can be negatively used.

Where was this level of concern before (while they were racing ahead at 180 mph)? Surely, numerous knowledgeable people inside those organizations warned them about the destructive potential and downsides of these technologies. But their warnings were pretty much blown off (at least from my limited perspective).


The state of AI in 2025: Agents, innovation, and transformation — from mckinsey.com

Key findings

  1. Most organizations are still in the experimentation or piloting phase: Nearly two-thirds of respondents say their organizations have not yet begun scaling AI across the enterprise.
  2. High curiosity in AI agents: Sixty-two percent of survey respondents say their organizations are at least experimenting with AI agents.
  3. Positive leading indicators on impact of AI: Respondents report use-case-level cost and revenue benefits, and 64 percent say that AI is enabling their innovation. However, just 39 percent report EBIT impact at the enterprise level.
  4. High performers use AI to drive growth, innovation, and cost: Eighty percent of respondents say their companies set efficiency as an objective of their AI initiatives, but the companies seeing the most value from AI often set growth or innovation as additional objectives.
  5. Redesigning workflows is a key success factor: Half of those AI high performers intend to use AI to transform their businesses, and most are redesigning workflows.
  6. Differing perspectives on employment impact: Respondents vary in their expectations of AI’s impact on the overall workforce size of their organizations in the coming year: 32 percent expect decreases, 43 percent no change, and 13 percent increases.

Marble: A Multimodal World Model — from worldlabs.ai

Spatial intelligence is the next frontier in AI, demanding powerful world models to realize its full potential. World models should reconstruct, generate, and simulate 3D worlds, and allow both humans and agents to interact with them. Spatially intelligent world models will transform a wide variety of industries over the coming years.

Two months ago we shared a preview of Marble, our World Model that creates 3D worlds from image or text prompts. Since then, Marble has been available to an early set of beta users to create 3D worlds for themselves.

Today we are making Marble, a first-in-class generative multimodal world model, generally available for anyone to use. We have also drastically expanded Marble’s capabilities, and are excited to highlight them here:

 
© 2025 | Daniel Christian