NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey
PLUS: How to level up your AI workflows for 2025…

NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000).

And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B-parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy.

But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. To that end, NVIDIA announced Cosmos, a platform for building physical AI (think: robots and self-driving cars), which Jensen Huang calls “the ChatGPT moment for robotics.”


Jensen Huang’s latest CES speech: AI Agents are expected to become the next robotics industry, with a scale reaching trillions of dollars — from chaincatcher.com

NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering all computing needs from developers to ordinary users.

At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote revealing his vision of the future of AI and computing. Ranging from the core token concept of generative AI to the launch of the new Blackwell-architecture GPUs and an AI-driven digital future, the speech will have a profound, cross-disciplinary impact on the entire industry.

Also see:


NVIDIA Project DIGITS: The World’s Smallest AI Supercomputer. — from nvidia.com
A Grace Blackwell AI Supercomputer on your desk.


From DSC:
I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.


OpenAI ‘now knows how to build AGI’ — from therundown.ai by Rowan Cheung
PLUS: AI phishing achieves alarming success rates

The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.

The details:

  • Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
  • Several of Samsung’s new Smart TVs will also have Microsoft Copilot built in, and the company also teased a potential AI partnership with Google.
  • Samsung also announced the new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
  • AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.

Why it matters: Samsung’s web of products is getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.

***

“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung


Understanding And Preparing For The 7 Levels Of AI Agents — from forbes.com by Douglas B. Laney

The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.


The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello
Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.

AI Agents Enhance Generative AI’s Impact
Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”


Here’s what nobody is telling you about AI agents in 2025 — from aidisruptor.ai by Alex McFarland
What’s really coming (and how to prepare). 

Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.

Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19% gap – it’s the difference between an interesting tech demo and a business-critical tool.

This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.

Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
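To make that 80% vs. 99% comparison concrete, here is a quick illustrative calculation (my own sketch, not from the article) showing how per-step accuracy compounds across a multi-step workflow; the step counts are assumptions chosen only for illustration.

```python
# Illustrative only: how per-step accuracy compounds across a multi-step agent workflow.
# The 80% and 99% figures come from the article above; the step counts are assumptions.

def workflow_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in an independent, sequential workflow succeeds."""
    return per_step_accuracy ** steps

for steps in (1, 5, 10, 20):
    demo = workflow_success_rate(0.80, steps)   # "interesting tech demo" territory
    prod = workflow_success_rate(0.99, steps)   # "business-critical tool" territory
    print(f"{steps:>2} steps: 80%-accurate agent finishes cleanly {demo:5.1%} of the time, "
          f"99%-accurate agent {prod:5.1%}")
```

Under these assumptions, an 80%-accurate agent completes a 10-step workflow without error only about a third of one in ten runs would suggest, while a 99%-accurate agent still succeeds roughly nine times out of ten, which is the practical difference the author is pointing at.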


Why 2025 will be the year of AI orchestration — from venturebeat.com by Emilia David

In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.

As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.

In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.

Here are some themes we expect to see more in 2025.


Predictions For AI In 2025: Entrepreneurs Look Ahead — from forbes.com by Jodie Cook

AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book an Uber ride for you.” Start small with one agent handling one task. Build up to an army.

“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”

Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in the text format, and a lot of emphasis is put on prompting skills.”

Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”

Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.



These 4 graphs show where AI is already impacting jobs — from fastcompany.com by Brandon Tucker
With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce. 

To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022. With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.

Why does this shift in AI titles matter for every industry?

 

How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein

Ever since a revolutionary new version of ChatGPT became available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.

Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.

Here are the five most popular stories that Education Week published in 2024 about AI in schools.


What’s next with AI in higher education? — from msn.com by Science X Staff

Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:

1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change


5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns

 


AI in 2024: Insights From our 5 Million Readers — from linkedin.com by Generative AI

Checking the Pulse: The Impact of AI on Everyday Lives
So, what exactly did our users have to say about how AI transformed their lives this year?

Top 2024 Developments in AI

  1. Video Generation…
  2. AI Employees…
  3. Open Source Advancements…

Getting ready for 2025: your AI team members (Gift lesson 3/3) — from flexos.com by Daan van Rossum

And that’s why today, I’ll tell you exactly which AI tools I’ve recommended for the top 5 use cases to almost 200 business leaders who took the Lead with AI course.

1. Email Management: Simplifying Communication with AI

  • Microsoft Copilot for Outlook. …
  • Gemini AI for Gmail. …
  • Grammarly. …

2. Meeting Management: Maximize Your Time

  • Otter.ai. …
  • Copilot for Microsoft Teams. …
  • Other AI Meeting Assistants. Zoom AI Companion, Granola, and Fathom

3. Research: Streamlining Information Gathering

  • ChatGPT. …
  • Perplexity. …
  • Consensus. …

…plus several more items and tools that were mentioned by Daan.

 

Introducing the 2025 Wonder Media Calendar for tweens, teens, and their families/households. Designed by Sue Ellen Christian and her students in her Global Media Literacy class (in the fall 2024 semester at Western Michigan University), the calendar’s purpose is to help people create a new year filled with skills and smart decisions about their media use. This calendar is part of the ongoing Wonder Media Library.com project that includes videos, lesson plans, games, songs and more. The website is funded by a generous grant from the Institute of Museum and Library Services, in partnership with Western Michigan University and the Library of Michigan.


 

 

1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.
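
The characteristics listed above boil down to a sense, plan, act loop that runs with minimal human oversight. As a rough illustration (my own sketch, not code from the article), here is a minimal, hypothetical agent loop in Python; the helper functions are toy stand-ins for whatever model, tools, and success checks a real implementation would use.

```python
# A minimal, hypothetical agentic loop: observe -> reason/plan -> act -> adapt.
# The three helpers below are toy stand-ins for a real model, real tools, and a real
# success check; they exist only so the loop runs end to end.

def llm_plan(goal: str, history: list[str]) -> str:
    return f"step {len(history) + 1} toward: {goal}"   # stand-in for a model's reasoning

def execute(plan: str) -> str:
    return f"done ({plan})"                            # stand-in for a tool or API call

def goal_met(goal: str, history: list[str]) -> bool:
    return len(history) >= 3                           # stand-in for evaluating the outcome

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []                 # memory the agent adapts from
    for _ in range(max_steps):
        if goal_met(goal, history):         # autonomy: stop once the objective is reached
            break
        plan = llm_plan(goal, history)      # rationality: reason over the goal plus context
        history.append(execute(plan))       # act, then feed the outcome back into planning
    return history

print(run_agent("qualify inbound leads"))
```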

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 

Best of 2024 — from wondertools.substack.com by Jeremy Caplan
12 of my favorites this year

I tested hundreds of new tools this year. Many were duplicative. A few stuck with me because they’re so useful. The dozen noted below are helping me mine insights from notes, summarize meetings, design visuals— even code a little, without being a developer. You can start using any of these in minutes — no big budget or prompt engineering PhD required.

 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.


 

Focus on School-to-Home Resources for the Holidays — from classtechtips.com by Monica Burns

As the holiday season approaches, families might reach out to educators looking for resources to support learning over school break. By leveraging high-quality, vetted resources, you can ensure that the school-to-home connection stays strong, even during winter break. Today on the blog, I’m excited to share some resources for the holidays from the team at Ask, Listen, Learn.


Also see:

Tech-Savvy Approaches for a Differentiated Classroom with Dr. Clare Kilbane and Dr. Natalie Milman – Easy EdTech Podcast 296

In this episode, educational leaders and fellow ASCD authors Dr. Clare Kilbane and Dr. Natalie Milman share expert strategies for using EdTech to personalize learning. Explore insights from their book, Using Technology in a Differentiated Classroom, and discover practical tips to thoughtfully integrate digital tools and support diverse learners. If you’re ready to elevate all students’ learning journeys, this episode is a must-listen!


Also see:

Lesson planning resource — short, engaging videos for students
This post is sponsored by ClickView. All opinions are my own.

Where do you go to find engaging, high-quality content for your lesson plans? Searching for short videos for students might feel like a time-consuming task. As a classroom teacher, there were plenty of times when I knew watching a video clip would help students better understand a concept, but I couldn’t always find the right videos to share with them. ClickView is a platform with video resources curated just for K-12 educators.

Today on the blog we’ll take a look at ClickView, a video platform for K-12 schools designed to make lesson planning easier. Whether you’re introducing a new topic or diving deeper into a complex unit, ClickView’s range of videos and innovative features can transform the way you teach.

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
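
For developers curious what “directing Claude to use computers” looks like in code, below is a minimal sketch based on my reading of Anthropic’s public beta documentation at launch; the model ID, tool type string, and beta flag reflect the October 2024 beta and may have changed since, so treat them as assumptions rather than the definitive API.

```python
# Sketch of requesting Anthropic's "computer use" beta via the Python SDK.
# Parameter values (model ID, tool type, beta flag) match the October 2024 beta docs
# as I understand them and may differ in current releases.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[{
        "type": "computer_20241022",   # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
        "display_number": 1,
    }],
    messages=[{"role": "user", "content": "Open a browser and check today's weather."}],
)

# Claude replies with tool_use blocks (screenshot requests, clicks, keystrokes) that
# your own harness must execute and report back in a follow-up message.
print(response.stop_reason, response.content)
```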

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
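
As a rough illustration of how such a multi-modal pipeline chains together, here is a hypothetical sketch; every function in it is a placeholder I made up to stand in for whichever model APIs (an LLM for scripting, a text-to-speech service like ElevenLabs, a video model like Luma AI or Gen-3, a portrait-animation model like LivePortrait) a given platform actually wires in.

```python
# Hypothetical multi-modal video pipeline: script -> voiceover -> footage -> lip sync.
# Every function here is a placeholder stub standing in for a real model or API call.

from dataclasses import dataclass

@dataclass
class Asset:
    kind: str   # "script", "audio", or "video"
    ref: str    # where the generated artifact would live

def write_script(topic: str) -> Asset:            # e.g., an LLM drafting narration
    return Asset("script", f"script_for_{topic}.txt")

def synthesize_voice(script: Asset) -> Asset:     # e.g., a text-to-speech service
    return Asset("audio", script.ref.replace(".txt", ".wav"))

def generate_clip(script: Asset) -> Asset:        # e.g., a text/image-to-video model
    return Asset("video", script.ref.replace(".txt", ".mp4"))

def lip_sync(clip: Asset, voice: Asset) -> Asset:  # e.g., a portrait-animation model
    return Asset("video", f"synced_{clip.ref}")

def produce_video(topic: str) -> Asset:
    script = write_script(topic)
    voice = synthesize_voice(script)
    clip = generate_clip(script)
    return lip_sync(clip, voice)

print(produce_video("ai_video_roundup"))
```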


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 


 

AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potentials have been a fascination for humans and have even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


Dharmesh has been working on creating agent.ai — a professional network for AI agents.


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5, and the quality of its image-to-video and text-to-video generation is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 



Introducing OpenAI o1 – from openai.com

We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.




Something New: On OpenAI’s “Strawberry” and Reasoning — from oneusefulthing.org by Ethan Mollick
Solving hard problems in new ways

The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.

To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.
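
For readers who want to try the model themselves, here is a minimal sketch of calling o1-preview through OpenAI’s Python SDK; the model name reflects the September 2024 preview, and my note about restricted parameters is an assumption based on the preview’s documented limits at the time, which may have loosened since.

```python
# Minimal sketch of calling the o1-preview reasoning model via OpenAI's Python SDK.
# Model name reflects the September 2024 preview; early o1 models did not accept
# some parameters (e.g., custom temperature), so this keeps the request minimal.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": "Plan, step by step, an experiment to measure g with a phone and a ruler.",
    }],
)

print(completion.choices[0].message.content)
```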


What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack

The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.

Recently, many creators (myself included) have been exploring super realistic AI more and more.

But where can this actually be used?

Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.

Heather goes on to mention applications in:

  • Creative Industries
  • Entertainment and Media
  • Education and Training

NotebookLM now lets you listen to a conversation about your sources — from blog.google by Biao Wang
Our new Audio Overview feature can turn documents, slides, charts and more into engaging discussions with one click.

Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.


Bringing generative AI to video with Adobe Firefly Video Model — from blog.adobe.com by Ashley Still

Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.

We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

 

The Most Popular AI Tools for Instructional Design (September, 2024) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The tools we use most, and how we use them

This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.

My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?

Here’s where we are in September, 2024:


Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby,  Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)

As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.



The Impact of AI in Advancing Accessibility for Learners with Disabilities — from er.educause.edu by Rob Gibson

AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.


 

Using Video Projects to Reinforce Learning in Math — from edutopia.org by Alessandra King
A collaborative project can help students deeply explore math concepts, explain problem-solving strategies, and demonstrate their learning.

To this end, I assign video projects to my students. In groups of two or three, they solve a set of problems on a topic and then choose one to illustrate, solve, and explain their favorite problem-solving strategy in detail, along with the reasons they chose it. The student-created videos are collected and stored on a Padlet even after I have evaluated them—kept as a reference, keepsake, and support. I have a library of student-created videos that benefit current and future students when they have some difficulties with a topic and associated problems.

 
© 2025 | Daniel Christian