Imagine with me for a moment: Training is no longer confined to scheduled sessions in a classroom, an online module, or even a microlearning module you click to activate during your workflow. Imagine training being delivered because the system senses what you are doing and provides instructions and job aids without you having to take any action.
The rapid evolution of artificial intelligence (AI) and wearable technology has made it easier than ever to seamlessly integrate learning directly into the workflow. Smart glasses, earpieces, and other advanced devices are redefining how employees gain knowledge and skills by delivering microlearning moments precisely when and where they are needed.
AI plays a crucial role in this transformation by sensing the optimal moment to deliver the training through augmented reality (AR).
Kennelly and Geraffo are part of a small team at their school in Denver, DSST: College View High School, that is participating in the School Teams AI Collaborative, a year-long pilot initiative in which more than 80 educators from 19 traditional public and charter schools across the country are experimenting with and evaluating AI-enabled instruction to improve teaching and learning.
The goal is for some of AI’s earliest adopters in education to band together, share ideas and eventually help lead the way on what they and their colleagues around the U.S. could do with the emerging technology.
“Pretty early on we thought it was going to be a massive failure,” says Kennelly of last semester’s project. “But it became a huge hit. Students loved it. They were like, ‘I ran to second period to build this thing.’”
As writing instructors, we have a choice in how we frame AI for our students. I invite you to:
Experiment with AI as a conversation partner yourself before introducing it to students
Design assignments that leverage AI’s strengths as a thought partner rather than trying to “AI-proof” your existing assignments
Explicitly teach students how to engage in productive dialogue with AI—how to ask good questions, challenge AI’s assumptions, and use it to refine rather than replace their thinking
Share your experiences, both positive and negative, with colleagues to build our collective understanding of effective AI integration
NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey PLUS: How to level up your AI workflows for 2025…
NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000). … And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy…
… But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. NVIDIA also announced Cosmos, a platform for building physical AI (think: robots and self-driving cars)—which Jensen Huang calls “the ChatGPT moment for robotics.”
NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering all computing needs from developers to ordinary users.
At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote revealing the future of AI and computing. From the core token concept behind generative AI, to the launch of GPUs built on the new Blackwell architecture, to an AI-driven digital future, the speech touched many disciplines and will ripple across the entire industry.
From DSC: I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.
The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.
The details:
Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
Several of Samsung’s new smart TVs will also have Microsoft Copilot built in, and the company teased a potential AI partnership with Google as well.
Samsung also announced the new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.
Why it matters: Samsung’s web of products is getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.
***
“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung
The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.
The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.
AI Agents Enhance Generative AI’s Impact Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”
Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.
Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19-percentage-point gap – it’s the difference between an interesting tech demo and a business-critical tool.
This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.
Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
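One way to make that intuition concrete: an agent rarely performs just one action, and per-step accuracy compounds across a chained workflow. A quick back-of-the-envelope sketch (the step counts are illustrative; only the 80% and 99% figures come from the discussion above):

```python
# Chance that a multi-step agent task finishes with zero mistakes, assuming
# each step succeeds independently at the same per-step accuracy.
def task_success_rate(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for steps in (1, 5, 10, 20):
    p80 = task_success_rate(0.80, steps)
    p99 = task_success_rate(0.99, steps)
    print(f"{steps:>2} steps: 80%-accurate agent {p80:6.1%} vs 99%-accurate {p99:6.1%}")
```

At ten chained steps, the 80%-accurate agent completes the whole task only about 11% of the time, while the 99% agent still succeeds roughly 90% of the time – which is why the last few points of reliability, not raw capability, decide whether agents become business-critical.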
In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.
As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.
In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.
Here are some themes we expect to see more in 2025.
AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said: “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book an Uber ride for you.” Start small with one agent handling one task. Build up to an army.
“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”
… Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in the text format, and a lot of emphasis is put on prompting skills.”
Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”
Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.
To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022. With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.
Why does this shift in AI titles matter for every industry?
This company’s experience offers three crucial lessons for other organizational leaders who may be contemplating cutting or reducing talent development investments in their 2025 budgets to focus on “growth.”
Leadership development isn’t a luxury – it’s a strategic imperative…
Succession planning must be an ongoing process, not a reactive measure…
The cost of developing leaders is far less than the cost of not having them when you need them most…
NVIDIA Digital Human Technologies Bring AI Characters to Life
Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications
Today is the beginning of our moonshot to solve embodied AGI in the physical world. I’m so excited to announce Project GR00T, our new initiative to create a general-purpose foundation model for humanoid robot learning.
Mark Zuckerberg, CEO, Meta, took part in the first-ever Metaverse interview using photo-realistic virtual avatars, demonstrating the Metaverse’s capability for virtual communication.
Zuckerberg appeared on the Lex Fridman podcast, using scans of both Fridman and Zuckerberg to create realistic avatars instead of a live video feed. Computer models of each participant’s face and body are fed into a codec, and the headset transmits an encoded version of the avatar.
The interview explored the future of AI in the metaverse, as well as the Quest 3 headset and the future of humanity.
“The [Ai Pin is a] connected and intelligent clothing-based wearable device [that] uses a range of sensors that enable contextual and ambient compute interactions,” the company noted at the time. “The Ai Pin is a type of standalone device with a software platform that harnesses the power of Ai to enable innovative personal computing experiences.”
Also relevant/see:
Introducing Rewind Pendant – a wearable that captures what you say and hear in the real world!
Rewind, powered by truly everything you’ve seen, said, or heard
Summarize and ask any question using AI
Private by design
ChatGPT can now see, hear, and speak. Rolling out over the next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms). https://t.co/uNZjgbR5Bm pic.twitter.com/paG0hMshXb
For the IBC 2023 conference, Adobe announced new AI and 3D features to Creative Cloud video tools, including Premiere Pro Enhance Speech for faster dialog cleanup, and filler word detection and removal in Text-Based Editing. There’s also new AI-based rotoscoping and a true 3D workspace in the After Effects beta, as well as new camera-to-cloud integrations and advanced storage options in Frame.io.
Though not really about AI, you might also be interested in this posting:
The Airt AI Generator app makes it easy to create art on your iPad. You can pick an art style and a model to make your artwork. It’s simple enough for anyone to use, but it doesn’t have many options for customizing your art.
Even with these limitations, it’s a good starting point for people who want to try making art with AI. Here are the good and bad points we found.
Pros:
User-Friendly: The app is simple and easy to use, making it accessible for users of all skill levels.
Cons:
Limited Advanced Features: The app lacks options for customization, such as altering image ratios, seeds, and other settings.
Could this immersive AR experience revolutionize the culinary arts?
Earlier this month, the popular culinary livestreaming network Kittch announced that it is partnering with American technology company Qualcomm to create hands-free cooking experiences accessible via AR glasses.
From DSC: I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them). And they can do this in a much longer sermon, too. Not me, man.
It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (i.e., our smartphones, tablets, laptops, other) via cloud-based applications.
One could see one’s presentation, sermon, main points for a meeting, the charges being brought against a defendant, etc., and the system would know to scroll down as the words were spoken (via Natural Language Processing (NLP)). If the speaker went off script, the system would stop scrolling, and they might need to scroll down manually or simply pick up where they left off.
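The scroll-following behavior just described can be sketched in a few lines: match words arriving from a speech-to-text feed against the script and advance only while the speaker stays on script. This is purely an illustration of the idea — the class, the exact-match rule, and the sample text are assumptions, not any real product’s API:

```python
# Illustrative sketch: advance through a script only while spoken words
# (from a hypothetical speech-to-text stream) keep matching it; when the
# speaker goes off script, hold position until they return.
class ScriptFollower:
    def __init__(self, script: str):
        self.words = script.lower().split()
        self.pos = 0  # index of the next word we expect to hear

    def hear(self, spoken: str) -> None:
        """Consume recognized words from the transcript stream."""
        for word in spoken.lower().split():
            if self.pos < len(self.words) and word == self.words[self.pos]:
                self.pos += 1  # on script: scroll forward one word
            # off-script word: do nothing, so the display stays put

    def display_window(self, width: int = 8) -> str:
        """The slice of script the glasses should currently show."""
        return " ".join(self.words[self.pos:self.pos + width])

follower = ScriptFollower("grace and peace to you from God our Father")
follower.hear("grace and peace")    # on script: advances three words
follower.hear("let me pause here")  # off script: position holds
print(follower.display_window())    # -> to you from god our father
```

A production version would need fuzzy matching (speech recognition is noisy) and word-level timing, but the hold-on-divergence behavior is the core of the idea.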
For that matter, I suppose a faculty member could turn on and off a feed for an AI-based stream of content on where a topic is in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.
And/or perhaps this is a feature in our future videoconferencing applications.
But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.
Along these lines, see:
Tell me you’re not going to want a pair of #AR glasses?
What is Book Creator? Tips & Tricks — from techlearning.com by Erik Ofgang Book Creator is a free tool that allows users to create multimedia ebooks
Excerpt:
Book Creator is a free education tool designed to enable students to engage with class material in a direct and active way by creating multimedia ebooks with a variety of functions.
Available as a web app on Chromebooks, laptops, and tablets, and also as a standalone iPad app, Book Creator is a digital resource that helps students explore their creative sides while learning.
The tool lends itself well to active learning and collaborative projects of all kinds, and is appropriate for various subjects and age groups.
Edging towards the end of the year, it is time for a summary of how digital health progressed in 2022. It is easy to get lost in the noise – I myself shared well over a thousand articles, studies and news items between January and the end of November 2022. Thus, just like in 2021, 2020 (and so on), I picked the 10 topics I believe will have the most significance in the future of healthcare.
9. Smart TVs Becoming A Remote Care Platform

The concept of turning one’s TV into a remote care hub isn’t new. Back in 2012, researchers designed a remote health assistance system for the elderly to use through a TV set. But the idea is getting renewed attention now that a major tech company has pushed for telehealth through TVs. In early 2022, electronics giant LG announced that its smart TVs will be equipped with the remote health platform Independa.
And in just a few months (late November) came a follow-up: a product called Carepoint TV Kit 200L, in beta testing now. Powered by Amwell’s Converge platform, the product is aimed at helping clinicians more easily engage with patients amid healthcare’s workforce shortage crisis.
Asynchronous telemedicine is one of those terms we will need to get used to in the coming years. Although it may sound alien, chances are you have been using some form of it for a while.
With the progress of digital health, especially due to the pandemic’s impact, remote care has become a popular approach in the healthcare setting. It can come in two forms: synchronous telemedicine and asynchronous telemedicine.
One such firm is Grungo Colarulo, a personal injury law firm with offices in New Jersey and Pennsylvania. Last December, the firm announced that it had set up shop in the virtual world known as Decentraland.
Users can enter the firm’s virtual office, where they can interact with the firm’s avatar. They can talk to the avatar to see whether they might need legal representation and then take down a phone number to call the firm in the physical world. If they’re already clients, they can arrive for meetings or consultations.
Richard Grungo Jr., co-founder and name partner at Grungo Colarulo, told the ABA Journal in December 2021 that he could see the potential of the metaverse to allow his firm to host webinars, CLEs and other virtual educational opportunities, as well as hosting charity events.
Grungo joined the ABA Journal’s Victor Li to talk about how lawyers can use the metaverse to market themselves, as well as legal issues relating to the technology that all users should be aware of.
From DSC: I post this to put this on the radars of legal folks out there. Law schools should join the legaltech folks in pulse-checking and covering/addressing emerging technologies. What the Metaverse and Web3 become is too early to tell. My guess is that we’ll see a lot more blending of the real world with the digital world — especially via Augmented Reality (AR).