NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey
PLUS: How to level up your AI workflows for 2025…

NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000).

And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B-parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy…
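A quick back-of-the-envelope check (our own, not from the announcement) on why a 200B-parameter model can plausibly fit in a desktop box, assuming 4-bit quantized weights and the reported 128 GB of unified memory:

```python
# Back-of-the-envelope (ours, not NVIDIA's): can 200B parameters fit in a
# desktop box? Assumes 4-bit quantized weights and the reported 128 GB of
# unified memory on Project DIGITS.
params = 200e9            # 200B parameters
bytes_per_param = 0.5     # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~100 GB, leaving headroom within 128 GB
```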

But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. To that end, it announced Cosmos, a platform for building physical AI (think: robots and self-driving cars), which Jensen Huang calls “the ChatGPT moment for robotics.”


Jensen Huang’s latest CES speech: AI agents are expected to become the next robotics industry, reaching a scale of trillions of dollars — from chaincatcher.com

NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering computing needs from developers to everyday users.

At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote revealing his vision of the future of AI and computing. From the core concept of tokens in generative AI, to the launch of the new Blackwell-architecture GPUs, to an AI-driven digital future, the speech is likely to influence the entire industry.

Also see:


NVIDIA Project DIGITS: The World’s Smallest AI Supercomputer. — from nvidia.com
A Grace Blackwell AI Supercomputer on your desk.


From DSC:
I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.


OpenAI ‘now knows how to build AGI’ — from therundown.ai by Rowan Cheung
PLUS: AI phishing achieves alarming success rates

The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.

The details:

  • Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
  • Several of Samsung’s new Smart TVs will also have Microsoft Copilot built in, and the company also teased a potential AI partnership with Google.
  • Samsung also announced its new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
  • AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.

Why it matters: Samsung’s web of products is getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.

***

“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung


Understanding And Preparing For The 7 Levels Of AI Agents — from forbes.com by Douglas B. Laney

The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.


The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello
Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.

AI Agents Enhance Generative AI’s Impact
Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”


Here’s what nobody is telling you about AI agents in 2025 — from aidisruptor.ai by Alex McFarland
What’s really coming (and how to prepare). 

Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.

Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19-percentage-point gap – it’s the difference between an interesting tech demo and a business-critical tool.

This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.

Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
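To make the compounding concrete, here’s a minimal sketch (our own illustration; the ten-step workflow is an assumption — only the 80% and 99% figures come from the article):

```python
# Illustrative only: per-step reliability compounds across a multi-step workflow.
def workflow_success_rate(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step of a workflow succeeds."""
    return per_step_accuracy ** steps

for accuracy in (0.80, 0.99):
    rate = workflow_success_rate(accuracy, steps=10)
    print(f"{accuracy:.0%} per step -> {rate:.1%} success over 10 steps")
# ~10.7% at 80% per step vs. ~90.4% at 99% -- the demo-vs-tool difference.
```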


Why 2025 will be the year of AI orchestration — from venturebeat.com by Emilia David

In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.

As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.

In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.

Here are some themes we expect to see more in 2025.


Predictions For AI In 2025: Entrepreneurs Look Ahead — from forbes.com by Jodie Cook

AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book an Uber ride for you.” Start small with one agent handling one task. Build up to an army.

“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”

Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in text format, and a lot of emphasis is put on prompting skills.”

Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”

Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.



These 4 graphs show where AI is already impacting jobs — from fastcompany.com by Brandon Tucker
With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce. 

To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022.

Why does this shift in AI titles matter for every industry?

 

Introducing the 2025 Wonder Media Calendar for tweens, teens, and their families/households. Designed by Sue Ellen Christian and her students in her Global Media Literacy class (in the fall 2024 semester at Western Michigan University), the calendar’s purpose is to help people create a new year filled with skills and smart decisions about their media use. This calendar is part of the ongoing Wonder Media Library.com project that includes videos, lesson plans, games, songs and more. The website is funded by a generous grant from the Institute of Museum and Library Services, in partnership with Western Michigan University and the Library of Michigan.


 

 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Some lessons from my past, with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Education Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
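For developers, the multimodal input is a single API call. Here’s a minimal sketch (our own, not from OpenAI’s post), assuming the OpenAI Python SDK with an API key in the environment; the image URL is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One request mixing text and an image; GPT-4o reasons over both.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```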





From DSC:
I like the assistive tech angle here:





 

 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the internet. Our TVs will become an intelligent center for a variety of applications, powered through our smartphones. TVs will be able to control things like appliances and security cameras. Perhaps even more importantly, our TVs can become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or hologramming) with clients while accessing information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated in the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron's vision for education: Every child will soon have a super-intelligent AI teacher by their side. We want to make sure these AI teachers instill a love of learning in children.


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EdTech’s page regarding the Comprehensive Learner Record (CLR).

 


Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.
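Here’s a minimal sketch of a transcription call (our own illustration, using the current OpenAI Python SDK rather than the one available at launch; the filename is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Whisper accepts m4a, mp3, mp4, mpeg, mpga, wav, and webm files.
with open("meeting.mp3", "rb") as audio_file:  # placeholder filename
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)

# For translation into English, the analogous call is
# client.audio.translations.create(model="whisper-1", file=audio_file).
```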



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

 So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times” — from
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
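A quick arithmetic check on those two rates (our own back-of-the-envelope, not from the earnings call):

```python
# Compound annual growth implied by each claim over a 10-year span.
moores_law = 100 ** (1 / 10)         # 100x per decade -> ~1.58x per year
nvidia_pace = 1_000_000 ** (1 / 10)  # 1,000,000x per decade -> ~3.98x per year

print(f"Moore's Law at its best: ~{moores_law:.2f}x per year")
print(f"Million-fold decade:     ~{nvidia_pace:.2f}x per year")
```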

From DSC:
NVIDIA is the inventor of the Graphics Processing Unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, PCs, and more. The company is a dominant supplier of artificial intelligence hardware and software.


 

94% of Consumers are Satisfied with Virtual Primary Care — from hitconsultant.net

Excerpt from What You Should Know (emphasis DSC):

  • For people who have used virtual primary care, the vast majority of them (94%) are satisfied with their experience, and nearly four in five (79%) say it has allowed them to take charge of their health. The study included findings around familiarity and experience with virtual primary care, virtual primary care and chronic conditions, current health and practices, and more.
  • As digital health technology continues to advance and the healthcare industry evolves, many Americans want the ability to utilize more digital methods when it comes to managing their health, according to a study recently released by Elevance Health — formerly Anthem, Inc. Elevance Health commissioned an online study of over 5,000 US adults age 18+ around virtual primary care.
 

The Top 10 Digital Health Stories Of 2022 — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Edging towards the end of the year, it is time for a summary of how digital health progressed in 2022. It is easy to get lost in the noise – I myself shared well over a thousand articles, studies and news items between January and the end of November 2022. Thus, just like in 2021 and 2020 (and so on), I picked the 10 topics I believe will have the most significance in the future of healthcare.

9. Smart TVs Becoming A Remote Care Platform
The concept of turning one’s TV into a remote care hub isn’t new. Back in 2012, researchers designed a remote health assistance system for the elderly to use through a TV set. But we are exploring this idea now as a major tech company has recently pushed for telehealth through TVs. In early 2022, electronics giant LG announced that its smart TVs will be equipped with the remote health platform Independa. 

And in just a few months (late November) came a follow-up: a product called Carepoint TV Kit 200L, in beta testing now. Powered by Amwell’s Converge platform, the product is aimed at helping clinicians more easily engage with patients amid healthcare’s workforce shortage crisis.

Also relevant/see:

Asynchronous Telemedicine Is Coming And Here Is Why It’s The Future Of Remote Care — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Asynchronous telemedicine is one of those terms we will need to get used to in the coming years. Although it may sound alien, chances are you have been using some form of it for a while.

With the progress of digital health, especially due to the pandemic’s impact, remote care has become a popular approach in the healthcare setting. It can come in two forms: synchronous telemedicine and asynchronous telemedicine.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork (see the sketch after this list).
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.
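As a concrete instance of that first item, generating artwork from text is now a single API call. A minimal sketch (our own, assuming the OpenAI Python SDK and its image-generation endpoint; the model choice and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Text in, artwork out.
result = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt="A watercolor of a living room where the TV doubles as a classroom",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```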

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm…interesting times ahead.

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?
The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

Augmented Books Are On The Way According To Researchers — from vrscout.com by Kyle Melnick

Excerpt:

Imagine this. You’re several chapters into a captivating novel when a character from an earlier book makes a surprise appearance. You swipe your finger across their name on the page, at which point their entire backstory is displayed on a nearby smartphone, allowing you to refresh your memory before moving forward.

This may sound like science fiction, but researchers at the University of Surrey in England say that the technology described above is already here in the form of “a-books” (augmented reality books).

The potential use-cases for such a technology are virtually endless. As previously mentioned, a-books could be used to deliver character details and plot points for a variety of fictional works. The same technology could also be applied to textbooks, allowing students to display helpful information on their smartphones, tablets, and smart TVs with the swipe of a finger.

From DSC:

  • How might instructional designers use this capability?
  • How about those in theatre/drama?
  • Educational gaming?
  • Digital storytelling?
  • Interaction design?
  • Interface design?
  • User experience design?

Also see:


 

Airbnb’s design for employees to live and work anywhere — from news.airbnb.com; with thanks to Tom Barrett for this resource

Excerpt:

Airbnb is in the business of human connection above all else, and we believe that the most meaningful connections happen in person. Zoom is great for maintaining relationships, but it’s not the best way to deepen them. Additionally, some creative work and collaboration is best done when you’re in the same room. I’d like working at Airbnb to feel like you’re working at one of the most creative places on Earth, and this will only happen with some in-person collaboration time.

The right solution should combine the best of the digital world and the best of the physical world. It should have the efficiency of Zoom, while providing the meaningful human connection that only happens when people come together. We have a solution that we think combines the best of both worlds.

We’ve designed a way for you to live and work anywhere—while collaborating in a highly coordinated way, and experiencing the in-person connection that makes Airbnb special. Our design has five key features…

Now, a thought exercise on that item from Tom Barrett:

While you are there, extend the thought experiment and imagine the new policy for a school, college or university.

  1. You can work from home or the office
  2. You can move anywhere in the country you work in, and your compensation won’t change
  3. You have the flexibility to travel and work around the world
  4. We’ll meet up regularly for team gatherings, off-sites, and social events
  5. We’ll continue to work in a highly coordinated way

From DSC:
As a reflection on this thought experiment, this graphic comes to my mind again. Teachers, professors, trainers, staff, and students can be anywhere in the world:

Learning from the living class room

 

 

We need to use more tools — that go beyond screen sharing — where we can collaborate regardless of where we’re at. [Christian]

From DSC:
Seeing the functionality in Freehand makes me once again think that we need to use more tools where faculty, staff, and students can collaborate with each other REGARDLESS of where they’re coming in to partake in a learning experience (i.e., remotely or physically/locally). This is also true for trainers and employees, teachers and students, and virtual tutoring situations. We need tools that offer functionalities that go beyond screen sharing in order to collaborate, design, present, discuss, and create things.

 

Now we just need a “Likewise TV” for learning-related resources! [Christian]

Likewise TV Brings Curation to Streaming — from lifewire.com by Cesar Aroldo-Cadenas
And it’s available on iOS, Android, and some smart TVs

All your streaming services in one place. One search. One watchlist. Socially powered recommendations.

Entertainment startup Likewise has launched a new recommendations hub that pulls from all the different streaming platforms to give you personalized picks.

Likewise TV is a streaming hub powered by machine learning, people from the Likewise community, and other streaming services. The service aims to do away with mindlessly scrolling through a menu, looking for something to watch, or jumping from one app to another by providing a single location for recommendations.

Note that Likewise TV is purely an aggregator.


Also see:

Likewise TV -- All your streaming services in one place. One search. One watchlist. Socially powered recommendations.

 


From DSC:
Now we need this type of AI-based recommendation engine, aggregator, and service for learning-related resources!

I realize that we have a long way to go here, as a friend and former colleague of mine just reminded me that these recommendation engines often miss the mark. I’m just hoping that a recommendation engine like this could ingest our cloud-based learner profiles and our current goals, and then present some promising learning-related possibilities for us, especially if the following graphic is or will be the case in the future:


Learning from the living class room
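To make the idea concrete, here’s a minimal sketch of a content-based recommender (entirely hypothetical: the learner profile, tags, and catalog below are invented for illustration, not an existing product or dataset):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-tags vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical cloud-based learner profile: goals and skills as weighted tags.
learner_profile = Counter({"python": 3, "data-analysis": 2, "statistics": 1})

# Hypothetical catalog of learning resources, each tagged by topic.
catalog = {
    "Intro to Pandas":        Counter({"python": 2, "data-analysis": 3}),
    "Watercolor Basics":      Counter({"art": 3}),
    "Statistics with Python": Counter({"python": 1, "statistics": 3}),
}

# Rank resources by similarity to the learner's profile.
ranked = sorted(catalog.items(),
                key=lambda item: cosine_similarity(learner_profile, item[1]),
                reverse=True)
for title, tags in ranked:
    print(f"{cosine_similarity(learner_profile, tags):.2f}  {title}")
```

A real service would obviously need far richer profiles, collaborative signals, and guardrails against the miss-the-mark problem noted above, but the matching core can be this simple.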


Also relevant/see:

From DSC:
Some interesting/noteworthy features:

  • “The 32-inch display has Wi-Fi capabilities to support multiple streaming services, can stream smartphone content, and comes with a removable SlimFit Cam.”
  • The M8 has Wi-Fi connectivity for its native streaming apps, so you won’t have to connect to a computer to watch something on Netflix. And its Far Field Voice mic can be used with the Always On feature to control voice assistants like Amazon Alexa, even if the monitor is off.
  • “You can also connect devices to the monitor via the SmartThings Hub, which can be tracked with the official SmartThings app.”

I wonder how what we call the TV (or television) will continue to morph in the future.


Addendum on 3/31/22 from DSC:
Perhaps people will co-create their learning playlists…as is now possible with Spotify’s “Blend” feature:

Today’s Blend update allows you to share your personal Spotify playlists with your entire group chat—up to 10 users. You can manually invite these friends and family members to join you from within the app, then Spotify will create a playlist for you all to listen to using a mixture of everyone’s music preferences. Spotify will also create a special share card that everyone in the group can use to save and share the created playlist in the future.


 
© 2025 | Daniel Christian