Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Some lessons from my past, with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Educational Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
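For readers curious what “any combination of text, audio, image, and video” looks like from a developer’s point of view, here is a minimal sketch of a mixed text-plus-image request body in the shape used by OpenAI’s Chat Completions API. Nothing is actually sent over the network here, and the image URL is just a placeholder.

```python
# Sketch of a multimodal GPT-4o request payload (no network call is made).
# The image URL below is an illustrative placeholder.

def build_multimodal_request(prompt: str, image_url: str) -> dict:
    """Combine a text prompt and an image reference into one chat request body."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "What is shown in this picture?",
    "https://example.com/photo.jpg",
)
```

The point of the structure is that a single user turn can carry several content parts of different types, which is what lets the model reason across modalities in one pass.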





From DSC:
I like the assistive tech angle here:





 

 

CES 2024: Unveiling The Future Of Legal Through Consumer Innovations — from abovethelaw.com by Stephen Embry
The ripple effects on the legal industry are real.

The Emerging Role of Smart TVs
Boothe and Comiskey claim that our TVs will become even smarter and better connected to the web and the internet. Our TVs will become an intelligent center for a variety of applications powered through our smartphone. TVs will be able to direct things like appliances and security cameras. Perhaps even more importantly, our TVs can become e-commerce centers, allowing us to speak with them and conduct business.

This increased TV capability means that the TV could become a more dominant mode of working and computing for lawyers. As TVs become more integrated with the internet and capable of functioning as communication hubs, they could potentially replace traditional computing devices in legal settings. With features like voice control and pattern recognition, TVs could serve as efficient tools for such things as document preparation and client meetings.

From DSC:
Now imagine the power of voice-enabled chatbots and the like. We could be videoconferencing (or hologramming) with clients, and be able to access information at the same time. Language translation — like that in the Timekettle product — will be built in.

I also wonder how this type of functionality will play out in lifelong learning from our living rooms.

Learning from the Living AI-Based Class Room

 


Also, some other legaltech-related items:


Are Tomorrow’s Lawyers Prepared for Legal’s Tech Future? 4 Recent Trends Shaping Legal Education | Legaltech News — from law.com (behind paywall)

Legal Tech Predictions for 2024: Embracing a New Era of Innovation — from jdsupra.com

As we step into 2024, the legal industry continues to be reshaped by technological advancements. This year promises to bring new developments that could revolutionize how legal professionals work and interact with clients. Here are key predictions for legal tech in 2024:

Miss the Legaltech Week 2023 Year-in-Review Show? Here’s the Recording — from lawnext.com by Bob Ambrogi

Last Friday was Legaltech Week’s year-end show, in which our panel of journalists and bloggers picked the year’s top stories in legal tech and innovation.

So what were the top stories? Well, if you missed it, no worries. Here’s the video:

 

Where a developing, new kind of learning ecosystem is likely headed [Christian]

From DSC:
As I’ve long stated on the Learning from the Living [Class]Room vision, we are heading toward a new AI-empowered learning platform — where humans play a critically important role in making this new learning ecosystem work.

Along these lines, I ran into this site out on X/Twitter. We’ll see how this unfolds, but it will be an interesting space to watch.

Project Chiron’s vision for education: “Every child will soon have a super-intelligent AI teacher by their side. We want to make sure they instill a love of learning in children.”


From DSC:
This future learning platform will also focus on developing skills and competencies. Along those lines, see:

Scale for Skills-First — from the-job.beehiiv.com by Paul Fain
An ed-tech giant’s ambitious moves into digital credentialing and learner records.

A Digital Canvas for Skills
Instructure was a player in the skills and credentials space before its recent acquisition of Parchment, a digital transcript company. But that $800M move made many observers wonder if Instructure can develop digital records of skills that learners, colleges, and employers might actually use broadly.

Ultimately, he says, the CLR (Comprehensive Learner Record) approach will allow students to bring these various learning types into a coherent format for employers.

Instructure seeks a leadership role in working with other organizations to establish common standards for credentials and learner records, to help create consistency. The company collaborates closely with 1EdTech. And last month it helped launch the 1EdTech TrustEd Microcredential Coalition, which aims to increase quality and trust in digital credentials.

Paul also links to 1EdTech’s page regarding the Comprehensive Learner Record

 


Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
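The per-minute pricing makes cost estimates straightforward. As a quick illustration (a plain arithmetic sketch of my own, not an official calculator), here is what transcribing audio at $0.006 per minute works out to:

```python
# Back-of-the-envelope cost estimate for Whisper API transcription,
# using the $0.006-per-minute price quoted above.

WHISPER_PRICE_PER_MINUTE = 0.006  # USD

# File formats listed in the announcement:
SUPPORTED_FORMATS = {"m4a", "mp3", "mp4", "mpeg", "mpga", "wav", "webm"}

def transcription_cost(duration_seconds: float) -> float:
    """Return the estimated cost (USD) to transcribe an audio file."""
    minutes = duration_seconds / 60
    return round(minutes * WHISPER_PRICE_PER_MINUTE, 4)

# A one-hour lecture recording: 60 minutes * $0.006 = $0.36
cost = transcription_cost(3600)
```

At roughly 36 cents per recorded hour, transcribing an entire semester of lectures becomes a trivially small expense, which is part of why this release matters for education.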

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

 So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times”
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
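It is worth spelling out the arithmetic behind that comparison. Treating each figure as compound annual growth (my own framing, not Huang’s), a 100x decade is roughly 1.6x per year, while a million-fold decade is roughly 4x per year:

```python
# Compare the annualized growth rates implied by a 100x-per-decade
# improvement (Moore's Law at its best) versus the 1,000,000x-per-decade
# improvement Huang describes.

def annual_growth_factor(total_factor: float, years: int) -> float:
    """Annualized multiplier implied by an overall speedup over `years` years."""
    return total_factor ** (1 / years)

moore = annual_growth_factor(100, 10)        # ~1.58x per year
huang = annual_growth_factor(1_000_000, 10)  # ~3.98x per year
```

In other words, the claimed pace is not a modest improvement over Moore’s Law; it is an annual growth rate more than twice as steep, compounded for a decade.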

From DSC:
NVIDIA is the inventor of the Graphics Processing Unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, notebooks, PCs, and more. The company is a dominant supplier of artificial intelligence hardware and software.


 

94% of Consumers are Satisfied with Virtual Primary Care — from hitconsultant.net

Excerpt from What You Should Know (emphasis DSC):

  • For people who have used virtual primary care, the vast majority of them (94%) are satisfied with their experience, and nearly four in five (79%) say it has allowed them to take charge of their health. The study included findings around familiarity and experience with virtual primary care, virtual primary care and chronic conditions, current health and practices, and more.
  • As digital health technology continues to advance and the healthcare industry evolves, many Americans want the ability to utilize more digital methods when it comes to managing their health, according to a study recently released by Elevance Health — formerly Anthem, Inc. Elevance Health commissioned an online study of over 5,000 US adults age 18+ around virtual primary care.
 

The Top 10 Digital Health Stories Of 2022 — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Edging towards the end of the year, it is time for a summary of how digital health progressed in 2022. It is easy to get lost in the noise – I myself shared well over a thousand articles, studies and news items between January and the end of November 2022. Thus, just like in 2021 and 2020 (and so on), I picked the 10 topics I believe will have the most significance in the future of healthcare.

9. Smart TVs Becoming A Remote Care Platform
The concept of turning one’s TV into a remote care hub isn’t new. Back in 2012, researchers designed a remote health assistance system for the elderly to use through a TV set. But we are exploring this idea now as a major tech company has recently pushed for telehealth through TVs. In early 2022, electronics giant LG announced that its smart TVs will be equipped with the remote health platform Independa. 

And in just a few months (late November) came a follow-up: a product called Carepoint TV Kit 200L, in beta testing now. Powered by Amwell’s Converge platform, the product is aimed at helping clinicians more easily engage with patients amid healthcare’s workforce shortage crisis.

Also relevant/see:

Asynchronous Telemedicine Is Coming And Here Is Why It’s The Future Of Remote Care — from medicalfuturist.com by Dr. Bertalan Mesko

Excerpt:

Asynchronous telemedicine is one of those terms we will need to get used to in the coming years. Although it may sound alien, chances are you have been using some form of it for a while.

With the progress of digital health, especially due to the pandemic’s impact, remote care has become a popular approach in the healthcare setting. It can come in two forms: synchronous telemedicine and asynchronous telemedicine.

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm….interesting times ahead.

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?
.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

.

Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

Augmented Books Are On The Way According To Researchers — from vrscout.com by Kyle Melnick

Excerpt:

Imagine this. You’re several chapters into a captivating novel when a character from an earlier book makes a surprise appearance. You swipe your finger across their name on the page at which point their entire backstory is displayed on a nearby smartphone, allowing you to refresh your memory before moving forward.

This may sound like science fiction, but researchers at the University of Surrey in England say that the technology described above is already here in the form of “a-books” (augmented reality books).

The potential use-cases for such a technology are virtually endless. As previously mentioned, a-books could be used to deliver character details and plot points for a variety of fictional works. The same technology could also be applied to textbooks, allowing students to display helpful information on their smartphones, tablets, and smart TVs with the swipe of a finger.

From DSC:

  • How might instructional designers use this capability?
  • How about those in theatre/drama?
  • Educational gaming?
  • Digital storytelling?
  • Interaction design?
  • Interface design?
  • User experience design?

Also see:


 

Airbnb’s design for employees to live and work anywhere — from news.airbnb.com; with thanks to Tom Barrett for this resource

Excerpt:

Airbnb is in the business of human connection above all else, and we believe that the most meaningful connections happen in person. Zoom is great for maintaining relationships, but it’s not the best way to deepen them. Additionally, some creative work and collaboration is best done when you’re in the same room. I’d like working at Airbnb to feel like you’re working at one of the most creative places on Earth, and this will only happen with some in-person collaboration time.

The right solution should combine the best of the digital world and the best of the physical world. It should have the efficiency of Zoom, while providing the meaningful human connection that only happens when people come together. We have a solution that we think combines the best of both worlds.

We’ve designed a way for you to live and work anywhere—while collaborating in a highly coordinated way, and experiencing the in-person connection that makes Airbnb special. Our design has five key features…

Now, a thought exercise on that item from Tom Barrett:

While you are there, extend the thought experiment and imagine the new policy for a school, college or university.

  1. You can work from home or the office
  2. You can move anywhere in the country you work in, and your compensation won’t change
  3. You have the flexibility to travel and work around the world
  4. We’ll meet up regularly for team gatherings, off-sites, and social events
  5. We’ll continue to work in a highly coordinated way

From DSC:
As a reflection on this thought experiment, this graphic comes to my mind again. Teachers, professors, trainers, staff, and students can be anywhere in the world:

Learning from the living class room

 

 

We need to use more tools — that go beyond screen sharing — where we can collaborate regardless of where we’re at. [Christian]

From DSC:
Seeing the functionality in Freehand — it makes me once again think that we need to use more tools where faculty/staff/students can collaborate with each other REGARDLESS of where they’re coming in to partake in a learning experience (i.e., remotely or physically/locally). This is also true for trainers and employees, teachers and students, as well as in virtual tutoring types of situations. We need tools that offer functionalities that go beyond screen sharing in order to collaborate, design, present, discuss, and create things.  (more…)

 

Now we just need a “Likewise TV” for learning-related resources! [Christian]

Likewise TV Brings Curation to Streaming — from lifewire.com by Cesar Aroldo-Cadenas
And it’s available on iOS, Android, and some smart TVs

All your streaming services in one place. One search. One watchlist. Socially powered recommendations.

Entertainment startup Likewise has launched a new recommendations hub that pulls from all the different streaming platforms to give you personalized picks.

Likewise TV is a streaming hub powered by machine learning, people from the Likewise community, and other streaming services. The service aims to do away with mindlessly scrolling through a menu, looking for something to watch, or jumping from one app to another by providing a single location for recommendations.

Note that Likewise TV is purely an aggregator.


Also see:

Likewise TV -- All your streaming services in one place. One search. One watchlist. Socially powered recommendations.

 


From DSC:
Now we need this type of AI-based recommendation engine, aggregator, and service for learning-related resources!

I realize that we have a long way to go here — as a friend/former colleague of mine just reminded me that these recommendation engines often miss the mark. I’m just hoping that a recommendation engine like this could ingest our cloud-based learner profiles and our current goals and then present some promising learning-related possibilities for us. Especially if the following graphic is or will be the case in the future:


Learning from the living class room
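To make the idea concrete, here is a toy sketch of the kind of matching such an engine might do: scoring learning resources against a learner profile’s stated goals. Every name and data item below is invented for illustration; a real system would be vastly more sophisticated.

```python
# Toy illustration of goal-to-resource matching for a learning
# recommendation engine. All titles, tags, and goals are made up.

def recommend(profile_goals: set, resources: list, top_n: int = 2) -> list:
    """Rank resources by overlap between their tags and the learner's goals."""
    scored = [
        (len(profile_goals & set(r["tags"])), r["title"])
        for r in resources
    ]
    # Highest overlap first; break ties alphabetically.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [title for score, title in scored[:top_n] if score > 0]

resources = [
    {"title": "Intro to Data Visualization", "tags": ["python", "visualization"]},
    {"title": "Conversational Spanish", "tags": ["language", "spanish"]},
    {"title": "Python for Beginners", "tags": ["python", "programming"]},
]

picks = recommend({"python", "programming"}, resources)
```

Even this trivial version shows the shape of the problem: the interesting work is not the ranking itself but building trustworthy learner profiles and well-tagged resources for it to draw on.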


Also relevant/see:

From DSC:
Some interesting/noteworthy features:

  • “The 32-inch display has Wi-Fi capabilities to support multiple streaming services, can stream smartphone content, and comes with a removable SlimFit Cam.”
  • The M8 has Wi-Fi connectivity for its native streaming apps so you won’t have to connect to a computer to watch something on Netflix. And its Far Field Voice mic can be used w/ the Always On feature to control devices like Amazon Alexa with your voice, even if the monitor is off.
  • “You can also connect devices to the monitor via the SmartThings Hub, which can be tracked with the official SmartThings app.”

I wonder how what we call the TV (or television) will continue to morph in the future.


Addendum on 3/31/22 from DSC:
Perhaps people will co-create their learning playlists…as is now possible with Spotify’s “Blend” feature:

Today’s Blend update allows you to share your personal Spotify playlists with your entire group chat—up to 10 users. You can manually invite these friends and family members to join you from within the app, then Spotify will create a playlist for you all to listen to using a mixture of everyone’s music preferences. Spotify will also create a special share card that everyone in the group can use to save and share the created playlist in the future.


 

Holograms? Check! Now what? [Bieniek @ Webex]

Holograms? Check! Now what? — from blog.webex.com by Elizabeth Bieniek

Excerpt (emphasis DSC):

Two years ago, I wrote about the Future of Meetings in 2030 and hinted at an effort my team was building to make this a reality. Now, we have publicly unveiled Webex Hologram and brought the reality of a real-time, end-to-end holographic meeting solution to life.

With Webex Hologram, you can feel co-located with a colleague who is thousands of miles away. You can share real objects in incredible multi-dimensional detail and collaborate on 3D content to show perspective, share, and approve design changes in real-time, all from the comfort of your home workspace.

As the hype dies down, the focus on entirely virtual experiences in fanciful environments will abate and a resurgence in focus on augmented experiences—interjecting virtual content into the physical world around you for an enhanced experience that blends the best of physical and virtual—will emerge.

The ability to have curated information at one’s fingertips, still holds an incredible value prop that has yet to be realized. Applying AI to predict, find, and present this type of augmented information in both 2D and 3D formats will become incredibly useful. 

From DSC:
As I think of some of the categories that this posting about establishing a new kind of co-presence relates to, there are many relevant ones:

  • 21st century
  • 24x7x365
  • 3D
  • Audio/Visual (A/V)
  • Artificial Intelligence (AI)
  • Cloud-based
  • Collaboration/web-based collaboration
  • Intelligent tutoring
  • Law schools, legal, government
  • Learning, learning agents, learning ecosystems, Learning from the Living [Class] Room, learning spaces/hubs/pods
  • Libraries/librarians
  • K-12, higher education, corporate training
  • Metaverse
  • Online learning
  • Telelegal, telemedicine
  • Videoconferencing
  • Virtual courts, virtual tutoring, virtual field trips
  • Web3
 

Exploring Virtual Reality [VR] learning experiences in the classroom — from blog.neolms.com by Rachelle Dene Poth

Excerpt:

With the start of a new year, it is always a great time to explore new ideas or try some new methods that may be a bit different from what we have traditionally done. I always think it is a great opportunity to stretch ourselves professionally, especially after a break or during the spring months.

Finding ways to boost student engagement is important, and what I have found is that by using tools like Augmented Reality (AR) and Virtual Reality (VR), we can immerse students in unique and personalized learning experiences. The use of augmented and virtual reality has increased in K-12 and Higher Ed, especially during the past two years, as educators have sought new ways to facilitate learning and give students the chance to connect more with the content. The use of these technologies is increasing in the workplace, as well.

With all of these technologies, we now have endless opportunities to take learning beyond what has been a confined classroom “space” and access the entire world with the right devices.

 
© 2024 | Daniel Christian