From DSC:
- We can now type in text to get graphics and artwork.
- We can now type in text to get videos.
- There are several tools that can generate transcripts of what was said during a presentation.
- We can search videos for spoken words and/or for the text shown on slides within a presentation (see the sketch after this list).
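To make the transcription and search items concrete, here is a minimal sketch assuming the open-source Whisper package; the file name and keyword are placeholders of my own, not anything from the tools mentioned below:

```python
# Transcribe a recorded presentation and search the spoken words for a
# keyword, printing the timestamps where it occurs.
# Assumes the open-source openai-whisper package (pip install openai-whisper;
# requires ffmpeg). "presentation.mp4" and the keyword are placeholders.
import whisper

model = whisper.load_model("base")             # small, general-purpose model
result = model.transcribe("presentation.mp4")  # extracts audio and transcribes it

keyword = "assessment"
for seg in result["segments"]:                 # each segment has start/end times and text
    if keyword.lower() in seg["text"].lower():
        print(f"{seg['start']:.1f}s - {seg['end']:.1f}s: {seg['text'].strip()}")
```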
Allie Miller’s post on LinkedIn (see below) pointed these things out as well, along with several others.
This raises some ideas/questions for me:
- What might the ramifications of these functionalities be for our learning ecosystems? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
- What’s said in a videoconference or a webinar can already be captured, translated, and transcribed (see the sketch after this list).
- The same goes for what’s said in a virtual courtroom, or in a telehealth appointment. Perhaps what we currently think of as a smart/connected TV will give us these functionalities as well.
- How might this type of thing impact storytelling?
- Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
- What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
- Will this kind of thing be standard in the next version of the Internet (Web3)?
- Will this help people with special needs, and in ways that go well beyond accessibility-related needs?
- Will data be the next input (instead of typed text)?
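On the captured/translated/transcribed point above, a companion sketch, again assuming the open-source Whisper package, whose translate task turns non-English speech directly into English text (the file name is a placeholder):

```python
# A sketch of "captured, translated, and transcribed": Whisper's translate
# task produces an English transcript of speech in (most) other languages.
# "webinar.mp4" is a placeholder file name.
import whisper

model = whisper.load_model("base")
result = model.transcribe("webinar.mp4", task="translate")
print(result["text"])  # the English translation of what was said
```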
Hmmm… interesting times ahead.