From DSC:
As we move into 2021, the blistering pace of emerging technologies will likely continue. Technologies such as:
- Artificial Intelligence (AI) — including technologies related to voice recognition
- Blockchain
- Augmented Reality (AR)/Mixed Reality (MR)/Virtual Reality (VR) and/or other forms of Extended Reality (XR)
- Robotics
- Machine-to-Machine Communications (M2M) / The Internet of Things (IoT)
- Drones
- …and others will likely make their way into how we do many things (for better or for worse).
Along the positive lines of this topic, I’ve been reflecting upon how we might be able to use AI in our learning experiences.
For example, when teaching in face-to-face classrooms — and when a lecture recording app like Panopto is being used — could teachers/professors/trainers audibly “insert” main points along the way? It would be similar to how we interact with Siri, Alexa, and other personal assistants (“Hey Siri, _____” or “Alexa, _____”).
Pretend a lecture, lesson, or training session is moving right along. Then the professor, teacher, or trainer says:
- “Hey Smart Classroom, Begin Main Point.”
- Then speaks one of the main points.
- Then says, “Hey Smart Classroom, End Main Point.”
Like a verbal version of an HTML tag.
After the recording is done, the AI could locate and call out those “main points” — and create a table of contents for that lecture, lesson, training session, or presentation.
(Alternatively, one could insert a chime/bell/some other sound that the AI scans through later to build the table of contents.)
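To make that post-processing step concrete, here is a minimal sketch in Python. It assumes the recording tool can already export a timestamped transcript (most lecture-capture and meeting platforms can) and simply scans that transcript for the spoken markers to build a table of contents. The transcript format, the marker phrasing, and the build_toc function below are illustrative assumptions, not any product's actual API.

```python
import re

# Hypothetical transcript export: one "[MM:SS] text" line per utterance.
TRANSCRIPT = """\
[00:02] Good morning, everyone. Let's pick up where we left off.
[04:15] Hey Smart Classroom, begin main point.
[04:18] Supply and demand set the market-clearing price.
[04:31] Hey Smart Classroom, end main point.
[17:40] Hey Smart Classroom, begin main point.
[17:43] Price ceilings below equilibrium create shortages.
[17:58] Hey Smart Classroom, end main point.
"""

LINE = re.compile(r"\[(\d{2}:\d{2})\]\s*(.+)")
BEGIN = re.compile(r"hey smart classroom,?\s*begin main point", re.I)
END = re.compile(r"hey smart classroom,?\s*end main point", re.I)


def build_toc(transcript: str):
    """Collect (timestamp, text) pairs spoken between the begin/end markers."""
    toc, capturing = [], False
    start_time, captured = None, []
    for raw_line in transcript.splitlines():
        match = LINE.match(raw_line)
        if not match:
            continue
        timestamp, text = match.groups()
        if BEGIN.search(text):
            # Remember when the main point started and begin capturing.
            capturing, start_time, captured = True, timestamp, []
        elif END.search(text):
            if captured:
                toc.append((start_time, " ".join(captured)))
            capturing = False
        elif capturing:
            captured.append(text)
    return toc


if __name__ == "__main__":
    # Prints a simple table of contents: each flagged main point with its timestamp.
    for timestamp, point in build_toc(TRANSCRIPT):
        print(f"{timestamp}  {point}")
```

Run against the sample transcript, this prints each main point next to the moment the professor flagged it, which is essentially the table of contents described above.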
In the digital realm — say when recording something via Zoom, Cisco Webex, Teams, or another application — the same thing could apply.
Wouldn’t this be great for quickly scanning podcasts for the main points? Or for quickly scanning presentations and webinars for the main points?
Anyway, interesting times lie ahead!