A new immersive classroom uses AI and VR to teach Mandarin Chinese — from technologyreview.com by Karen Hao
Students will learn the language by ordering food or haggling with street vendors on a virtual Beijing street.

Excerpt:

Often the best way to learn a language is to immerse yourself in an environment where people speak it. The constant exposure, along with the pressure to communicate, helps you swiftly pick up and practice new vocabulary. But not everyone gets the opportunity to live or study abroad.

In a new collaboration with IBM Research, Rensselaer Polytechnic Institute (RPI), a university based in Troy, New York, now offers its students studying Chinese another option: a 360-degree virtual environment that teleports them to the busy streets of Beijing or a crowded Chinese restaurant. Students get to haggle with street vendors or order food, and the environment is equipped with different AI capabilities to respond to them in real time.

 

 

Microsoft’s new AI wants to help you crush your next presentation — from pcmag.com by Jake Leary
PowerPoint is receiving a slew of updates, including one that aims to help you improve your public speaking.

Excerpt:

Microsoft [on 6/18/19] announced several PowerPoint upgrades, the most notable of which is an artificial intelligence tool that aims to help you overcome pre-presentation jitters.

The Presenter Coach AI listens to you practice and offers real-time feedback on your pace, word choice, and more. It will, for instance, warn you if you’re using filler words like “umm” and “ahh,” profanities, non-inclusive language, or reading directly from your slides. At the end of your rehearsal, it provides a report with tips for future attempts. Presenter Coach arrives later this summer.

 

5G and the tactile internet: what really is it? — from techradar.com by Catherine Ellis
With 5G, we can go beyond audio and video, communicating through touch

Excerpt:

However, the speed and capacity of 5G also opens up a wealth of new opportunities with other connected devices, including real-time interaction in ways that have never been possible before.

One of the most exciting of these is tactile, or haptic communication – transmitting a physical sense of touch remotely.

 

Virtual reality helping those with developmental disabilities — from kivitv.com by Matt Sizemore
How VR is making autistic individuals more independent

Excerpts:

EAGLE, IDAHO — If you think modern day virtual reality is just for gaming, think again. New technology is helping those with developmental disabilities do more than ever before, and much of that can be found right in their own backyard.

“I see unlimited potential inside of their minds. I see us being able to unlock a certain person who can achieve things that we never thought could be done, and all of this could happen off of just exposing them to virtual reality,” said Smythe.

VR1 and the Autism XR Institute are constantly creating tools and ideas to help kids and adults with autism live a more independent life through virtual reality.

 

Also see:

Making a final wish comes true: Hospice expanding virtual reality therapy — from galioninquirer.com by Russell Kent

Excerpt:

ASHLAND — Hospice of North Central Ohio is extending its virtual reality therapy (VRT) in an effort to help Richland County hospice and palliative patients fulfill their last wishes, thanks to a $7,000 grant from the Robert and Esther Black Family Foundation Fund of The Richland County Foundation.

VRT uses video technology to generate realistic 360-degree, photographic or animated three-dimensional images, accompanied by sounds from the actual environment. When donning the headset and headphones, viewers are surrounded by visuals and sounds that give the impression of being physically present in the environment. Virtual reality therapy treatment allows patients to relive memories, return to places of emotional significance, or experience something or somewhere that they desire.

 

From DSC:
Pastors, what do you think of these ideas?

  • Summarize your key points and put them up on slides at the end of your sermons (and/or at discussion groups after service)
  • Summarize your key points and post them to your church’s website — including links to resources that you referenced in your sermons (books, devotionals, other)
  • Have an app that folks in your congregation could complete during the sermon (e.g., “fill in the blanks” for missing words or phrases). Or, if you’d prefer that your congregation not have their smartphones out, perhaps you could provide “quizzes” mid-week to assist with information recall (i.e., spaced repetition). That is, people would try to fill in the missing phrases and/or words mid-week, with answers immediately available if someone asked for them. (A rough sketch of such a quiz follows this list.)
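For what it’s worth, the fill-in-the-blank idea could be prototyped very simply. Below is a minimal, purely hypothetical sketch (not tied to any particular app or church software) that turns a few key sentences from a sermon into cloze-style questions and schedules them for mid-week review; the example sentences, function names, and the three-day review delay are all my own assumptions.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ClozeItem:
    prompt: str      # the sentence with a key word blanked out
    answer: str      # the hidden word or phrase
    review_on: date  # when to surface the mid-week quiz question

def make_cloze(sentence: str, hide: str, days_until_review: int = 3) -> ClozeItem:
    """Blank out one key word/phrase and schedule it for a mid-week review."""
    return ClozeItem(prompt=sentence.replace(hide, "_____"),
                     answer=hide,
                     review_on=date.today() + timedelta(days=days_until_review))

# Hypothetical key points a pastor might pull from Sunday's sermon
key_points = [
    ("Love your neighbor as yourself.", "neighbor"),
    ("Faith without works is dead.", "works"),
]

for item in (make_cloze(text, word) for text, word in key_points):
    print(f"{item.review_on}: {item.prompt}  (answer on request: {item.answer})")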

Along these lines…should there be more classes in seminary on learning theories and on pedagogy? Hmmm….an interesting thought.

 

Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.

 

 

From DSC:
First of all, an article:

The four definitive use cases for AR and VR in retail — from forbes.com by Nikki Baird

AR in retail

Excerpt (emphasis DSC):

AR is the go-to engagement method of choice when it comes to product and category exploration. A label on a product on a shelf can only do so much to convey product and brand information, vs. AR, which can easily tap into a wealth of digital information online and bring it to life as an overlay on a product or on the label itself.

 

From DSC:
Applying this concept to the academic world…what might this mean for a student in a chemistry class who has a mobile device and/or a pair of smart goggles on and is working with an Erlenmeyer flask? A burette? A Bunsen burner?

Along these lines…what if all of those confused students — like I was when struggling through chem lab — could see how an experiment was *supposed* to be done?

That is, if there’s only 30 minutes of lab time left, the professor or TA could “flip a switch” to turn on the AR cloud within the laboratory space to allow those struggling students to see how to do their experiment.

I can’t tell you how many times I was just trying to get through the lab — not knowing what I was doing, and getting zero help from any professor or TA. I hardly learned a thing that stuck with me…except the names of a few devices and the abbreviations of a few chemicals. For the most part, it was a waste of money. How many students experience this as well and feel like I did?

Will the terms “blended learning” and/or “hybrid learning” take on whole new dimensions with the onset of AR, MR, and VR-related learning experiences?

#IntelligentTutoring #IntelligentSystems #LearningExperiences
#AR #VR #MR #XR #ARCloud #AssistiveTechnologies
#Chemistry #BlendedLearning #HybridLearning #DigitalLearning

 

Also see:

 

“It is conceivable that we’re going to be moving into a world without screens, a world where [glasses are] your screen. You don’t need any more form factor than [that].”

(AT&T CEO)

 

 

Introducing several new ideas to provide personalized, customized learning experiences for all kinds of learners! [Christian]

From DSC:
I have often reflected on differentiation or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer achieve this, realistically?

It’s very difficult and time-consuming to do for sure. But it also requires a team of specialists to achieve such a holy grail of learning — as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. Move this work to the content-creation end — not so much the delivery end of things. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing them.</BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could instead be called <AlternativeExplanations>.
Bottom line: this tag would be used to express things in very straightforward terms.

Another tag could address how the topic/chapter is relevant:
<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to let them quickly see/review the different layers (a rough sketch of such an app follows this list). That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics, via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.
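Here is a minimal sketch of what such an app’s core logic might look like, using standard XML tooling. The tag names come from the example above; the wrapping <Chapter> root element, the layer names, and the parsing approach are my own assumptions for illustration only.

import xml.etree.ElementTree as ET

# A tagged chapter using the markup idea sketched above, wrapped in a
# hypothetical <Chapter> root so it parses as well-formed XML.
chapter_xml = """
<Chapter>
  <MainPoint>The text for the main point here.</MainPoint>
  <SubPoint1>The text for subpoint 1 here.</SubPoint1>
  <DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>
  <BasicsOfMainPoints>The main points, reworded in more basic terms.</BasicsOfMainPoints>
</Chapter>
"""

# Which tags make up each "layer" a learner could toggle on or off
LAYERS = {
    "main points only": ("MainPoint",),
    "add subpoints":    ("MainPoint", "SubPoint1"),
    "full detail":      ("MainPoint", "SubPoint1", "DetailsSubPoint1"),
    "just the basics":  ("BasicsOfMainPoints",),
}

def show_layer(xml_text: str, layer: str) -> list:
    """Return only the text of the elements that belong to the chosen layer."""
    root = ET.fromstring(xml_text)
    return [el.text.strip() for el in root if el.tag in LAYERS[layer] and el.text]

print(show_layer(chapter_xml, "main points only"))
print(show_layer(chapter_xml, "just the basics"))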

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy:

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), and so on, in order to further provide a personalized/customized learning experience. (A hypothetical sketch of this follows.)
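As a thought experiment only, the sketch below shows how such a profile might select a default layer of the tagged content; every field name and every selection rule here is an assumption of mine, not a description of any existing system.

from dataclasses import dataclass

@dataclass
class LearnerProfile:
    reading_level: int    # e.g., a grade-level equivalent
    has_iep: bool         # an IEP is on file
    preferred_media: str  # "text", "audio", "video", "animation"
    prior_knowledge: str  # "none", "some", "strong"

def default_layer(profile: LearnerProfile) -> str:
    """Pick a starting layer of the tagged chapter; the learner can still override it."""
    if profile.has_iep or profile.reading_level < 6:
        return "just the basics"   # maps to <BasicsOfMainPoints>
    if profile.prior_knowledge == "strong":
        return "full detail"       # maps to the <DetailsSubPoint...> layers
    return "main points only"      # maps to <MainPoint>

profile = LearnerProfile(reading_level=5, has_iep=True,
                         preferred_media="audio", prior_knowledge="some")
print(default_layer(profile))  # -> "just the basics"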

To recap:

  • “Textbooks” continue to be created by teams of specialists, but those teams add specialists with knowledge of students with special needs as well as of gifted students. For example, a team could include experts within the field of Special Education to help create one of the overlays/filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided at the delivery end, but again, the time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learner
  • Learners can see whole chapters, or just the main ideas, topic sentences, and/or details (much as HTML tags structure a web page)
  • Learners can instantly collapse chapters to main ideas/outlines

 

 

Affordable and at-scale — from insidehighered.com by Ray Schroeder
Affordable degrees at scale have arrived. The momentum behind this movement is undeniable, and its impact will be significant, Ray Schroeder writes.

Excerpt (emphasis DSC):

How many times have we been told that major change in our field is on the near horizon? Too many times, indeed.

The promises of technologies and practices have fallen short more often than not. Just seven years ago, I was part of the early MOOC movement and felt the pulsating potential of teaching thousands of students around the world in a single class. The “year of the MOOC” was declared in 2012. Three years later, skeptics declared that the MOOC had died an ignominious death with high “failure” rates and relatively little recognition by employers.

However, the skeptics were too impatient, misunderstood the nature of MOOCs and lacked the vision of those at Georgia Tech, the University of Illinois, Arizona State University, Coursera, edX and scores of other institutions that have persevered in building upon MOOCs’ premises to develop high-quality, affordable courses, certificates and now, degrees at scale.

No, these degrees are not free, but they are less than half the cost of on-campus versions. No, they are not massive in the hundreds of thousands, but they are certainly at large scale with many thousands enrolled. In computer science, the success is felt across the country.

 

Georgia Tech alone has enrolled 10,000 students overall in its online master’s program and is adding thousands of new students each semester in a top 10-ranked degree program costing less than $7,000. Georgia Tech broke the new ground through building collaborations among several partners. Yet, that was just the beginning, and many leading universities have followed.

 

 

Also see:

Trends for the future of education with Jeff Selingo — from steelcase.com
How the future of work and new technology will make place more important than ever.

Excerpt:

Selingo sees artificial intelligence and big data as game changers for higher education. He says AI can free up professors and advisors to spend more time with students by answering some more frequently-asked questions and handling some of the grading. He also says data can help us track and predict student performance to help them create better outcomes. “When they come in as a first-year student, we can say ‘People who came in like you that had similar high school grades and took similar classes ended up here. So, if you want to get out of here in four years and have a successful career, here are the different pathways you should follow.’”

 

 

 

NEW: The Top Tools for Learning 2018 [Jane Hart]

The Top Tools for Learning 2018 from the 12th Annual Digital Learning Tools Survey -- by Jane Hart

 

The above was from Jane’s posting 10 Trends for Digital Learning in 2018 — from modernworkplacelearning.com by Jane Hart

Excerpt:

[On 9/24/18], I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.

I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:

  • Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.
  • Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.
  • Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.

 

3 – Web courses are increasing in popularity.
Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no-one is going to tell them off – which is unlike most corporate online courses, which have a prescribed path through them and whose use is heavily monitored.

 

 

5 – Learning at work is becoming personal and continuous.
The most significant feature of the list this year is the huge leap up the list that Degreed has made – up 86 places to 47th place – the biggest increase by any tool this year. Degreed is a lifelong learning platform and provides the opportunity for individuals to own their expertise and development through a continuous learning approach. And, interestingly, Degreed appears both on the PPL100 (at 30) and WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this has also moved up the list significantly this year. Anders Pink is a smart curation platform available for both individuals and teams which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.

 

 

From DSC:
Perhaps some foreshadowing of the presence of a powerful, online-based, next generation learning platform…?

 

 

 

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”

This made me wonder about our learning spaces in the future. Will an #AI-based device or cloud-based software app be able to “listen” — in real time — to the discussion in a classroom and present helpful resources (i.e., websites, online databases, journal articles, and more) in the smart classroom of the future?

Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?

Hmmm…time will tell.
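Purely as a thought experiment, the core of such a feature might look something like the sketch below: a speech-to-text transcript is scanned for known topic keywords and matching resources are surfaced. The keyword catalog, the example URLs, and the simple matching approach are all my own assumptions; a real system would rely on far more sophisticated speech recognition and topic modeling.

# Hypothetical sketch: surface resources based on what is being discussed.
# Assumes an upstream speech-to-text service has already produced `transcript`.

RESOURCE_CATALOG = {
    "photosynthesis": ["https://example.edu/db/photosynthesis-overview",
                       "Journal article: 'Light reactions revisited' (placeholder)"],
    "erlenmeyer flask": ["https://example.edu/lab/glassware-basics"],
}

def suggest_resources(transcript: str) -> dict:
    """Return catalog entries whose topic keywords appear in the transcript."""
    text = transcript.lower()
    return {topic: links for topic, links in RESOURCE_CATALOG.items() if topic in text}

transcript = "Today we compared the light reactions of photosynthesis with respiration..."
for topic, links in suggest_resources(transcript).items():
    print(topic, "->", links)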

 

 


 

Also see this article out at Forbes.com entitled, “There’s Nothing Artificial About How AI Is Changing The Workplace.” 

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.

 

 

Assistive technology to help students with developmental delays succeed academically — from thetechedvocate.org by Matthew Lynch

Excerpt:

Developmental delays can affect almost every area of a child’s life. This broad issue can cover any possible milestone that a child doesn’t meet according to the expected timeline, including speech or movement. While children with developmental delays can still be successful, it will require some additional help from patient teachers. Educators would do well to research the available assistive technology that can help to bolster a child’s education and encourage academic success.

What tools are available to help students compensate for their developmental delays? Here are just a few of the top technologies that parents and teachers have found to be successful in the classroom.

 

 

Assistive technology to help students with articulation disorder succeed academically — from thetechedvocate.org by Matthew Lynch

Excerpt:

Some students encounter extraordinary challenges when it comes to forming the sounds of everyday communication. This may be due to a structural problem with the mouth or a motor-based issue. Collectively, these difficulties are considered to be articulation disorders. They can make classroom education extremely hard for both teachers and students. However, there are some ways that teachers can help students with articulation disorders still succeed academically.

If you want to help your student with an articulation disorder succeed, you will need some of the best assistive technology available. You can see the recommendations for the top assistive technologies used with this disorder below.

 

 

 

From DSC:
Why aren’t we further along with lecture recording within K-12 classrooms?

That is, I as a parent — or much better yet, our kids themselves who are still in K-12 — should be able to go online and access whatever talks/lectures/presentations were given on a particular day. When our daughter is sick and misses several days, wouldn’t it be great for her to be able to go online and see what she missed? Even if we had the time and/or the energy to do so (which we don’t), my wife and I couldn’t present this content to her very well. We would likely explain things differently — and perhaps incorrectly — thus potentially muddying the waters and causing more confusion for our daughter.

There should be entry-level recording studios — such as the One Button Studio from Penn State University — in each K-12 school for teachers to record their presentations. At the end of each day, the teacher could put a checkbox next to what he/she was able to cover that day. (No rushing intended here — education is enough of a runaway train oftentimes!) That material would then be made visible/available that day as links on an online calendar. Administrators should pay teachers extra money in the summer to record these presentations.

Also, students could use these studios to practice their presentation and communication skills. The process is quick and easy:

 

 

 

 

I’d like to see an option — ideally via a brief voice-driven Q&A at the start of each session — that asks the person where they want to put the recording when it’s done: to a thumb drive, to a previously assigned storage area out on the cloud/Internet, or to both destinations.

Providing automatically generated closed captioning would be a great feature here as well, especially for English as a Second Language (ESL) students.

 

 

 

From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
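For context on what “periodically schedule” could mean in practice, distributed practice is often implemented with expanding review intervals. The sketch below is a generic illustration of that idea only; the interval lengths are arbitrary assumptions, and nothing here uses any actual Alexa/Amazon API.

from datetime import date, timedelta

# Expanding intervals (in days) between practice tests on the same content.
# These particular values are arbitrary; a real system would tune them per learner.
INTERVALS = [1, 3, 7, 14, 30]

def practice_schedule(first_study: date) -> list:
    """Dates on which a learning bot could prompt a practice test."""
    schedule, day = [], first_study
    for gap in INTERVALS:
        day = day + timedelta(days=gap)
        schedule.append(day)
    return schedule

for when in practice_schedule(date(2019, 7, 1)):
    print("Quiz the learner on:", when)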

 

 

From page 45 of the PDF available here:

 

Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?

 

 

 