From DSC:
While reading the abstract of the article entitled “Does Telemedicine Reduce Emergency Room Congestion? Evidence from New York State,” I wondered again:

Will the growth of telemedicine/telehealth influence the growth of telelegal?

I think it will.

We show that, on average, telemedicine availability in the ER significantly reduces average patients’ length of stay (LOS), which is partially driven by the flexible resource allocation. Specifically, the adoption of telemedicine leads to a larger reduction in ER LOS when there is a demand surge or supply shortage.

Also see:

Holopatient Remote Uses AR Holograms For Hands-On Medical Training


Care over IP


Information re: virtual labs from the Online Learning Consortium


 7 Things You Should Know About Virtual Labs — from library.educause.edu

Excerpt:

Virtual labs are interactive, digital simulations of activities that typically take place in physical laboratory settings. Virtual labs simulate the tools, equipment, tests, and procedures used in chemistry, biochemistry, physics, biology, and other disciplines. Virtual labs allow students to participate in lab-based learning exercises without the costs and limitations of a physical lab. Virtual labs can be an important element in institutional efforts to expand access to lab-based courses to more and different groups of students, as well as efforts to establish contingency plans for natural disasters or other interruptions of campus activities.



Addendum on 8/27/20:

“Existing meeting interfaces had been designed with a singular goal, to simply enable virtual conversations. How could we build a meeting interface from the ground-up that intentionally facilitates engaging, productive, and inclusive conversations?”


What will tools like Macro.io bring to the online-based learning table?!

Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer more than an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year or early next year, I can easily see massive investments and interest in learning-related innovations. Too many people and organizations will have an interest in such innovations for that not to happen.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

Learning from the living class room

In the future, we may be using MR to walk around data and to better visualize data


AR and VR -- the future of healthcare

10 ways COVID-19 could change office design — from weforum.org by Harry Kretchmer

“Think road markings, but for offices. From squash-court-style lines in lobbies to standing spots in lifts, and from circles around desks to lanes in corridors, the floors and walls of our offices are likely to be covered in visual instructions.”


From DSC:
After reading the above article and quote, I wondered: rather than marking up the floors and walls of spaces, perhaps Augmented Reality (AR) will provide such visual instructions for navigating office spaces in the future. Then I wondered about other spaces, such as:

  • Learning spaces
  • Gyms
  • Retail outlets 
  • Grocery stores
  • Restaurants
  • Small businesses
  • Other common/public spaces

5 good tools to create whiteboard animations — from educatorstechnology.com

Excerpt:

In short, whiteboard animation (also called video scribing or animated doodling) is a video clip in which the recorder records the process of drawing on a whiteboard while using audio comment. The final result is a beautiful synchronization of the drawings and the audio feedback. In education, whiteboard animation videos  are used in language teaching/learning, in professional development sessions, to create educational tutorials and presentations and many more. In today’s post, we are sharing with you some good web tools you can use to create whiteboard animation videos.

45% of ORs will be integrated with artificial intelligence by 2022 — from healthitanalytics.com by Jessica Kent
Operating rooms will become infused with artificial intelligence in the coming years, with interoperability and partnerships fueling growth.

Excerpt:

Thirty-five percent to 45 percent of operating rooms (ORs) in the US and beyond will become integrated with artificial intelligence and virtual reality technologies by 2022, according to a recent Frost & Sullivan analysis.

AI, virtual reality, and other advanced tools will enable ORs to use intelligent and efficient delivery options to improve care precision. Robotic-assisted surgery devices (RASDs) will play a key role in driving the $4.5 billion US and European hospital and OR products and solutions market to $7.04 billion by 2022, the analysis said.


Data visualization via VR and AR: How we’ll interact with tomorrow’s data — from zdnet.com by Greg Nichols

Excerpt:

But what if there was a way to visualize huge data sets that instantly revealed important trends and patterns? What if you could interact with the data, move it around, literally walk around it? That’s one of the lesser talked about promises of mixed reality. If developers can deliver on the promise, it just may be one of the most important enterprise applications of those emerging technologies, as well.


In the future, we may be using MR to walk around data and to better visualize data


Also see:

Philips, Microsoft Unveils Augmented Reality Concept for Operating Room of the Future — from hitconsultant.net by Fred Pennic

Excerpt:

Health technology company Philips unveiled a unique mixed reality concept developed together with Microsoft Corp. for the operating room of the future. Based on the state-of-the-art technologies of Philips’ Azurion image-guided therapy platform and Microsoft’s HoloLens 2 holographic computing platform, the companies will showcase novel augmented reality applications for image-guided minimally invasive therapies.

From DSC:
Thanks to Mike Matthews for posting this item on LinkedIn.

From DSC:
I have often reflected on differentiation, or what some call personalized and/or customized learning. How does a busy teacher, instructor, professor, or trainer realistically achieve this?

It’s very difficult and time-consuming to do, for sure. It also requires a team of specialists to achieve such a holy grail of learning, as one person can’t know it all. That is, one educator doesn’t have the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities, from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches, etc. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty, and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how does a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. That is, move the work to the content-creation end, not the delivery end. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>


<BasicsOfMainPoints> could instead be called <AlternativeExplanations>.
Bottom line: this tag would present the material in very straightforward terms.

Another tag could address how this topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>


On their end, students could use an app that works with such tags, allowing a learner to quickly see/review the different layers. That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.

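To make the idea concrete, here is a minimal sketch (in Python, with invented sample content) of how such a reader app might filter the tagged text into the layered views listed above. The tag names are the hypothetical ones proposed in this post, not an existing standard.

```python
# A sketch of a "layered reading" app for the hypothetical tags proposed above.
# The chapter content and layer definitions are invented for illustration.
import re

CHAPTER = """
<MainPoint>Tennis is a racquet sport scored in games, sets, and matches.</MainPoint>
<SubPoint1>The forehand is the most common groundstroke.</SubPoint1>
<DetailsSubPoint1>Common forehand grips include the eastern and semi-western grips.</DetailsSubPoint1>
<SubPoint2>The backhand can be hit with one or two hands.</SubPoint2>
<DetailsSubPoint2>The two-handed backhand offers more stability on high balls.</DetailsSubPoint2>
<BasicsOfMainPoints>Tennis is a sport; a sport is something people play.</BasicsOfMainPoints>
"""

# Each view is defined by the tag patterns it shows.
LAYERS = {
    "main points": [r"MainPoint"],
    "sub points": [r"MainPoint", r"SubPoint\d+"],
    "details": [r"MainPoint", r"SubPoint\d+", r"DetailsSubPoint\d+"],
    "basics": [r"BasicsOfMainPoints"],
}

def render(chapter: str, layer: str) -> list[str]:
    """Return only the text chunks whose tag matches the requested layer."""
    chunks = []
    for tag, text in re.findall(r"<(\w+)>(.*?)</\1>", chapter, flags=re.S):
        if any(re.fullmatch(pattern, tag) for pattern in LAYERS[layer]):
            chunks.append(text.strip())
    return chunks

print(render(CHAPTER, "main points"))  # just the main point
print(render(CHAPTER, "basics"))       # the simplified rewording
```

A learner could then toggle between `render(chapter, "main points")` and the deeper layers, collapsing or expanding the text on demand.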

It’s like the layers of a Microsoft HoloLens app of the human anatomy:


Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed:


This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, their learning preferences (audio, video, animations, etc.), and so on, in order to further provide a personalized/customized learning experience.
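
Purely as an illustration of that idea, a sketch like the following shows how a system might map a learner profile to a default starting layer. The profile fields, thresholds, and rules here are all invented; a real system would draw on actual reading-level, IEP, and preference data.

```python
# Hypothetical sketch: pick a default content layer from a cloud-based
# learner profile. Fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    reading_level: int   # e.g., a grade-level equivalent
    has_iep: bool        # whether an IEP is on file
    prefers_media: str   # "text", "audio", "video", "animation", ...

def default_layer(profile: LearnerProfile, chapter_level: int) -> str:
    """Choose a starting layer; the learner can always override it."""
    if profile.has_iep or profile.reading_level < chapter_level - 2:
        return "basics"       # alternative, simplified wording
    if profile.reading_level >= chapter_level:
        return "details"      # full depth
    return "main points"      # middle ground
```

Under these invented rules, a learner reading well below the chapter’s level, or with an IEP on file, would start in the simplified layer by default.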

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as for gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/or filters/lenses — i.e., to reword things. If the text was talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided at the delivery end, but again, the time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Learners can see whole chapters, or just the main ideas, topic sentences, and/or details, much like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.

‘You can see what you can’t imagine’: Local students, professors helping make virtual reality a reality — from omaha.com and Creighton University

Excerpt:

“You can see what you can’t imagine,” said Aaron Herridge, a graduate student in Creighton’s medical physics master’s program and a RaD Lab intern who is helping develop the lab’s virtual reality program. “It’s an otherworldly experience,” Herridge says. “But that’s the great plus of virtual reality. It can take you places that you couldn’t possibly go in real life. And in physics, we always say that if you can’t visualize it, you can’t do the math. It’s going to be a huge educational leap.”


“We’re always looking for ways to help students get the real feeling for astronomy,” Gabel said. “Visualizing space from another planet, like Mars, or from Earth’s moon, is a unique experience that goes beyond pencil and paper or a two-dimensional photograph in a textbook.

BAE created a guided step-by-step training solution for HoloLens to teach workers how to assemble a green energy bus battery.

From DSC:
How long before items that need some assembly come with such experiences/training-related resources?

VR and AR: The Ethical Challenges Ahead — from er.educause.edu by Emory Craig and Maya Georgieva
Immersive technologies will raise new ethical challenges, from issues of access, privacy, consent, and harassment to future scenarios we are only now beginning to imagine.

Excerpt:

As immersive technologies become ever more realistic with graphics, haptic feedback, and social interactions that closely align with our natural experience, we foresee the ethical debates intensifying. What happens when the boundaries between the virtual and physical world are blurred? Will VR be a tool for escapism, violence, and propaganda? Or will it be used for social good, to foster empathy, and as a powerful new medium for learning?

Google Researchers Have Developed an Augmented Reality Microscope for Detecting Cancer — from next.reality.news by Tommy Palladino

Excerpt:

Augmented reality might not be able to cure cancer (yet), but when combined with a machine learning algorithm, it can help doctors diagnose the disease. Researchers at Google have developed an augmented reality microscope (ARM) that takes real-time data from a neural network trained to detect cancerous cells and displays it in the field of view of the pathologist viewing the images.

Sherwin-Williams Uses Augmented Reality to Take the Guesswork Out of Paint Color Selection — from next.reality.news by Tommy Palladino

From DSC:
I vote that we change the color we use to grade papers, whether on paper (hardcopy) or via digital/electronic annotations, from red to green. Why? Because here’s how I see the colors:

  • RED:
    • Failure. 
    • You got it wrong. Bad job.
    • Danger
    • Stop!
    • Can be internalized as, “I’m no good at (writing, math, social studies, science, etc.) and I’ll never be any good at it” (i.e., the fixed mindset: I was born this way and I can’t change things).
  • GREEN:
    • Growth
      • As in spring, flowers appearing, new leaves on the trees, new life
      • As in support of a growth mindset
      • It helps with more positive thoughts/internalized messages: I may have got it wrong, but I can use this as a teaching moment; this feedback helps me grow…it helps me identify my knowledge and/or skills gaps
    • Health
    • Go (not stop); i.e., keep going, keep learning
    • May help develop more of a love of learning (or at least foster more positive experiences with learning, vs. feeling threatened or personally put down)



From DSC:
So regardless of what was being displayed on any given screen at the time, once a learner was invited to use their device to share information, a graphical layer would appear on the learner’s mobile device, as well as on the image of the screens, letting him or her know what code to enter in order to wirelessly share their content up to a particular screen. (The actual images being projected on the screens would be shown in the background in a muted/pulled-back/25%-opacity layer, so that the code would “pop” visually.) This could be extra helpful when you have multiple screens in a room.

For folks at Microsoft: I could have said Mixed Reality here as well.



#ActiveLearning #AR #MR #IoT #AV #EdTech #M2M #MobileApps
#Sensors #Crestron #Extron #Projection #Epson #SharingContent #Wireless


© 2020 | Daniel Christian