Introducing several new ideas to provide personalized, customized learning experiences for all kinds of learners! [Christian]

From DSC:
I have often reflected on differentiation or what some call personalized learning and/or customized learning. How does a busy teacher, instructor, professor, or trainer achieve this, realistically?

It’s certainly difficult and time-consuming to do. But achieving such a holy grail of learning also requires a team of specialists — as one person can’t know it all. That is, no single educator has the necessary time, skills, or knowledge to address so many different learning needs and levels!

  • Think of different cognitive capabilities — from students who have special learning needs and challenges to gifted students
  • Or learners who have different physical capabilities or restrictions
  • Or learners who have different backgrounds and/or levels of prior knowledge
  • Etc., etc., etc.

Educators and trainers have so many things on their plates that it’s very difficult to come up with _X_ lesson plans/agendas/personalized approaches. On the other side of the table, how do students from a vast array of backgrounds and cognitive skill levels get the main points of a chapter or piece of text? How can they self-select the level of difficulty, and/or start at a “basics” level and work their way up to harder/more detailed levels if they can cognitively handle that level of detail/complexity? Conversely, how do I as a learner get the boiled-down version of a piece of text?

Well… just as with the flipped classroom approach, I’d like to suggest that we flip things a bit and enlist teams of specialists at the publishers to fulfill this need. That is, move the work to the content-creation end — not so much the delivery end. Publishers’ teams could play a significant, hugely helpful role in providing customized learning to learners.

Some of the ways that this could happen:

Use an HTML-like markup language when writing a textbook, such as:

<MainPoint> The text for the main point here. </MainPoint>

<SubPoint1>The text for the subpoint 1 here.</SubPoint1>

<DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>

<SubPoint2>The text for the subpoint 2 here.</SubPoint2>

<DetailsSubPoint2>More detailed information for subpoint 2 here.</DetailsSubPoint2>

<SubPoint3>The text for the subpoint 3 here.</SubPoint3>

<DetailsSubPoint3>More detailed information for subpoint 3 here.</DetailsSubPoint3>

<SummaryOfMainPoints>A list of the main points that a learner should walk away with.</SummaryOfMainPoints>

<BasicsOfMainPoints>Here is a listing of the main points, but put in alternative words and more basic ways of expressing those main points. </BasicsOfMainPoints>

<Conclusion> The text for the concluding comments here.</Conclusion>

 

<BasicsOfMainPoints> could instead be called <AlternativeExplanations>.
Bottom line: this tag would put things forth in very straightforward terms.

Another tag would address how this topic/chapter is relevant:

<RealWorldApplication>This short paragraph should illustrate real-world examples of this particular topic. Why does this topic matter? How is it relevant?</RealWorldApplication>

 

On the students’ end, they could use an app that works with such tags to let a learner quickly see/review the different layers (a rough sketch of such a filter follows the list below). That is:

  • Show me just the main points
  • Then add on the sub points
  • Then fill in the details
    OR
  • Just give me the basics via alternative ways of expressing these things. I won’t remember all the details. Put things in easy-to-understand wording/ideas.
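To make the idea concrete, here is a rough sketch of how such an app might pull out just the requested layers. The tag names come from the examples above; everything else (the regex-based parsing, the function names) is purely a hypothetical illustration:

```typescript
// A minimal, hypothetical reader for text written with the layer tags above.
// A learner picks which layers to display; the app extracts just those.

type Layer = "MainPoint" | "SubPoint" | "DetailsSubPoint" |
             "SummaryOfMainPoints" | "BasicsOfMainPoints" | "Conclusion";

// Extract every occurrence of a tag family; "SubPoint" matches
// <SubPoint1>, <SubPoint2>, etc. The backreference (\1) ensures the
// closing tag matches the opening one.
function extractLayer(chapter: string, layer: Layer): string[] {
  const re = new RegExp(`<(${layer}\\d*)>([\\s\\S]*?)</\\1>`, "g");
  return [...chapter.matchAll(re)].map((m) => m[2].trim());
}

// "Show me just the main points… then add on the subpoints… then the details."
function render(chapter: string, layers: Layer[]): string {
  return layers.flatMap((l) => extractLayer(chapter, l)).join("\n");
}

const chapter = `
  <MainPoint>The text for the main point here.</MainPoint>
  <SubPoint1>The text for subpoint 1 here.</SubPoint1>
  <DetailsSubPoint1>More detailed information for subpoint 1 here.</DetailsSubPoint1>
`;

console.log(render(chapter, ["MainPoint"]));             // main points only
console.log(render(chapter, ["MainPoint", "SubPoint"])); // one layer deeper
```

A real implementation would of course use a proper parser and preserve document order, but the collapse/expand idea itself really is that simple: filter by tag.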

 

It’s like the layers of a Microsoft HoloLens app of the human anatomy, where layers can be added or peeled away one at a time.

 

Or it’s like different layers of a chapter of a “textbook” — so a learner could quickly collapse/expand the text as needed.

 

This approach could be helpful at all kinds of learning levels. For example, it could be very helpful for law school students to obtain outlines for cases or for chapters of information. Similarly, it could be helpful for dental or medical school students to get the main points as well as detailed information.

Also, as Artificial Intelligence (AI) grows, the system could check a learner’s cloud-based learner profile to see their reading level, prior knowledge, any IEPs on file, and their learning preferences (audio, video, animations, etc.) in order to further provide a personalized/customized learning experience (see the sketch below).
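As a purely hypothetical sketch of that AI angle (the profile fields and thresholds below are invented for illustration, and the Layer type comes from the earlier sketch), the reader app could pick a default set of layers from the learner’s profile:

```typescript
// Hypothetical cloud-based learner profile; all field names are invented.
interface LearnerProfile {
  readingLevel: number;   // e.g., a grade-level equivalent
  priorKnowledge: "none" | "some" | "strong";
  hasIEP: boolean;
  preferredMedia: ("audio" | "video" | "animation" | "text")[];
}

// Choose which layers of the tagged text to show by default.
function defaultLayers(p: LearnerProfile): Layer[] {
  if (p.hasIEP || p.readingLevel < 6) {
    // Start with the plainly worded alternative explanations.
    return ["BasicsOfMainPoints"];
  }
  if (p.priorKnowledge === "strong") {
    // Jump straight to the full depth of the chapter.
    return ["MainPoint", "SubPoint", "DetailsSubPoint"];
  }
  return ["MainPoint", "SubPoint"]; // a middle ground to expand from
}
```

The learner could always override the default; the profile just sets the starting point.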

To recap:

  • “Textbooks” continue to be created by teams of specialists, but add specialists with knowledge of students with special needs as well as of gifted students. For example, a team could have experts within the field of Special Education to help create one of the overlays/filters/lenses — i.e., to reword things. If the text were talking about how to hit a backhand or a forehand, the alternative text layer could be summed up to say that tennis is a sport…and that a sport is something people play. On the other end of the spectrum, the text could dive deeply into the various grips a person could use to hit a forehand or backhand.
  • This puts the power of offering differentiation at the point of content creation/development (differentiation could also be provided for at the delivery end, but again, time and expertise are likely not going to be there)
  • Publishers create “overlays” or various layers that can be turned on or off by the learners
  • Can see whole chapters or can see main ideas, topic sentences, and/or details. Like HTML tags for web pages.
  • Can instantly collapse chapters to main ideas/outlines.

 

 

Skype chats are coming to Alexa devices — from engadget.com by Richard Lawler
Voice-controlled internet calls to or from any device with Amazon’s system in it.

Excerpt:

Aside from all of the Alexa-connected hardware, there’s one more big development coming for Amazon’s technology: integration with Skype. Microsoft and Amazon said that voice and video calls via the service will come to Alexa devices (including Microsoft’s Xbox One) with calls that you can start and control just by voice.

 

 

Amazon Hardware Event 2018
From techcrunch.com

 

Echo HomePod? Amazon wants you to build your own — by Brian Heater
One of the bigger surprises at today’s big Amazon event was something the company didn’t announce. After a couple of years of speculation that the company was working on its own version of the Home…

 

 

The long list of new Alexa devices Amazon announced at its hardware event
Everyone’s favorite trillion-dollar retailer hosted a private event today where they continued to…

 

Amazon introduces APL, a new design language for building Alexa skills for devices with screens
Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia…

Excerpt:

Using the new design language, called Alexa Presentation Language (APL), developers will be able to build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.
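Purely for illustration (this is not from the article): an APL document is JSON that a skill returns alongside its spoken response. The sketch below is written as a TypeScript object for readability; the general shape (a mainTemplate containing components such as Container, Text, and Image) follows APL’s documented structure, while the specific content is invented.

```typescript
// A minimal APL document (expressed as a TypeScript object; a skill
// returns it as JSON). Devices without screens simply ignore it.
const aplDocument = {
  type: "APL",
  version: "1.0",
  mainTemplate: {
    parameters: ["payload"],
    items: [
      {
        type: "Container",
        items: [
          { type: "Text", text: "Welcome to the demo skill" },
          // The same document can be restyled for a Fire TV, a Fire
          // Tablet, or the Echo Spot's small round screen.
          { type: "Image", source: "https://example.com/hero.png" },
        ],
      },
    ],
  },
};
```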

 

From DSC:
This is a great move by Amazon — as NLP and our voices become increasingly important in how we “drive” and utilize our computing devices.

 

 

Amazon launches an Echo Wall Clock, because Alexa is gonna be everywhere — by Sarah Perez

 

 

Amazon’s new Echo lineup targets Google, Apple and Sonos — from engadget.com by Nicole Lee
Alexa, dominate the industry.

The business plan from here is clear: Companies pay a premium to be activated when users pose questions related to their products and services. “How do you cook an egg?” could pull up a Food Network tutorial; “How far is Morocco?” could enable the Expedia app.

Also see how Alexa might be a key piece of smart classrooms in the future.
 

Microsoft’s AI-powered Sketch2Code builds websites and apps from drawings — from alphr.com by Bobby Hellard
Released on GitHub, Microsoft’s AI-powered developer tool can shave hours off web and app building

Excerpt:

Microsoft has developed an AI-powered web design tool capable of turning sketches of websites into functional HTML code.

Called Sketch2Code, Microsoft AI’s senior product manager Tara Shankar Jana explained that the tool aims to “empower every developer and every organisation to do more with AI”. It was born out of the “intrinsic” problem of sending a picture of a wireframe or app designs from whiteboard or paper to a designer to create HTML prototypes.

 

 

 

 

 

Adobe Announces the 2019 Release of Adobe Captivate, Introducing Virtual Reality for eLearning Design — from theblog.adobe.com

Excerpt:

  • Immersive learning with VR experiences: Design learning scenarios that your learners can experience in Virtual Reality using VR headsets. Import 360° media assets and add hotspots, quizzes and other interactive elements to engage your learners with near real-life scenarios
  • Interactive videos: Liven up demos and training videos by making them interactive with the new Adobe Captivate. Create your own or bring in existing YouTube videos, add questions at specific points and conduct knowledge checks to aid learner remediation
  • Fluid Boxes 2.0: Explore the building blocks of Smart eLearning design with intelligent containers that use white space optimally. Objects placed in Fluid Boxes get aligned automatically so that learners always get fully responsive experience regardless of their device or browser.
  • 360° learning experiences: Augment the learning landscape with 360° images and videos and convert them into interactive eLearning material with customizable overlay items such as information blurbs, audio content & quizzes.

 

 

Blippar unveils indoor visual positioning system to anchor AR — from martechtoday.com by Barry Levine
Employing machine vision to recognize mapped objects, the company says it can determine which way a user is looking and can calculate positioning down to a centimeter.

A Blippar visualization of AR using its new indoor visual positioning system

 

The Storyteller’s Guide to the Virtual Reality Audience — from medium.com by Katy Newton

Excerpt:

To even scratch the surface of these questions, we need to better understand the audience’s experience in VR — not just their experience of the technology, but the way that they understand story and their role within it.

 

 

Hospital introducing HoloLens augmented reality into the operating room — from medgadget.com

Excerpt:

HoloLens technology is being paired with Microsoft’s Surface Hub, a kind of digital whiteboard. The idea is that the surgical team can gather together around a Surface Hub to review patient information, discuss the details of a procedure, and select what information should be readily accessible during surgery. During the procedure, a surgeon wearing a HoloLens would be able to review a CT or MRI scan, access other data in the electronic medical records, and to be able to manipulate these so as to get a clear picture of what is being worked on and what needs to be done.

 

 

Raleigh Fire Department invests in virtual reality to enrich training — from vrfocus.com by Nikholai Koolonavi
New system allows department personnel to learn new skills through immersive experiences.

Excerpt:

The VR solution allows emergency medical services (EMS) personnel to dive into a rich and detailed environment in which they can pinpoint portions of the body to dissect. This then allows them to see each part of the body in great detail and view it from any angle. The goal is to allow users to gain the experience to diagnose injuries from a variety of vantage points, all while working within a virtual environment capable of displaying countless scenarios.

 

 

For another emerging technology, see:

Someday this tiny spider bot could perform surgery inside your body — from fastcompany.com by Jesus Diaz
The experimental robots could also fix airplane engines and find disaster victims.

Excerpt:

A team of Harvard University researchers recently achieved a major breakthrough in robotics, engineering a tiny spider robot using tech that could one day work inside your body to repair tissues or destroy tumors. Their work could not only change medicine–by eliminating invasive surgeries–but could also have an impact on everything from how industrial machines are maintained to how disaster victims are rescued.

Until now, most advanced, small-scale robots followed a certain model: They tend to be built at the centimeter scale and have only one degree of freedom, which means they can only perform one movement. Not so with this new ‘bot, developed by scientists at Harvard’s Wyss Institute for Biologically Inspired Engineering, the John A. Paulson School of Engineering and Applied Sciences, and Boston University. It’s built at the millimeter scale, and because it’s made of flexible materials–easily moved by pneumatic and hydraulic power–the critter has an unprecedented 18 degrees of freedom.

 


Plus some items from a few weeks ago


 

After almost a decade and billions in outside investment, Magic Leap’s first product is finally on sale for $2,295. Here’s what it’s like.

Excerpts (emphasis DSC):

I liked that it gave a new perspective to the video clip I’d watched: It threw the actual game up on the wall alongside the kind of information a basketball fan would want, including 3-D renderings and stats. Today, you might turn to your phone for that information. With Magic Leap, you wouldn’t have to.

Abovitz also said that intelligent assistants will play a big role in Magic Leap’s future. I didn’t get to test one, but Abovitz says he’s working with a team in Los Angeles that’s developing high-definition people that will appear to Magic Leap users and assist with tasks. Think Siri, Alexa or Google Assistant, but instead of speaking to your phone, you’d be speaking to a realistic-looking human through Magic Leap. Or you might be speaking to an avatar of someone real.

“You might need a doctor who can come to you,” Abovitz said. “AI that appears in front of you can give you eye contact and empathy.”

 

And I loved the idea of being able to place a digital TV screen anywhere I wanted.

 

 

Magic Leap One Available For Purchase, Starting At $2,295 — from vrscout.com by Kyle Melnick

Excerpt:

In December of last year, U.S. startup Magic Leap unveiled its long-awaited mixed reality headset, a secretive device five years and $2.44B USD in the making.

This morning that same headset, now referred to as the Magic Leap One Creator Edition, became available for purchase in the U.S. On sale to creators at a hefty starting price of $2,295, the spatial computing device utilizes synthetic lightfields to capture natural lightwaves and superimpose interactive, 3D content over the real world.

 

 

 

Magic Leap One First Hands-On Impressions for HoloLens Developers — from magic-leap.reality.news

Excerpt:

After spending about an hour with the headset running through setup and poking around its UI and a couple of the launch day apps, I thought it would be helpful to share a quick list of some of my first impressions as someone who’s spent a lot of time with a HoloLens over the past couple of years, and to try to start answering many of the burning questions I’ve had about the device.

 

 

World Campus researches effectiveness of VR headsets and video in online classes — from news.psu.edu

Excerpt:

UNIVERSITY PARK, Pa. — Penn State instructional designers are researching whether using virtual reality and 360-degree video can help students in online classes learn more effectively.

Designers worked with professors in the College of Nursing to incorporate 360-degree video into Nursing 352, a class on Advanced Health Assessment. Students in the class, offered online through Penn State World Campus, received free VR headsets to use with their smartphones to create a more immersive experience while watching the video, which shows safety and health hazards in a patient’s home.

Bill Egan, the lead designer for the Penn State World Campus RN to BSN nursing program, said students in the class were surveyed as part of a study approved by the Institutional Review Board and overwhelmingly said that they enjoyed the videos and thought they provided educational value. Eighty percent of the students said they would like to see more immersive content such as 360-degree videos in their online courses, he said.

 

 

7 Practical Problems with VR for eLearning — from learnupon.com

Excerpt:

In this post, we run through some practical stumbling blocks that prevent VR training from being feasible for most.

There are quite a number of practical considerations which prevent VR from totally overhauling the corporate training world. Some are obvious, whilst others only become apparent after using the technology a number of times. It’s important to be made aware of these limitations so that a large investment isn’t made in tech that isn’t really practical for corporate training.

 

Augmented reality – the next big thing for HR? — from hrdconnect.com
Augmented reality (AR) could have a huge impact on HR, transforming long-established processes into something engaging and exciting. What will this look like? How can we shape this into our everyday working lives?

Excerpt (emphasis DSC):

AR also has the potential to revolutionise our work lives, changing the way we think about office spaces and equipment forever.

Most of us still commute to an office every day, which can be a time-consuming and stressful experience. AR has the potential to turn any space into your own customisable workspace, complete with digital notes, folders and files – even a digital photo of your loved ones. This would give you access to all the information and tools that you would typically find in an office, but wherever and whenever you need them.

And instead of working on a flat, stationary, two-dimensional screen, your workspace would be a customisable three-dimensional space, where objects and information are manipulated with gestures rather than hardware. All you would need is an AR headset.

AR could also transform the way we advertise brands and share information. Imagine if your organisation had an AR stand at a conference – how engaging would that be for potential customers? How much more interesting and fun would meetings be if we used AR to present information instead of slides on a projector?

AR could transform the on-boarding experience into something fun and interactive – imagine taking an AR tour of your office, where information about key places, company history or your new colleagues pops into view as you go from place to place. 

 

 

RETINA Are Bringing Augmented Reality To Air Traffic Control Towers — from vrfocus.com by Nikholai Koolonavi

Excerpt:

A new project is aiming to make it easier for staff in airport control towers to visualize information to help make their job easier by leveraging augmented reality (AR) technology. The project, dubbed RETINA, is looking to modernise Europe’s air traffic management for safer, smarter and even smoother air travel.

 

 

 

25 skills LinkedIn says are most likely to get you hired in 2018 — and the online courses to get them — from businessinsider.com by Mara Leighton

Excerpt:

With the introduction of far-reaching and robust technology, the job market has experienced its own exponential growth, adaptation, and semi-metamorphosis. So much so that it can be difficult to guess what skills employers are looking for and what makes your résumé — and not another — stand out to recruiters.

Thankfully, LinkedIn created a 2018 “roadmap”— a list of hard and soft skills that companies need the most.

LinkedIn used data from their 500+ million members to identify the skills companies are currently working the hardest to fill. They grouped the skills members add to their profiles into several dozen categories (for example, “Android” and “iOS” into the “Mobile Development” category). Then, the company looked at all of the hiring and recruiting activity that happened on LinkedIn between January 1 and September 1 (billions of data points) and extrapolated the skill categories that belonged to members who were “more likely to start a new role within a company and receive interest from companies.”

LinkedIn then coupled those specific skills with related jobs and their average US salaries — all of which you can find below, alongside courses you can take (for free or for much less than the cost of a degree) to support claims of aptitude and stay ahead of the curve.

The online-learning options we included — LinkedIn Learning, Udemy, Coursera, and edX — are among the most popular and inexpensive.

 

 


 

 

 

Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants making our lives even easier.

 

 

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

 

 

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview bringing the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

 

 

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

 

 

Can we design online learning platforms that feel more intimate than massive? — from edsurge.com by Amy Ahearn

Excerpt:

This presents a challenge and an opportunity: How can we design online learning environments that achieve scale and intimacy? How do we make digital platforms feel as inviting as well-designed physical classrooms?

The answer may be that we need to balance massiveness with miniaturization. If the first wave of MOOCs was about granting unprecedented numbers of students access to high-quality teaching and learning materials, Wave 2 needs to focus on creating a sense of intimacy within that massiveness.

We need to be building platforms that look less like a cavernous stadium and more like a honeycomb. This means giving people small chambers of engagement where they can interact with smaller, more manageable, and yet still diverse groups. We can’t meaningfully listen to the deafening roar of the internet. But we can learn from a collection of people with perspectives different than ours.

 

 

What will it take to get MOOC platforms to begin to offer learning spaces that feel more inviting and intimate? Perhaps there’s a new role that needs to emerge in the online learning ecosystem: a “learning architect” who sits between the engineers and the instructional designers.

 

 

 

 

 

 

Computing in the Camera — from blog.torch3d.com by Paul Reynolds
Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.

One of the points Allison [Woods, CEO, Camera IQ] made repeatedly on that call (and in this wonderful blog post of the same time period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it. Simple, obvious, straightforward, but not earth shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound or prophetic that statement turned out to be.

 

“[T]he camera will bring the internet and the real world into a single time and space.”

— Allison Woods, CEO, Camera IQ

 

 

The Camera As Platform — from shift.newco.co by Allison Wood
When the operating system moves to the viewfinder, the world will literally change

“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”

– Jacob Mullins, Shasta Ventures

 

 

 

The State Of The ARt At AWE 18 — from forbes.com by Charlie Fink

Excerpt:

The bigger story, however, is how fast the enterprise segment is growing as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Use cases abounded.

After traveling the country and most recently to Europe, I’ve now experienced almost every major VR/AR/MR/XR related conference out there. AWE’s exhibit area was by far the largest display of VR and AR companies to date (with the exception of CES).

 

AR is being used to identify features and parts within cars

 

 

 

 

Student Learning and Virtual Reality: The Embodied Experience — from er.educause.edu by Jaime Hannans, Jill Leafstedt and Talya Drescher

Excerpts:

Specifically, we explored the potential for how virtual reality can help create a more empathetic nurse, which, we hypothesize, will lead to increased development of nursing students’ knowledge, skills, and attitudes. We aim to integrate these virtual experiences into early program coursework, with the intent of changing nursing behavior by providing a deeper understanding of the patient’s perspective during clinical interactions.

In addition to these compelling student reflections and the nearly immediate change in reporting practice, survey findings show that students unanimously felt that this type of patient-perspective VR experience should be integrated and become a staple of the nursing curriculum. Seeing, hearing, and feeling these moments results in significant and memorable learning experiences compared to traditional classroom learning alone. The potential that this type of immersive experience can have in the field of nursing and beyond is only limited by the imagination and creation of other virtual experiences to explore. We look forward to continued exploration of the impact of VR on student learning and to establishing ongoing partnerships with developers.

 


 

 

 

‘You can see what you can’t imagine’: Local students, professors helping make virtual reality a reality — from omaha.com and Creighton University

Excerpt:

“You can see what you can’t imagine,” said Aaron Herridge, a graduate student in Creighton’s medical physics master’s program and a RaD Lab intern who is helping develop the lab’s virtual reality program. “It’s an otherworldly experience,” Herridge says. “But that’s the great plus of virtual reality. It can take you places that you couldn’t possibly go in real life. And in physics, we always say that if you can’t visualize it, you can’t do the math. It’s going to be a huge educational leap.”

 

“We’re always looking for ways to help students get the real feeling for astronomy,” Gabel said. “Visualizing space from another planet, like Mars, or from Earth’s moon, is a unique experience that goes beyond pencil and paper or a two-dimensional photograph in a textbook.”

 

 

BAE created a guided step-by-step training solution for HoloLens to teach workers how to assemble a green energy bus battery.

From DSC:
How long before items that need some assembling come with such experiences/training-related resources?

 

 

 

VR and AR: The Ethical Challenges Ahead — from er.educause.edu by Emory Craig and Maya Georgieva
Immersive technologies will raise new ethical challenges, from issues of access, privacy, consent, and harassment to future scenarios we are only now beginning to imagine.

Excerpt:

As immersive technologies become ever more realistic with graphics, haptic feedback, and social interactions that closely align with our natural experience, we foresee the ethical debates intensifying. What happens when the boundaries between the virtual and physical world are blurred? Will VR be a tool for escapism, violence, and propaganda? Or will it be used for social good, to foster empathy, and as a powerful new medium for learning?

 

 

Google Researchers Have Developed an Augmented Reality Microscope for Detecting Cancer — from next.reality.news by Tommy Palladino

Excerpt:

Augmented reality might not be able to cure cancer (yet), but when combined with a machine learning algorithm, it can help doctors diagnose the disease. Researchers at Google have developed an augmented reality microscope (ARM) that takes real-time data from a neural network trained to detect cancerous cells and displays it in the field of view of the pathologist viewing the images.

 

 

Sherwin-Williams Uses Augmented Reality to Take the Guesswork Out of Paint Color Selection — from next.reality.news by Tommy Palladino

 

 

 

 

 

 


 


From DSC:
So regardless of what was being displayed on any given screen at the time, once learners were invited to use their devices to share information, a graphical layer would appear on each learner’s mobile device — as well as on the screens themselves (with the images currently being projected pulled back into a muted, roughly 25%-opacity background layer so the code would visually “pop”) — letting each person know what code to enter in order to wirelessly share their content to a particular screen. This could be extra helpful when you have multiple screens in a room.
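To sketch the pairing mechanics (every name and number below is hypothetical): each screen in the room advertises a short-lived code, and entering that code on a mobile device claims that screen.

```typescript
// Hypothetical pairing flow for multi-screen wireless sharing: each
// screen overlays a short-lived code; typing that code into the room's
// mobile app routes the learner's content to the matching screen.

interface RoomScreen {
  id: string;
  code: string;
  expiresAt: number; // epoch milliseconds
}

const CODE_TTL_MS = 60_000; // rotate codes every minute

function issueCode(screenId: string): RoomScreen {
  const code = Math.floor(100000 + Math.random() * 900000).toString(); // 6 digits
  return { id: screenId, code, expiresAt: Date.now() + CODE_TTL_MS };
}

// Called when a learner enters a code on their device.
function claimScreen(screens: RoomScreen[], entered: string): string | null {
  const hit = screens.find(
    (s) => s.code === entered && s.expiresAt > Date.now()
  );
  return hit ? hit.id : null; // route this learner's stream to that screen
}
```

Rotating the codes keeps a stale code from granting access later, which matters once many screens and many learners share one room.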

For folks at Microsoft: I could have said Mixed Reality here as well.


 

#ActiveLearning #AR #MR #IoT #AV #EdTech #M2M #MobileApps
#Sensors #Crestron #Extron #Projection #Epson #SharingContent #Wireless

 

 

From DSC:
This application looks to be very well done and thought out! Wow!

Check out the video entitled “Interactive Ink – Enables digital handwriting” — and you may also wonder whether this could be a great medium/method for “writing things down” for better information processing in our minds, while also producing digital work for easier distribution and sharing!

Wow! Talk about solid user experience design and interface design! Nicely done.

 

 

Below is an excerpt of the information from Bella Pietsch of anthonyBarnum Public Relations:

Imagine a world where users interact with their digital devices seamlessly, and don’t suffer from lag and delayed response time. I work with MyScript, a company whose Interactive Ink tech creates that world of seamless handwritten interactivity by combining the flexibility of pen and paper with the power and productivity of digital processing.

According to a recent forecast, the global handwriting recognition market is valued at a trillion-plus dollars and is expected to grow at an almost 16 percent compound annual growth rate by 2025. To add additional context, the new affordable iPad with stylus support was just released, allowing users to work with the $99 Apple Pencil, which was previously only supported by the iPad Pro.

Check out the demo of Interactive Ink using an Apple Pencil, Microsoft Surface Pen, Samsung S Pen or Google Pixelbook Pen here.

Interactive Ink’s proficiencies are the future of writing and equating. Developed by MyScript Labs, Interactive Ink is a form of digital ink technology that allows ink editing via simple gestures while providing device reflow flexibility. Interactive Ink relies on real-time predictive handwriting recognition, driven by artificial intelligence and neural network architectures.

 

 

 

 

Design Thinking: A Quick Overview — from interaction-design.org by Rikke Dam and Teo Siang

Excerpt:

To begin, let’s have a quick overview of the fundamental principles behind Design Thinking:

  • Design Thinking starts with empathy, a deep human focus, in order to gain insights which may reveal new and unexplored ways of seeing, and courses of action to follow in bringing about preferred situations for business and society.
  • It involves reframing the perceived problem or challenge at hand, and gaining perspectives which allow a more holistic look at the path towards these preferred situations.
  • It encourages collaborative, multi-disciplinary teamwork to leverage the skills, personalities and thinking styles of many in order to solve multifaceted problems.
  • It initially employs divergent styles of thinking to explore as many possibilities as possible, deferring judgment and creating an open ideation space to allow for the maximum number of ideas and points of view to surface.
  • It later employs convergent styles of thinking to isolate potential solution streams, combining and refining insights and more mature ideas, which pave a path forward.
  • It engages in early exploration of selected ideas, rapidly modelling potential solutions to encourage learning while doing and to allow for gaining additional insight into the viability of solutions before too much time or money has been spent.
  • It tests the prototypes that survive the process further to remove any potential issues.
  • It iterates through the various stages, revisiting empathetic frames of mind and then redefining the challenge as new knowledge and insight are gained along the way.
  • It starts off chaotic and cloudy, steamrolling towards points of clarity until a desirable, feasible and viable solution emerges.

 

 

From DSC:
This post includes information about popular design thinking frameworks. I think it’s a helpful posting for those who have heard about design thinking but want to know more about it.

 

 

What is Design Thinking?
Design thinking is an iterative process in which we seek to understand the user, challenge assumptions we might have, and redefine problems in an attempt to identify alternative strategies and solutions that might not be instantly apparent with our initial level of understanding. As such, design thinking is most useful in tackling problems that are ill-defined or unknown.

Design thinking is extremely useful in tackling ill-defined or unknown problems—it reframes the problem in human-centric ways, allows the creation of many ideas in brainstorming sessions, and lets us adopt a hands-on approach in prototyping and testing. Design thinking also involves on-going experimentation: sketching, prototyping, testing, and trying out concepts and ideas. It involves five phases: Empathize, Define, Ideate, Prototype, and Test. The phases allow us to gain a deep understanding of users, critically examine the assumptions about the problem and define a concrete problem statement, generate ideas for tackling the problem, and then create prototypes for the ideas in order to test their effectiveness.

Design thinking is not about graphic design but rather about solving problems through the use of design. It is a critical skill for all professionals, not only designers. Understanding how to approach problems and apply design thinking enables everyone to maximize their contributions in the work environment and create incredible, memorable products for users.

 

 

 

 

Embracing Digital Tools of the Millennial Trade. — from virtuallyinspired.org

Excerpt:

Thus, millennials are well-acquainted with – if not highly dependent on – the digital tools they use in their personal and professional lives. Tools that empower them to connect and collaborate in a way that is immediate and efficient, interactive and self-directed. Which is why they expect technology-enhanced education to replicate this user experience in the virtual classroom. And when their expectations fall short or go unmet altogether, millennials are more likely to go in search of other alternatives.

 

 

From DSC:
There are several solid tools mentioned in this article, and I always appreciate the high-level of innovation arising from Susan Aldridge, Marci Powell, and the folks at virtuallyinspired.org.

After reading the article, the key considerations that come to my mind involve the topics of usability and advocating for the students’ perspective. That is, we need to approach things from the student’s/learner’s standpoint — from a usability and user-experience perspective. For example, a seamless/single sign-on for each of these tools would be a requirement for implementing them. Otherwise, learners would have to be constantly logging into a variety of systems and services. Not only is that process time-consuming, but a learner would need to keep track of additional passwords — and who doesn’t have enough of those to keep track of these days? (I realize there are tools for that, but even those tools require additional time to investigate, set up, and maintain.)

So what’s needed here are plug-ins for the various CMSs/LMSs that allow for a nice plug-and-play experience.

 

 

Virtual reality technology enters a Chinese courtroom — from supchina.com by Jiayun Feng

Excerpt:

The introduction of VR technology is part of a “courtroom evidence visualization system” developed by the local court. The system also includes a newly developed computer program that allows lawyers to present evidence with higher quality and efficiency, which will replace a traditional PowerPoint slideshow.

It is reported that the system will soon be implemented in courtrooms across the city of Beijing.

 

 

 

Watch Waymo’s Virtual-Reality View of the World — from spectrum.ieee.org by Philip Ross

From DSC:
This is mind-blowing. Now I see why Nvidia’s products/services are so valuable.

 

 

Along these same lines, also see this clip and/or this article entitled “This is why AR and Autonomous Driving are the Future of Cars.”

 

 

 

The Legal Hazards of Virtual Reality and Augmented Reality Apps — from spectrum.ieee.org by Tam Harbert
Liability and intellectual property issues are just two areas developers need to know about

Excerpt:

As virtual- and augmented-reality technologies mature, legal questions are emerging that could trip up VR and AR developers. One of the first lawyers to explore these questions is Robyn Chatwood, of the international law firm Dentons. “VR and AR are areas where the law is just not keeping up with [technology] developments,” she says. IEEE Spectrum contributing editor Tam Harbert talked with Chatwood about the legal challenges.

 

 

 

This VR Tool Could Make Kids A Lot Less Scared Of Medical Procedures — from fastcompany.com by Daniel Terdiman
The new app creates a personalized, explorable 3D model of a kid’s own body that makes it much easier for them to understand what’s going on inside.

Excerpt:

A new virtual reality app that’s designed to help kids suffering from conditions like Crohn’s disease understand their maladies immerses those children in a cartoon-like virtual reality tour through their body.

Called HealthVoyager, the tool, a collaboration between Boston Children’s Hospital and the health-tech company Klick Health, is being launched today at an event featuring former First Lady Michelle Obama.

A lot of kids are confused by doctors’ intricate explanations of complex procedures like a colonoscopy, and they, and their families, can feel much more engaged, and satisfied, if they really understand what’s going on. But that’s been hard to do in a way that really works and doesn’t get bogged down with a lot of meaningless jargon.

 

 

Augmented Reality in Education — from invisible.toys

 

Star Chart -- AR and astronomy

 

 

The state of virtual reality — from furthermore.equinox.com by Rachael Schultz
How the latest advancements are optimizing performance, recovery, and injury prevention

Excerpt:

Virtual reality is increasingly used to enhance everything from museum exhibits to fitness classes. Elite athletes are using VR goggles to refine their skills, sports rehabilitation clinics are incorporating it into recovery regimes, and others are using it to improve focus and memory.

Here, some of the most exciting things happening with virtual reality, as well as what’s to come.

 

 

Augmented Reality takes 3-D printing to next level — from rtoz.org

Excerpt:

Cornell researchers are taking 3-D printing and 3-D modeling to a new level by using augmented reality (AR) to allow designers to design in physical space while a robotic arm rapidly prints the work. To use the Robotic Modeling Assistant (RoMA), a designer wears an AR headset with hand controllers. As soon as a design feature is completed, the robotic arm prints the new feature.

 

 

 

From DSC:
How might the types of technologies being developed and used by Kazendi’s Holomeeting be used for building/enhancing learning spaces?

 

 

 

 

AR and Blockchain: A Match Made in The AR Cloud — from medium.com by Ori Inbar

Excerpt:

In my introduction to the AR Cloud I argued that in order to reach mass adoption, AR experiences need to persist in the real world across space, time, and devices.

To achieve that, we will need a persistent realtime spatial map of the world that enables sharing and collaboration of AR Experiences among many users.

And according to AR industry insiders, it’s poised to become:

“the most important software infrastructure in computing”

aka: The AR Cloud.

 

 

 

 

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
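For context on what building “a skill for Alexa” involves: a custom skill is essentially a set of intent handlers plus a voice-interaction model. Below is a minimal sketch using the ASK SDK for Node.js; the intent name and the spoken text are invented for illustration, and this is not Helix’s actual code.

```typescript
// A minimal custom Alexa skill handler using the ASK SDK for Node.js.
// "NextStepIntent" and the response text are invented for illustration.
import * as Alexa from "ask-sdk-core";

const NextStepIntentHandler: Alexa.RequestHandler = {
  canHandle(input) {
    return (
      Alexa.getRequestType(input.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(input.requestEnvelope) === "NextStepIntent"
    );
  },
  handle(input) {
    // A real lab-assistant skill would look this up from the user's
    // saved protocol rather than hard-coding it.
    return input.responseBuilder
      .speak("Add twenty-five milliliters of titrant, then record the pH.")
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(NextStepIntentHandler)
  .lambda();
```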

 

Also see:

Helix

 

 
© 2024 | Daniel Christian