Apple CEO Tim Cook: AR Is “Critically Important” For The Company’s Future — from vrscout.com by Bobby Carlton

Excerpts:

When the subject of AR and its potential came up, Cook said, “You and I are having a great conversation right now. Arguably, it could even be better if we were able to augment our discussion with charts or other things to appear.”

In Cook’s opinion, AR will change the way we communicate with our friends, colleagues, and family. It’ll reshape communication in fields such as health, education, gaming, and retail. “I’m already seeing AR take off in some of these areas with use of the phone. And I think the promise is even greater in the future,” said Cook.

Also see:

[Image: a woman using augmented reality to learn about something]

And it is not enough to try to use existing VR/XR applications and tailor them to educational scenarios. These tools can and should be created with pedagogy, student experience, and learning outcomes as the priority.


GPT-3: We’re at the very beginning of a new app ecosystem — from venturebeat.com by Dattaraj Rao

From DSC: NLP=Natural Language Processing (i.e., think voice-driven interfaces/interactivity).

Excerpt:

Despite the hype, questions persist as to whether GPT-3 will be the bedrock upon which an NLP application ecosystem will rest or if newer, stronger NLP models will knock it off its throne. As enterprises begin to imagine and engineer NLP applications, here’s what they should know about GPT-3 and its potential ecosystem.
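
From DSC:
To make “building on GPT-3” a bit more concrete, below is a minimal sketch of what an application-layer call looked like with OpenAI’s Python library at the time. The prompt, engine name, and wrapper function are my own illustrative choices, not anything from the article.

```python
# A minimal sketch (not production code) of an NLP feature built on GPT-3.
# Assumes the `openai` Python package (pre-1.0 interface) and an API key;
# the engine name, prompt, and helper function are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(text: str) -> str:
    """Ask GPT-3 for a one-sentence summary of the given text."""
    response = openai.Completion.create(
        engine="davinci",  # one of the original GPT-3 engines
        prompt=f"Summarize in one sentence:\n\n{text}\n\nSummary:",
        max_tokens=60,
        temperature=0.3,
    )
    return response.choices[0].text.strip()

print(summarize("GPT-3 is a 175-billion-parameter language model from OpenAI."))
```

An “NLP app” in this ecosystem is largely prompt design plus a thin wrapper like the one above, which is what makes the ecosystem question in the article so interesting.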


The Future of Wearable Computing May Be Augmented Reality – Newest Developments in AR Glasses — from wearable-technologies.com by Cathy Russey

Excerpt:

From healthcare to factory floors, augmented reality glasses are helping people in various professions do their jobs more efficiently. Doctors use them to conduct precise surgery, and factory workers use them for productivity, efficiency, and safety. AR glasses have been identified as a vital technology supporting shop-floor operators in the smart factories of the future.


It’s Time to Heal: 16 Trends Driving the Future of Bio and Healthcare — from a16z.com by Vineeta Agarwala, Jorge Conde, Vijay Pande, and Julie Yoo
It’s Time to Heal is a special package about engineering the future of bio and healthcare.

Also see:

5 Predictions for Digital Healthcare in 2021 — from wearable-technologies.com by Cathy Russey

Excerpts:

  1. Remote patient care and telemedicine
  2. Virtual Reality
  3. Wearables
  4. Artificial Intelligence
  5. Advancements in Electronic Health Records (EHR)

From DSC:
I was thinking about projecting images, animation, videos, etc. from a device onto a wall for all in the room to see.

  • Will more walls of the future be like one of those billboards that rotate between two or three different images, i.e., walls that could change their surfaces?

One side of the surface would be more traditional (i.e., a sheetrock type of surface). The other side would be designed to be excellent for projecting images onto it and/or for use with Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR).

Along these lines, here’s another item related to Human-Computer Interaction (HCI):

Mercedes-Benz debuts dashboard that’s one giant touchscreen — from futurism.com


Learning experience designs of the future!!! [Christian]

From DSC:
The article below got me to thinking about designing learning experiences and what our learning experiences might be like in the future — especially after we start pouring much more of our innovative thinking, creativity, funding, entrepreneurship, and new R&D into technology-supported/enabled learning experiences.


LMS vs. LXP: How and why they are different — from blog.commlabindia.com by Payal Dixit
LXPs are a rising trend in the L&D market. But will they replace LMSs soon? What do they offer beyond an LMS? Learn more about LMS vs. LXP in this blog.

Excerpt (emphasis DSC):

Building on the foundation of the LMS, the LXP curates and aggregates content, creates learning paths, and provides personalized learning resources.

Here are some of the key capabilities of LXPs. They:

  • Offer content in a Netflix-like interface, with suggestions and AI recommendations
  • Can host any form of content – blogs, videos, eLearning courses, and audio podcasts to name a few
  • Offer automated learning paths that lead to logical outcomes
  • Support true uncensored social learning opportunities

So, this is about the LXP and what it offers; let’s now delve into the characteristics that differentiate it from the good old LMS.
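
From DSC:
Purely as a thought experiment, here is a toy sketch of two of the capabilities bulleted above: a catalog that can hold any form of content, plus a Netflix-style “suggested for you” ranking. Every class, field, and function name here is hypothetical.

```python
# A toy sketch (illustrative only) of two LXP capabilities bulleted above:
# hosting any form of content, and surfacing Netflix-style suggestions.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    kind: str                      # "blog" | "video" | "course" | "podcast"
    tags: set = field(default_factory=set)

@dataclass
class Learner:
    name: str
    interests: set
    completed: list = field(default_factory=list)

def suggest(catalog: list, learner: Learner, n: int = 3) -> list:
    """Rank content by overlap with the learner's interests -- a crude
    stand-in for the AI recommendations an LXP would actually use."""
    ranked = sorted(catalog,
                    key=lambda c: len(c.tags & learner.interests),
                    reverse=True)
    return [c for c in ranked if c not in learner.completed][:n]

catalog = [
    ContentItem("Intro to AR", "video", {"ar", "xr"}),
    ContentItem("Telehealth 101", "course", {"health"}),
    ContentItem("XR in the OR", "podcast", {"ar", "health"}),
]
for item in suggest(catalog, Learner("Daniel", interests={"ar", "health"})):
    print(item.title)
```

A real LXP would swap the tag-overlap heuristic for a trained recommender and layer automated learning paths on top, but the shape of the data is roughly this.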


From DSC:
Entities throughout the learning spectrum are going through many changes right now (i.e., people and organizations throughout K-12, higher education, vocational schools, and corporate training/L&D). If the first round of the Coronavirus continues to impact us, and then a second round comes later this year/early next year, I can easily see massive investments and interest in learning-related innovations. It will be in too many people’s and organizations’ interests not to.

I highlighted the bulleted points above because they are some of the components/features of the Learning from the Living [Class] Room vision that I’ve been working on.

Below are some technologies, visuals, and ideas to supplement my reflections. They might stir the imagination of someone out there who, like me, desires to make a contribution — and who wants to make learning more accessible, personalized, fun, and engaging. Hopefully, future generations will be able to have more choice, more control over their learning — throughout their lifetimes — as they pursue their passions.

[Image: the “Learning from the Living [Class] Room” vision]

[Image: in the future, we may be using MR to walk around our data and visualize it better]


[Image: AR and VR, the future of healthcare]

XRHealth launches first virtual reality telehealth clinic — from wearable-technologies.com by Sam Draper

Excerpt:

XRHealth (formerly VRHealth), a leading provider of extended reality and therapeutic applications, announced the first virtual reality (VR) telehealth clinic that will provide VR therapy to patients. VR telehealth clinicians providing care are currently certified in Massachusetts, Connecticut, Florida, Michigan, Washington D.C., Delaware, California, New York, and North Carolina and will be expanding their presence in additional states in the coming months. The XRHealth telehealth services are covered by Medicare and most major insurance providers.


FTI 2020 Trend Report for Entertainment, Media, & Technology — from futuretodayinstitute.com

Our 3rd annual industry report on emerging entertainment, media and technology trends is now available.

  • 157 trends
  • 28 optimistic, pragmatic and catastrophic scenarios
  • 10 non-technical primers and glossaries
  • Overview of what events to anticipate in 2020
  • Actionable insights to use within your organization

KEY TAKEAWAYS

  • Synthetic media offers new opportunities and challenges.
  • Authenticating content is becoming more difficult.
  • Regulation is coming.
  • We’ve entered the post-fixed screen era.
  • Voice Search Optimization (VSO) is the new Search Engine Optimization (SEO).
  • Digital subscription models aren’t working.
  • Advancements in AI will mean greater efficiencies.


From DSC:
First of all, an article:

The four definitive use cases for AR and VR in retail — from forbes.com by Nikki Baird

[Image: AR in retail]

Excerpt (emphasis DSC):

AR is the go-to engagement method of choice when it comes to product and category exploration. A label on a product on a shelf can only do so much to convey product and brand information, vs. AR, which can easily tap into a wealth of digital information online and bring it to life as an overlay on a product or on the label itself.
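
From DSC:
In rough code, the flow that the excerpt describes might look something like the sketch below. The lookup endpoint and overlay fields are hypothetical stand-ins, not any vendor’s actual API.

```python
# A rough sketch of the AR retail flow described above: scan a label,
# pull richer product data from the web, and hand the AR layer a payload
# to render as an overlay. The URL and field names are hypothetical.
import requests

def fetch_product_info(barcode: str) -> dict:
    # Hypothetical product-information endpoint.
    resp = requests.get(f"https://example.com/api/products/{barcode}")
    resp.raise_for_status()
    return resp.json()

def build_overlay(barcode: str) -> dict:
    info = fetch_product_info(barcode)
    # The AR client would anchor this payload onto the detected label.
    return {
        "headline": info["name"],
        "lines": [info["brand"], info["origin"], info["rating"]],
        "anchor": "label",
    }
```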


From DSC:
Applying this concept to the academic world…what might this mean for a student in a chemistry class who has a mobile device and/or a pair of smart goggles on and is working with an Erlenmeyer flask? A burette? A Bunsen burner?

Along these lines...what if all of those confused students — like I was, struggling through chem lab — could see how an experiment was supposed to be done!?

That is, if there’s only 30 minutes of lab time left, the professor or TA could “flip a switch” to turn on the AR cloud within the laboratory space to allow those struggling students to see how to do their experiment.
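
Purely to make that “switch” concrete, here’s a toy sketch, assuming some sort of AR cloud service with connected student headsets. Every name below is hypothetical.

```python
# A toy sketch of the "flip a switch" idea above: when the instructor
# enables guided mode, every connected headset receives the worked
# steps for the experiment. All names here are hypothetical.
class Headset:
    def __init__(self, student: str):
        self.student = student

    def show_overlay(self, steps: list):
        for i, step in enumerate(steps, 1):
            print(f"[{self.student}] Step {i}: {step}")

class LabARCloud:
    def __init__(self):
        self.headsets = []          # connected student devices
        self.assist_mode = False

    def connect(self, headset: Headset):
        self.headsets.append(headset)

    def flip_switch(self, experiment_steps: list):
        """The professor or TA turns on guided mode for everyone."""
        self.assist_mode = True
        for headset in self.headsets:
            headset.show_overlay(experiment_steps)

cloud = LabARCloud()
cloud.connect(Headset("struggling student"))
cloud.flip_switch(["Clamp the burette", "Fill it to the 0 mL mark",
                   "Titrate until the indicator turns pink"])
```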

I can’t tell you how many times I was just trying to get through the lab — not knowing what I was doing, and getting zero help from any professor or TA. I hardly learned a thing that stuck with me…except the names of a few devices and the abbreviations of a few chemicals. For the most part, it was a waste of money. How many students experience this as well and feel like I did?

Will the terms “blended learning” and/or “hybrid learning” take on whole new dimensions with the onset of AR, MR, and VR-related learning experiences?

#IntelligentTutoring #IntelligentSystems #LearningExperiences
#AR #VR #MR #XR #ARCloud #AssistiveTechnologies
#Chemistry #BlendedLearning #HybridLearning #DigitalLearning


Also see:


“It is conceivable that we’re going to be moving into a world without screens, a world where [glasses are] your screen. You don’t need any more form factor than [that].”

(AT&T CEO)


Philips, Microsoft Unveil Augmented Reality Concept for Operating Room of the Future — from hitconsultant.net by Fred Pennic

Excerpt:

Health technology company Philips unveiled a unique mixed reality concept developed together with Microsoft Corp. for the operating room of the future. Based on the state-of-the-art technologies of Philips’ Azurion image-guided therapy platform and Microsoft’s HoloLens 2 holographic computing platform, the companies will showcase novel augmented reality applications for image-guided minimally invasive therapies.


Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
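
From DSC:
Here is a toy illustration (not the study’s method) of Crawford’s point: if one neighborhood’s offenses are recorded four times as often because of over-policing, a model trained on those records simply predicts more “crime” there, even when the underlying rates are identical.

```python
# Toy illustration of "dirty data": identical true offense rates, but
# offenses in neighborhood 1 are recorded 4x as often. A model trained
# on the records reproduces the gap the data baked in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
neighborhood = rng.integers(0, 2, n)            # 0 or 1
true_offense = rng.random(n) < 0.05             # same true rate everywhere
record_prob = np.where(neighborhood == 1, 0.8, 0.2)
recorded = true_offense & (rng.random(n) < record_prob)

model = LogisticRegression().fit(neighborhood.reshape(-1, 1), recorded)
risk = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted risk -- neighborhood 0: {risk[0]:.3f}, "
      f"neighborhood 1: {risk[1]:.3f}")         # roughly a 4x gap
```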


How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss face a lot of challenges every day in performing many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
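
From DSC:
Below is a minimal sketch of what such a narration loop might look like. AiServe’s actual system is not public, so the detector, the obstacle classes, and the text-to-speech choice are all assumptions made for illustration.

```python
# A minimal sketch of a wearable navigation-aid loop: detect objects in
# the camera frame, keep the ones that matter to a pedestrian, and speak
# them aloud. The detector below is a placeholder, not AiServe's model.
import pyttsx3

OBSTACLES = {"light post", "curb", "bench", "parked car"}

def detect_objects(frame):
    """Placeholder for a real computer-vision model; would return
    (label, bearing) pairs for objects seen in the camera frame."""
    return [("curb", "ahead"), ("bench", "to your left")]

def narrate_obstacles(frame, engine) -> None:
    for label, bearing in detect_objects(frame):
        if label in OBSTACLES:
            engine.say(f"{label} {bearing}")
    engine.runAndWait()

engine = pyttsx3.init()                # local text-to-speech engine
narrate_obstacles(frame=None, engine=engine)
```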


From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens with the combination of some of these emerging technologies in the future…again, for good or for ill.

The question is:
How can we weigh in?


Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.


Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.


© 2021 | Daniel Christian