VR system to be used to prepare crime victims for court — from inavateonthenet.net

Excerpt:

An innovative VR system is being used to help victims of crime prepare for giving evidence in court, allowing victims to engage with key members of the judicial process virtually.

The system, designed by Immersonal, is to be rolled out across 52 Scottish courts over the next year, with the technology also being piloted at the International Criminal Court in The Hague. The aim is to allay the fears and discomfort of victims and witnesses who may be unfamiliar with the court process.

VR system to be used to prepare crime victims for court


Here’s another interesting item for you…one that may also eventually be XR-related:

 

Brainyacts #57: Education Tech — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

    1. Smarter capabilities
    2. Reasoning engines
    3. Language is the interface
    4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s MetaHuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine -- Dozens of ready-made MetaHumans are at your fingertips.

 

From DSC:
After seeing this…

…I wondered:

  • Could GPT-4 create the “Choir Practice” app mentioned below?
    (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
  • Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts? (A rough sketch of the score-splitting step appears just below.)
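As a quick aside, the score-splitting piece of that second idea may not even require GPT-4. Here is a minimal sketch, assuming the score is already available as a MusicXML file and using the open-source music21 library; the filename in the sketch is hypothetical.

```python
# A rough sketch (not a finished app): split a MusicXML score into
# per-part MIDI files that singers could download and rehearse with.
# Assumes the music21 library is installed ("pip install music21")
# and that "anthem.musicxml" is a hypothetical score file.
from music21 import converter

score = converter.parse("anthem.musicxml")
for i, part in enumerate(score.parts):
    name = (part.partName or f"part{i}").strip().replace(" ", "_").lower()
    part.write("midi", fp=f"{name}.mid")   # e.g., soprano.mid, alto.mid
    print(f"Wrote {name}.mid")
```

Producing realistic audio for each part, or handling scanned sheet music rather than a MusicXML file, is the harder problem, and that is where newer AI tools could come in.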

This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”

Choir Practice -- an app for people who want to rehearse at home

And I want to say that when I went back to look at that posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated or hyped up an idea on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days. 🙂

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at NVIDIA's GTC -- keynote was held on March 21, 2023

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 

Mixed media online project serves as inspiration for student journalists — from jeadigitalmedia.org by Michelle Balmeo

Excerpt:

If you’re on the hunt for inspiration, go check out Facing Life: Eight stories of life after life in California’s prisons.

This project, created by Pendarvis Harshaw and Brandon Tauszik, has so many wonderful and original storytelling components, it’s the perfect model for student journalists looking for ways to tell important stories online.

Facing Life -- eight stories of life after life in California's prisons

 

Best Document Cameras for Teachers — from techlearning.com by Luke Edwards
Get the best document camera for teachers to make the classroom more digitally immersive

Along the lines of edtech, also see:

Tech & Learning Names Winners of the Best of 2022 Awards — from techlearning.com by TL Editors
This annual award recognizes the very best in EdTech from 2022

The Tech & Learning Awards of Excellence: Best of 2022 celebrate educational technology from the last 12 months that has excelled in supporting teachers, students, and education professionals in the classroom, for professional development, or general management of education resources and learning. Nominated products are divided into three categories: Primary, Secondary, or Higher Education.

 

94% of Consumers are Satisfied with Virtual Primary Care — from hitconsultant.net

Excerpt from What You Should Know (emphasis DSC):

  • For people who have used virtual primary care, the vast majority (94%) are satisfied with their experience, and nearly four in five (79%) say it has allowed them to take charge of their health. The study included findings around familiarity and experience with virtual primary care, virtual primary care and chronic conditions, current health and practices, and more.
  • As digital health technology continues to advance and the healthcare industry evolves, many Americans want the ability to utilize more digital methods when it comes to managing their health, according to a study recently released by Elevance Health — formerly Anthem, Inc. Elevance Health commissioned an online study of over 5,000 US adults ages 18+ around virtual primary care.
 

The talent needed to adopt mobile AR in industry — from chieflearningofficer.com by Yao Huang Ph.D.

Excerpt:

Therefore, when adopting mobile AR to improve job performance, L&D professionals need to shift their mindset from offering training with AR alone to offering performance support with AR in the middle of the workflow.

The learning director from a supply chain industry pointed out that “70 percent of the information needed to build performance support systems already exists. The problem is it is all over the place and is available on different systems.”

It is the learning and development professional’s job to design a solution with the capability of the technology and present it in a way that most benefits the end users.

All participants revealed that mobile AR adoption in L&D is still new, but growing rapidly. L&D professionals face many opportunities and challenges. Understanding the benefits, challenges and opportunities of mobile AR used in the workplace is imperative.

A brief insert from DSC:
Augmented Reality (AR) is about to hit the mainstream in the next 1-3 years. It will connect the physical world with the digital world in powerful, helpful ways (and likely in negative ways as well). I think it will be far bigger and more commonly used than Virtual Reality (VR). (By the way, I’m also including Mixed Reality (MR) within the greater AR domain.) With Artificial Intelligence (AI) making strides in object recognition, AR could be huge.

Learning & Development groups should ask for funding soon — or develop proposals for future funding as the new hardware and software products mature — in order to upskill at least some members of their teams in the near future.

As with Teaching & Learning Centers in higher education, L&D groups need to practice what they preach — and be sure to train their own people as well.

 

From DSC:
I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them), even in a much longer sermon. Not me, man.

It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (e.g., our smartphones, tablets, laptops, and other devices) via cloud-based applications.

You could see your presentation, sermon, main points for a meeting, the charges being brought against a defendant, etc., and the system would scroll down as you spoke the words (via speech recognition and Natural Language Processing (NLP)). If you went off script, the system would stop scrolling, and you might need to scroll down manually or simply pick up where you left off.
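For what it’s worth, here is a minimal sketch of that script-following idea, assuming a streaming speech-to-text engine is already feeding recognized words in one at a time; the recognizer itself is not shown, and the class, script text, and word list below are all hypothetical.

```python
# A rough sketch: keep a cursor in the script and advance it only when
# a recognized word appears a little way ahead of the cursor, so the
# display stops scrolling when the speaker goes off script.

def normalize(word: str) -> str:
    return "".join(ch for ch in word.lower() if ch.isalnum())

class ScriptFollower:
    def __init__(self, script_text: str, lookahead: int = 8):
        self.words = [normalize(w) for w in script_text.split()]
        self.pos = 0                 # index of the next expected word
        self.lookahead = lookahead   # how far ahead to search for a match

    def follow(self, spoken_word: str) -> int:
        """Feed one recognized word; return the current scroll position."""
        w = normalize(spoken_word)
        window = self.words[self.pos:self.pos + self.lookahead]
        if w in window:
            self.pos += window.index(w) + 1   # advance past the matched word
        return self.pos                       # unchanged if off script

# Hypothetical usage: words arrive one at a time from a speech recognizer
follower = ScriptFollower("Grace and peace to you from God our Father")
for word in ["grace", "and", "peace", "um", "to", "you"]:
    print(word, "->", follower.follow(word))
```

A real implementation would also need to handle homophones, numbers, and punctuation read aloud, but the basic matching loop can be this simple.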

For that matter, I suppose a faculty member could turn an AI-based feed on and off that shows where the current topic is covered in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.

And/or perhaps this is a feature in our future videoconferencing applications.

But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.

Along these lines, see:


Is a teleprompter a feature in our future Augmented Reality (AR) glasses?


 

Making a Digital Window Wall from TVs — from theawesomer.com

Drew Builds Stuff has an office in the basement of his parents’ house. Because of its subterranean location, it doesn’t get much light. To brighten things up, he built a window wall out of three 75″ 4K TVs, resulting in a 12-foot diagonal image. Since he can load up any video footage, he can pretend to be anywhere on Earth.

From DSC:
Perhaps some ideas here for learning spaces!

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides in a presentation. (One way to do the transcription-and-search piece is sketched just below.)
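To make the transcript and search items above concrete, here is a minimal sketch using OpenAI’s open-source Whisper model to transcribe a local video and search the result for a spoken word; the filename and search term are placeholders, not part of the tools mentioned above.

```python
# A rough sketch: transcribe a video locally with Whisper, then print the
# timestamps of segments in which a given word was spoken.
# Assumes "pip install openai-whisper", ffmpeg on the PATH, and a local
# file named "lecture.mp4" (hypothetical).
import whisper

model = whisper.load_model("base")       # small, CPU-friendly model
result = model.transcribe("lecture.mp4")

query = "metaverse"                      # hypothetical search term
for seg in result["segments"]:           # each segment has start, end, text
    if query in seg["text"].lower():
        print(f"{seg['start']:7.1f}s  {seg['text'].strip()}")
```

Searching for words that appear on slides would take an extra step, such as running OCR on sampled video frames.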

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • The same goes for what’s said in a virtual courtroom or in a telehealth-based appointment. Or perhaps what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm….interesting times ahead.

 

Top Tools for Learning 2022 [Jane Hart]

Top Tools for Learning 2022

 

Top tools for learning 2022 — from toptools4learning.com by Jane Hart

Excerpt:

In fact, it has become clear that whilst 2021 was the year of experimentation – with an explosion of tools being used as people tried out new things – 2022 has been the year of consolidation, with people reverting to their trusty old favourites. In fact, many of the tools that were knocked off their perches in 2021 have now recovered their lost ground this year.


Also somewhat relevant/see:


 

The Metaverse in 2040 — from pewresearch.org by Janna Anderson and Lee Rainie
Hype? Hope? Hell? Maybe all three. Experts are split about the likely evolution of a truly immersive ‘metaverse.’ They expect that augmented- and mixed-reality enhancements will become more useful in people’s daily lives. Many worry that current online problems may be magnified if Web3 development is led by those who built today’s dominant web platforms.

 

The metaverse will, at its core, be a collection of new and extended technologies. It is easy to imagine that both the best and the worst aspects of our online lives will be extended by being able to tap into a more-complete immersive experience, by being inside a digital space instead of looking at one from the outside.

Laurence Lannom, vice president at the Corporation for National Research Initiatives

“Virtual, augmented and mixed reality are the gateway to phenomenal applications in medicine, education, manufacturing, retail, workforce training and more, and it is the gateway to deeply social and immersive interactions – the metaverse.”

Elizabeth Hyman, CEO for the XR Association

 


 

The table of contents for the Metaverse in 2040 set of articles at pewresearch.org -- June 30, 2022

 


 

Meet the metaverse: Creating real value in a virtual world — from mckinsey.com with Eric Hazan and Lareina Yee

Excerpt (emphasis DSC):

Welcome to the metaverse. Now, where exactly are we? Imagine for a moment the next iteration of the internet, seamlessly combining our physical and digital lives. It’s many things: a gaming platform, a virtual retail spot, a training tool, an advertising channel, a digital classroom, a gateway to entirely new virtual experiences. While the metaverse continues to be defined, its potential to unleash the next wave of digital disruption is clear. In the first five months of 2022, more than $120 billion has been invested in building out metaverse technology and infrastructure. That’s more than double the $57 billion invested in all of 2021.

How would you define the metaverse?
Lareina: What’s exciting is that the metaverse, like the internet, is the next platform on which we can work, live, connect, and collaborate. It’s going to be an immersive virtual environment that connects different worlds and communities. There are going to be creators and alternative currencies that you can buy and sell things with. It will have a lot of the components of Web3 and gaming and AR, but it will be much larger.

Also relevant/see:



 

Conduct Your Own Virtual Orchestra In Maestro VR — from vrscout.com by Kyle Melnick

Niantic moves beyond games with Lightship AR platform and a social network — from theverge.com by Alex Heath
The maker of Pokémon Go is releasing its AR map for other apps and a location-based social network called Campfire

Excerpt:

Niantic made a name for itself in the mobile gaming industry through the enduring success of Pokémon Go. Now the company is hoping to become something else: a platform for other developers to build location-aware AR apps on top of.

disguise launches Metaverse Solutions division enabling next-level extended reality experiences — from etnow.com

Excerpt:

UK – disguise, the visual storytelling platform and market leader for extended reality (xR) solutions, has launched its Metaverse Solutions division to enable the next generation of extraordinary live, virtual production and audiovisual location-based experiences for the metaverse.

The recent rise of real-time 3D graphics rendering capabilities in gaming platforms means that today’s audiences are craving richer, more immersive experiences that are delivered via the metaverse. While the metaverse is already defined as an $8 trillion opportunity by Goldman Sachs, companies are still finding it challenging to navigate the technical elements needed to start building metaverse experiences.

On this item, also see:

disguise.one

disguise launches Metaverse Solutions division — from televisual.com

Excerpt:

“Our xR technology combines key metaverse building blocks including real-time 3D graphics, spatial technologies and advanced display interfaces – all to deliver a one-of-a-kind gateway to the metaverse,” says disguise CXO and head of Metaverse Solutions Alex Wills.

 
© 2022 | Daniel Christian