DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words shown on presentation slides.

Allie Miller’s posting on LinkedIn (see below) pointed out these capabilities, along with several others.
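
To illustrate the first bullet: generating an image from a text prompt is now a short script. Here is a minimal sketch using the OpenAI Python client's images endpoint (openai>=1.0); the model, prompt, and size are illustrative choices, and an OPENAI_API_KEY environment variable is assumed.

```python
# Minimal sketch: text-to-image with the OpenAI Python client.
# Assumes an OPENAI_API_KEY environment variable; the model, prompt,
# and size are illustrative choices.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt=(
        "A photo of a quaint flower shop storefront with a pastel green "
        "and clean white facade and open door and big window"
    ),
    n=1,
    size="512x512",
)

print(response.data[0].url)  # URL of the generated image
```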



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed (see the sketch after this list).
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?
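
To illustrate that capture-and-transcribe point: the workflow is already scriptable today. Below is a minimal sketch using the open-source Whisper speech-recognition model (pip install openai-whisper); the audio file name and keyword are illustrative assumptions, not tied to any specific product.

```python
# Minimal sketch: transcribe recorded audio, then search the transcript
# for a keyword. Uses the open-source Whisper model; "lecture.mp3" and
# the keyword are illustrative assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("lecture.mp3")

# Each segment carries start/end timestamps, so spoken words become
# searchable the same way text on slides is.
keyword = "photosynthesis"
for segment in result["segments"]:
    if keyword.lower() in segment["text"].lower():
        start, end = segment["start"], segment["end"]
        print(f"{start:.1f}s-{end:.1f}s: {segment['text'].strip()}")
```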

Hmmm….interesting times ahead.

 

OpenAI Says DALL-E Is Generating Over 2 Million Images a Day—and That’s Just Table Stakes — from singularityhub.com by Jason Dorrier

Excerpt:

The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.

Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.

And that was before OpenAI opened DALL-E 2 to everyone.

 


From DSC:
Further down in that Tweet is this example image — wow!

Prompt: “A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window”


On the video side of things, also relevant/see:

Meta’s new text-to-video AI generator is like DALL-E for video — from theverge.com by James Vincent
Just type a description and the AI generates matching footage

A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta


From DSC:
Hmmm…I wonder…how might these emerging technologies impact copyright, intellectual property, and/or other legal areas?


 

Inclusive Education For Students With Hearing Impairment — from edtechreview.in by Priyanka Gupta

Excerpt:

The following may be difficult for a student with a hearing impairment:

  • Spelling, grammar, and vocabulary
  • Taking notes while listening to lectures
  • Participating in, engaging with, or understanding classroom discussions
  • Understanding educational videos
  • Presenting oral reports
 

Instructional Audio: 4 Benefits to Improving It — from techlearning.com by Erik Ofgang
Ensuring every classroom has instructional audio capabilities helps all students hear what the teacher is saying.

Excerpt (emphasis DSC):

Sound is a key component of education. If students can’t hear their instructor well, they’re clearly not going to focus or learn as much. That’s why more and more schools are investing in instructional audio systems, which are high-tech public address systems designed with classrooms, teachers, and all students in mind.

Terri Meier is director of education technology for Rio Rancho Public Schools in New Mexico where all new classrooms are being built with voice amplification systems in place and many existing classrooms are being retrofitted with similar systems. These systems are key for schools in their accessibility efforts and in providing quality instruction overall, she says.

And speaking of accessibility-related postings/items, also see:

 

Apple just quietly gave us the golden key to unlock the Metaverse — from medium.com by Klas Holmlund; with thanks to Ori Inbar out on Twitter for this resource

Excerpt:

But the ‘Oh wow’ moment came when I pointed the app at a window. Or a door. Because with a short pause, a correctly placed 3D model of the window snapped in place. Same with a door. But the door could be opened or closed. RoomPlan did not care. It understands a door. It understands a chair. It understands a cabinet. And when it sees any of these things, it places a model of them, with the same dimensions, in the model.

Oh, the places you will go!
OK, so what will this mean for Metaverse building? Why is this a big deal? Well, to someone who is not a 3D modeler, it is hard to overstate the amount of work that has to go into generating usable geometry. The key word here being usable. To be able to move around and exist in a VR space, it has to be optimized. You’re not going to have a fun party if your dinner guests fall through a hole in reality. This technology will let you create a full digital twin of any space you are in, in the time it takes you to look around.

In a future Apple VR or AR headset, this technology will obviously be built in. You will build a VR-capable digital twin of any space you are in just by wearing the headset. All of this is optimized.

Also with thanks to Ori Inbar:


Somewhat relevant/see:

“The COVID-19 pandemic spurred us to think creatively about how we can train the next generation of electrical construction workers in a scalable and cost-effective way,” said Beau Pollock, president and CEO of TRIO Electric. “Finding electrical instructors is difficult and time-consuming, and training requires us to use the same materials that technicians use on the job. The virtual simulations not only offer learners real-world experience and hands-on practice before they go into the field, they also help us to conserve resources in the process.”


 

Top 5 Developments in Web 3.0 We Will See in the Next Five Years — from intelligenthq.com

Excerpt:

Today, websites have become highly engaging, and the internet is full of exciting experiences. Yet web 3.0 is coming with noteworthy trends and things to look out for.

Here are the top 5 developments in web 3.0 expected in the coming five years.

 

European telco giants collaborate on 5G-powered holographic videocalls — from inavateonthenet.net

Excerpt:

Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.

Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.

Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.

Top XR Vendors Majoring in Education for 2022 — from xrtoday.com

Excerpt:

Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.

In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.

With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.

 

Course Awareness in HyFlex: Managing unequal participation numbers — from hyflexlearning.org by Candice Freeman

Excerpt:

How do you teach a HyFlex course when the number of students in the various participation modes is very unequal? How do you teach one student in a given mode – often in the classroom? Conversely, how do you teach 50 asynchronous students with very few in the synchronous mode(s)? Answers will vary greatly from teacher to teacher. This article suggests a strategy called Course Awareness, a mindfulness technique designed to help teachers envision each learner as being in the instructor’s presence and engaged in the instruction, regardless of participation (or attendance) mode choice.

Teaching HyFlex in an active learning classroom

From DSC:
I had understood the HyFlex teaching model as addressing both online-based (i.e., virtual/not on-site) and on-site/physically-present students at the same time — and that each student could choose the manner in which they wanted to attend that day’s class. For example, on one day, a student could take the course in room 123 of Anderson Hall. The next time the class meets, that same student could attend from their dorm room.

But this article introduces — at least to me — the idea that we have a third method of participating in the HyFlex model — asynchronously (i.e., not at the same time). So rather than making their way to Anderson Hall or attending from their dorm, that same student does not attend at the same time as other students (either virtually or physically). That student will likely check in with a variety of tools to catch up with — and contribute to — the discussions. As the article mentions:

Strategically, you need to employ web-based resources designed to gather real-time information over a specified period of time, capturing all students and not just students engaging live. For example, Mentimeter, PollEverywhere, and Sli.do allow the instructor to pose engaging, interactive questions without limiting engagement time to the instance the question is initially posed. These tools are designed to support both synchronous and asynchronous participation. 
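
To make those sync-plus-async mechanics concrete, here is a minimal sketch of the underlying idea: a question stays open for a set window and accepts responses whether or not the respondent was present when it was first posed. The Poll class and all names here are hypothetical, not any vendor's API.

```python
# Minimal, hypothetical sketch of a poll that accepts both synchronous
# and asynchronous responses during a fixed window. Real tools such as
# Mentimeter, PollEverywhere, and Sli.do implement this idea behind
# their own interfaces.
from datetime import datetime, timedelta


class Poll:
    def __init__(self, question: str, open_for: timedelta):
        self.question = question
        self.closes_at = datetime.now() + open_for
        self.responses: dict[str, str] = {}

    def respond(self, student: str, answer: str) -> bool:
        # Accept a response while the window is open, whether the student
        # is in the room, attending live online, or catching up later.
        if datetime.now() <= self.closes_at:
            self.responses[student] = answer
            return True
        return False


# A question posed in class stays open for the asynchronous cohort.
poll = Poll("Which concept from today needs more explanation?",
            open_for=timedelta(days=3))
poll.respond("in-person student", "Course Awareness")
poll.respond("asynchronous student", "participation modes")
print(f"{len(poll.responses)} responses so far")
```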

So it will be interesting to see how our learning ecosystems morph in this area. Will there be other new affordances, pedagogies, and tools that take into consideration that the faculty members are addressing synchronous and asynchronous students as well as online and physically present students? Hmmm…more experimentation is needed here, as well as more:

  • Research
  • Instructional design
  • Design thinking
  • Feedback from students and faculty members

Will this type of model work best in the higher education learning ecosystem but not the K-12 learning ecosystem? Will it thrive with employees within the corporate realm? Hmmm…again, time will tell.


And to add another layer to the teaching and learning onion, now let’s talk about multimodal learning. This article, How to support multimodal learning, by Monica Burns, mentions that:

Multimodal learning is a teaching concept where using different senses simultaneously helps students interact with content at a deeper level. In the same way we learn through multiple types of media in our own journey as lifelong learners, students can benefit from this type of learning experience.

The only comment I have here is that if you think that throwing a warm body into a K-12 classroom fixes the problem of teachers leaving the field, you haven’t a clue how complex this teaching and learning onion is. Good luck to all of those people who are being thrown into the deep end — and essentially being told to sink or swim.

 

To Improve Outcomes for Students, We Must Improve Support for Faculty — from campustechnology.com by Dr. David Wiley
The doctoral programs that prepare faculty for their positions often fail to train them on effective teaching practices. We owe it to our students to provide faculty with the professional development they need to help learners realize their full potential.

Excerpts:

Why do we allow so much student potential to go unrealized? Why are well-researched, highly effective teaching practices not used more widely?

The doctoral programs that are supposed to prepare them to become faculty in physics, philosophy, and other disciplines don’t require them to take a single course in effective teaching practices. 

The entire faculty preparation enterprise seems to be caught in a loop, unintentionally but consistently passing on an unawareness that some teaching practices are significantly more effective than others. How do we break this cycle and help students realize their full potential as learners?

From DSC:
First of all, I greatly appreciate the work of Dr. David Wiley. His career has been dedicated to teaching and learning, open educational resources, and more. I also appreciate and agree with what David is saying here — i.e., that professors need to be taught how to teach as well as what we know about how people learn at this point in time. 

For years now, I’ve been (unpleasantly) amazed that we hire and pay our professors primarily for their research capabilities — vs. their teaching competence. At the same time, we continually increase the cost of tuition, books, and other fees. Students have the right to let their feet do the walking. As the alternatives to traditional institutions of higher education increase, I’m quite sure that we’ll see that happen more and more.

While I think that training faculty members about effective teaching practices is highly beneficial, I also think that TEAM-BASED content creation and delivery will deliver the best learning experiences that we can provide. I say this because multiple disciplines and specialists are involved, such as:

  • Subject Matter Experts (i.e., faculty members)
  • Instructional Designers
  • Graphic Designers
  • Web Designers
  • Learning Scientists; Cognitive Learning Researchers
  • Audio/Video Specialists and Learning Space Designers/Architects
  • CMS/LMS Administrators
  • Programmers
  • Multimedia Artists who are skilled in working with digital audio and digital video
  • Accessibility Specialists
  • Librarians
  • Illustrators and Animators
  • and more

The point here is that one person can’t do it all — especially now that the expectation is that courses should be offered in a hybrid format or in an online-based format. For a solid example of the power of team-based content creation/delivery, see this posting.

One last thought/question here though. Once a professor is teaching, are they open to working with and learning from the Instructional Designers, Learning Scientists, and/or others from the Teaching & Learning Centers that do exist on their campus? Or do they, like many faculty members, think that such people are irrelevant because they aren’t faculty members themselves? Oftentimes, faculty members look to each other and don’t really care what support is offered (unless they need help with some of the technology).


Also relevant/see:


 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? HyFlex-based learning?
The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Or what if smart TVs had to do with delivering telehealth-based apps? Or telelegal/virtual courts-based apps?


 

From DSC:
I wanted to pass this along to the learning space designers out there in case it’s helpful to them.

“The sound absorbing Sola felt is made from 50% post-consumer recycled PET.”

Source


Acoustic Static Links lighting by LightArt — from dezeen.com

 

Dive Into AI, Avatars and the Metaverse With NVIDIA at SIGGRAPH — from blogs.nvidia.com

Excerpt:

Innovative technologies in AI, virtual worlds and digital humans are shaping the future of design and content creation across every industry. Experience the latest advances from NVIDIA in all these areas at SIGGRAPH, the world’s largest gathering of computer graphics experts, [which ran from Aug. 8-11].

At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution — from blogs.nvidia.com by Rick Merritt
NVIDIA unveils new products and research to transform industries with AI, the metaverse and digital humans.

NVIDIA AI Makes Performance Capture Possible With Any Camera — from blogs.nvidia.com by Isha Salian
Derivative, Notch, Pixotope and others use NVIDIA Vid2Vid Cameo and 3D body-pose estimation tools to drive performances in real time.

How to Start a Career in AI — from blogs.nvidia.com by Brian Caulfield
Four most important steps to starting a career in AI, seven big questions answered.

As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky — from blogs.nvidia.com by Richard Kerris
Omniverse AI-enabled search tool lets legendary studio sift through massive database of 3D scenes.

Future of Creativity on Display ‘In the NVIDIA Studio’ During SIGGRAPH Special Address — from blogs.nvidia.com by Gerardo Delgado
Major NVIDIA Omniverse updates power 3D virtual worlds, digital twins and avatars, reliably boosted by August NVIDIA Studio Driver; #MadeInMachinima contest winner revealed.

What Is Direct and Indirect Lighting? — from blogs.nvidia.com by JJ Kim
In computer graphics, the right balance between direct and indirect lighting elevates the photorealism of a scene.

NVIDIA Studio Laptops Offer Students AI, Creative Capabilities That Are Best in… Class — from blogs.nvidia.com by Gerardo Delgado
Designed for creativity and speed, Studio laptops are the ultimate creative tool for aspiring 3D artists, video editors, designers and photographers.

Design in the Age of Digital Twins: A Conversation With Graphics Pioneer Donald Greenberg — from blogs.nvidia.com by Rick Merritt
From his Cornell office, home to a career of 54 years and counting, he shares with SIGGRAPH attendees his latest works in progress.

 

Augmented Books Are On The Way According To Researchers — from vrscout.com by Kyle Melnick

Excerpt:

Imagine this. You’re several chapters into a captivating novel when a character from an earlier book makes a surprise appearance. You swipe your finger across their name on the page at which point their entire backstory is displayed on a nearby smartphone, allowing you to refresh your memory before moving forward.

This may sound like science fiction, but researchers at the University of Surrey in England say that the technology described above is already here in the form of “a-books” (augmented reality books).

The potential use-cases for such a technology are virtually endless. As previously mentioned, a-books could be used to deliver character details and plot points for a variety of fictional works. The same technology could also be applied to textbooks, allowing students to display helpful information on their smartphones, tablets, and smart TVs with the swipe of a finger.
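
Under the hood, the interaction described above amounts to a lookup keyed by the text a reader touches, with the result pushed to a paired screen. Here is a minimal, entirely hypothetical sketch of that data flow; the annotation entries and the send_to_device transport are illustrative stand-ins, not a published a-book API.

```python
# Hypothetical sketch of the a-book interaction: a swipe on printed text
# triggers a lookup, and the payload is pushed to a paired companion
# device. All names and data here are illustrative stand-ins.
ANNOTATIONS = {
    "Captain Reyes": "Introduced in Book 1 as the harbor master who...",
    "photosynthesis": "Covered in Chapter 3: how plants convert light...",
}


def send_to_device(device_id: str, payload: dict) -> None:
    # Stand-in for the real transport (e.g., Bluetooth or a local
    # network push) from the instrumented page to a phone or tablet.
    print(f"[{device_id}] {payload['title']}: {payload['body']}")


def on_swipe(text: str, device_id: str = "reader-phone") -> None:
    # Map the touched text to its annotation and push it to the screen.
    body = ANNOTATIONS.get(text)
    if body:
        send_to_device(device_id, {"title": text, "body": body})


on_swipe("Captain Reyes")
```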

From DSC:

  • How might instructional designers use this capability?
  • How about those in theatre/drama?
  • Educational gaming?
  • Digital storytelling?
  • Interaction design?
  • Interface design?
  • User experience design?

Also see:


 
 

The Metaverse Is Not a Place — from oreilly.com by Tim O’Reilly
It’s a communications medium.

Excerpt:

Foundations of the metaverse
You can continue this exercise by thinking about the metaverse as the combination of multiple technology trend vectors progressing at different speeds and coming from different directions, and pushing the overall vector forward (or backward) accordingly. No new technology is the product of a single vector.

So rather than settling on just “the metaverse is a communications medium,” think about the various technology vectors besides real-time communications that are coming together in the current moment. What news from the future might we be looking for?

  • Virtual Reality/Augmented Reality
  • Social media
  • Gaming
  • AI
  • Cryptocurrencies and “Web3”
  • Identity

#metaverse #AI #communications #gaming #socialmedia #cryptocurrencies #Web3 #identity #bots #XR #VR #emergingtechnologies

 
© 2024 | Daniel Christian