Apple’s $3,499 Vision Pro AR headset is finally here — from techcrunch.com by Brian Heater

Image of the Vision Pro AR headset from Apple

Image Credits: Apple

Excerpts:

“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Unlike earlier mixed reality reports, the system is far more focused on augmented reality than virtual reality. The company refers to this new paradigm as “spatial computing.”


Reflections from Scott Belsky re: the Vision Pro — from implications.com


Apple WWDC 2023: Everything announced from the Apple Vision Pro to iOS 17, MacBook Air and more — from techcrunch.com by Christine Hall



Apple unveils new tech — from therundown.ai (The Rundown)

Here were the biggest things announced:

  • A 15” MacBook Air, now the thinnest 15” laptop available
  • The new Mac Pro workstation, presumably a billion dollars
  • M2 Ultra, Apple’s new super chip
  • NameDrop, an AirDrop-integrated data-sharing feature allowing users to share contact info just by bringing their phones together
  • Journal, an ML-powered personalized journalling app
  • StandBy, turning your iPhone into a nightstand alarm clock
  • A new, AI-powered update to autocorrect (finally)
  • Apple Vision Pro


Apple announces AR/VR headset called Vision Pro — from joinsuperhuman.ai by Zain Kahn

Excerpt:

“This is the first Apple product you look through and not at.” – Tim Cook

And with those famous words, Apple announced a new era of consumer tech.

Apple’s new headset will operate on VisionOS – its new operating system – and will work with existing iOS and iPad apps. The new OS is created specifically for spatial computing — the blend of digital content into real space.

Vision Pro is controlled through hand gestures, eye movements and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos and more.


From DSC:
Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I suspect it won’t become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he think people would be willing to wear this headset (for long? at all?)? What about Jony Ive?

I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.


 


From DSC:
I also wanted to highlight the item below, which Barsee also mentioned above, as it will likely hit the world of education and training as well:



Also relevant/see:


 

Brainyacts #57: Education Tech — from thebrainyacts.beehiiv.com by Josh Kubicki

Excerpts:

Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.

  1. Personalized learning
  2. Virtual tutors and coaches
  3. Interactive simulations
  4. Enhanced course materials
  5. Collaborative learning
  6. Automated assessment and feedback
  7. Continuous improvement
  8. Accessibility and inclusivity

AI Will Democratize Learning — from td.org by Julia Stiglitz and Sourabh Bajaj

Excerpts:

In particular, we’re betting on four trends for AI and L&D.

  1. Rapid content production
  2. Personalized content
  3. Detailed, continuous feedback
  4. Learner-driven exploration

In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.

Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.

The Next Evolution in Educational Technologies and Assisted Learning Enablement — from educationoneducation.substack.com by Jeannine Proctor

Excerpt:

Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.

How Generative AI Will Enable Personalized Learning Experiences — from campustechnology.com by Rhea Kelly

Excerpt:

With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different — from the74million.org by John Bailey; with thanks to GSV for this resource

Excerpts:

There are four reasons why this generation of AI tools is likely to succeed where other technologies have failed:

    1. Smarter capabilities
    2. Reasoning engines
    3. Language is the interface
    4. Unprecedented scale

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier — from blogs.nvidia.com by Aaron Lefohn
NVIDIA will present around 20 research papers at SIGGRAPH, the year’s most important computer graphics conference.

Excerpt:

NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

 

Also relevant to the item from Nvidia (above), see:

Unreal Engine’s MetaHuman Creator — with thanks to Mr. Steven Chevalia for this resource

Excerpt:

MetaHuman is a complete framework that gives any creator the power to use highly realistic human characters in any way imaginable.

It includes MetaHuman Creator, a free cloud-based app that enables you to create fully rigged photorealistic digital humans in minutes.

From Unreal Engine -- Dozens of ready-made MetaHumans are at your fingertips.

 

35 Ways Real People Are Using A.I. Right Now — from nytimes.com by Francesca Paris and Larry Buchanan

From DSC:
It was interesting to see how people are using AI these days. The article mentions everything from planning gluten-free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers, and educators in general may find Elicit, Scholarcy, and Scite to be useful tools. I put a question into Elicit and the results look interesting. I like their interface, which allows me to quickly re-sort things.

Snapshot of a query result from a tool called Elicit


 

There Is No A.I. — from newyorker.com by Jaron Lanier
There are ways of controlling the new technology—but first we have to stop mythologizing it.

Excerpts:

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

 


 

Resource per Steve Nouri on LinkedIn


 


Meet Adobe Firefly. — from adobe.com
Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.

Generative AI made for creators.
With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.


Also relevant/see:


Gen-2: The Next Step Forward for Generative AI — from research.runwayml.com
A multi-modal AI system that can generate novel videos with text, images, or video clips.

Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse (the keynote was held on March 21, 2023)

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 
 

“Tech predictions for 2023 and beyond” — from allthingsdistributed.com by Werner Vogels, Chief Technology Officer at Amazon

Excerpts:

  • Prediction 1: Cloud technologies will redefine sports as we know them
  • Prediction 2: Simulated worlds will reinvent the way we experiment
  • Prediction 3: A surge of innovation in smart energy
  • Prediction 4: The upcoming supply chain transformation
  • Prediction 5: Custom silicon goes mainstream
 

For those majoring in Engineering

 

 

 

6 trends are driving the use of #metaverse tech today. These trends and technologies will continue to drive its use over the next 3 to 5 years:

1. Gaming
2. Digital Humans
3. Virtual Spaces
4. Shared Experiences
5. Tokenized Assets
6. Spatial Computing
#GartnerSYM


“Despite all of the hype, the adoption of #metaverse tech is nascent and fragmented.” 


Also relevant/see:

According to Apple CEO Tim Cook, the Next Internet Revolution Is Not the Metaverse. It’s This — from inc.com by Nick Hobson
The metaverse is just too wacky and weird to be the next big thing. Tim Cook is betting on AR.

Excerpts:

While he might know a thing or two about radical tech, to him it’s unconvincing that the average person sufficiently understands the concept of the metaverse enough to meaningfully incorporate it into their daily life.

The metaverse is just too wacky and weird.

And, according to science, he might be on to something.

 

DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

Video games dreamed up other worlds. Now they’re coming for real architecture — from fastcompany.com by Nate Berg
A marriage between Epic Games and Autodesk could help communities see exactly what’s coming their way with new construction.

Excerpt:

Video games and architectural models are about to form a long overdue union. Epic Games and design software maker Autodesk are joining forces to help turn the utilitarian digital building models used by architects and designers from blocky representations into immersive spaces in which viewers can get a sense of a room’s dimensions and see how the light changes throughout the day. For both designers and the clients they’re designing for, this could help make architecture more nimble and understandable.

The AutoCAD model (top) and Twinmotion render (bottom) [Images: courtesy Autodesk]

Integrating Twinmotion software into Revit essentially shortens the time-sucking process of rendering models into high-resolution images, animations, and virtual-reality walkthroughs from hours to seconds. “If you want to see your design in VR, in Twinmotion you push the VR button,” says Epic Games VP Marc Petit. “You want to share a walkthrough on the cloud, you can do that.”


From DSC:
An interesting collaboration! Perhaps this will be useful for those designing/implementing learning spaces as well.


 

Apple just quietly gave us the golden key to unlock the Metaverse — from medium.com by Klas Holmlund; with thanks to Ori Inbar out on Twitter for this resource

Excerpt:

But the ‘Oh wow’ moment came when I pointed the app at a window. Or a door. Because with a short pause, a correctly placed 3D model of the window snapped in place. Same with a door. But the door could be opened or closed. RoomPlan did not care. It understands a door. It understands a chair. It understands a cabinet. And when it sees any of these things, it places a model of them, with the same dimensions, in the model.

Oh, the places you will go!
OK, so what will this mean to Metaverse building? Why is this a big deal? Well, to someone who is not a 3D modeler, it is hard to overstate how much work has to go into generating useable geometry. The key word, here, being useable. To be able to move around and exist in a VR space, the geometry has to be optimized. You’re not going to have a fun party if your dinner guests fall through a hole in reality. This technology will let you create a full digital twin of any space you are in, in the time it takes you to look around.

In a future Apple VR or AR headset, this technology will obviously be built in. You will build a VR-capable digital twin of any space you are in just by wearing the headset. All of this is optimized.
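
To make the RoomPlan idea above a bit more concrete, here is a minimal sketch in Swift of how an app might run a RoomPlan capture session and then inspect the doors, windows, and furniture the framework recognized. It assumes iOS 16+ and a LiDAR-equipped device, follows Apple’s RoomPlan framework as I understand it, and simplifies the delegate and export details; treat it as a sketch rather than a drop-in implementation.

```swift
import UIKit
import RoomPlan

// A minimal capture controller; RoomCaptureView supplies the scanning UI.
final class RoomScanViewController: UIViewController, RoomCaptureViewDelegate {
    private var captureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()
        captureView = RoomCaptureView(frame: view.bounds)
        captureView.delegate = self
        view.addSubview(captureView)

        // Start scanning; the user walks around the room while RoomPlan
        // builds a parametric model of walls, openings, and objects.
        captureView.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // Called when a scan finishes and a processed room model is available.
    func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
        guard error == nil else { return }

        // RoomPlan returns semantic elements with real-world dimensions,
        // not just raw mesh data.
        print("Walls: \(processedResult.walls.count)")
        print("Doors: \(processedResult.doors.count)")
        print("Windows: \(processedResult.windows.count)")
        for object in processedResult.objects {
            print("Found \(object.category) with dimensions \(object.dimensions)")
        }

        // Export the whole room as a USDZ file for use in other 3D tools.
        let url = FileManager.default.temporaryDirectory.appendingPathComponent("Room.usdz")
        try? processedResult.export(to: url)
    }
}
```

The point is simply that the framework hands back dimensioned, semantic elements (walls, doors, windows, furniture) rather than an unstructured scan, which is what makes the “instant digital twin” idea described above plausible.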

Also with thanks to Ori Inbar:


Somewhat relevant/see:

“The COVID-19 pandemic spurred us to think creatively about how we can train the next generation of electrical construction workers in a scalable and cost-effective way,” said Beau Pollock, president and CEO of TRIO Electric. “Finding electrical instructors is difficult and time-consuming, and training requires us to use the same materials that technicians use on the job. The virtual simulations not only offer learners real-world experience and hands-on practice before they go into the field, they also help us to conserve resources in the process.”


 

Top 5 Developments in Web 3.0 We Will See in the Next Five Years — from intelligenthq.com

Excerpt:

Today, websites have become highly engaging, and the internet is full of exciting experiences. Yet Web 3.0 is coming, with noteworthy trends and developments to look out for.

Here are the top 5 developments in Web 3.0 expected in the coming five years.

 

European telco giants collaborate on 5G-powered holographic videocalls — from inavateonthenet.net

Excerpt:

Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.

Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.

Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.

Top XR Vendors Majoring in Education for 2022 — from xrtoday.com

Excerpt:

Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.

In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.

With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.

 

Five Impossible Figure Illusions — from theawesomer.com

Speaking of creativity, check these other ones out as well!

Everyday Objects and Buildings Float Atmospherically in Cinta Vidal’s Perception-Bending Murals — by Kate Mothes and Cinta Vidal

“Public Space” (August 2022) in Toftlund, Denmark, curated by Kunstbureau Kolossal. All images © Cinta Vidal

 

Artist Spotlight: Arthur Maslard a.k.a. Ratur — from booooooom.com

 