35 Ways Real People Are Using A.I. Right Now — from nytimes.com by Francesca Paris and Larry Buchanan

From DSC:
It was interesting to see how people are using AI these days. The article mentions everything from planning gluten-free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers, and educators in general may find Elicit, Scholarcy, and Scite to be useful tools. I put a question into Elicit and the results look interesting. I like their interface, which allows me to quickly re-sort things.

Snapshot of a query result from a tool called Elicit


 

There Is No A.I. — from newyorker.com by Jaron Lanier
There are ways of controlling the new technology—but first we have to stop mythologizing it.

Excerpts:

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

 


 

Resource per Steve Nouri on LinkedIn


 


Meet Adobe Firefly. — from adobe.com
Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.

Generative AI made for creators.
With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.


Also relevant/see:


Gen-2: The Next Step Forward for Generative AI — from research.runwayml.com
A multi-modal AI system that can generate novel videos with text, images, or video clips.

Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse (the keynote was held on March 21, 2023)

 


Addenda on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 
 

Tech predictions for 2023 and beyond — from allthingsdistributed.com by Werner Vogels, Chief Technology Officer at Amazon

Excerpts:

  • Prediction 1: Cloud technologies will redefine sports as we know them
  • Prediction 2: Simulated worlds will reinvent the way we experiment
  • Prediction 3: A surge of innovation in smart energy
  • Prediction 4: The upcoming supply chain transformation
  • Prediction 5: Custom silicon goes mainstream
 

For those majoring in Engineering

 

 

 

6 trends are driving the use of #metaverse tech today. These trends and technologies will continue to drive its use over the next 3 to 5 years:

1. Gaming
2. Digital Humans
3. Virtual Spaces
4. Shared Experiences
5. Tokenized Assets
6. Spatial Computing
#GartnerSYM


“Despite all of the hype, the adoption of #metaverse tech is nascent and fragmented.” 


Also relevant/see:

According to Apple CEO Tim Cook, the Next Internet Revolution Is Not the Metaverse. It’s This — from inc.com by Nick Hobson
The metaverse is just too wacky and weird to be the next big thing. Tim Cook is betting on AR.

Excerpts:

While he might know a thing or two about radical tech, he is unconvinced that the average person understands the concept of the metaverse well enough to meaningfully incorporate it into their daily life.

The metaverse is just too wacky and weird.

And, according to science, he might be on to something.

 

DSC: What?!?! How might this new type of “parallel reality” impact smart classrooms, conference rooms, and board rooms? And/or our living rooms? Will it help deliver more personalized learning experiences within a classroom?


 

Video games dreamed up other worlds. Now they’re coming for real architecture — from fastcompany.com by Nate Berg
A marriage between Epic Games and Autodesk could help communities see exactly what’s coming their way with new construction.

Excerpt:

Video games and architectural models are about to form a long overdue union. Epic Games and design software maker Autodesk are joining forces to help turn the utilitarian digital building models used by architects and designers from blocky representations into immersive spaces in which viewers can get a sense of a room’s dimensions and see how the light changes throughout the day. For both designers and the clients they’re designing for, this could help make architecture more nimble and understandable.

The AutoCAD model (top) and Twinmotion render (bottom) [Images: courtesy Autodesk]

Integrating Twinmotion software into Revit essentially shortens the time-sucking process of rendering models into high-resolution images, animations, and virtual-reality walkthroughs from hours to seconds. “If you want to see your design in VR, in Twinmotion you push the VR button,” says Epic Games VP Marc Petit. “You want to share a walkthrough on the cloud, you can do that.”


From DSC:
An interesting collaboration! Perhaps this will be useful for those designing/implementing learning spaces as well.


 

Apple just quietly gave us the golden key to unlock the Metaverse — from medium.com by Klas Holmlund; with thanks to Ori Inbar out on Twitter for this resource

Excerpt:

But the ‘Oh wow’ moment came when I pointed the app at a window. Or a door. Because with a short pause, a correctly placed 3D model of the window snapped in place. Same with a door. But the door could be opened or closed. RoomPlan did not care. It understands a door. It understands a chair. It understands a cabinet. And when it sees any of these things, it places a model of them, with the same dimensions, in the model.

Oh, the places you will go!
OK, so what will this mean for Metaverse building? Why is this a big deal? Well, to someone who is not a 3D modeler, it is hard to overstate how much work has to go into generating usable geometry. The key word here being usable. To be able to move around and exist in a VR space, the geometry has to be optimized. You’re not going to have a fun party if your dinner guests fall through a hole in reality. This technology will let you create a fully digital twin of any space you are in, in the time it takes you to look around.

In a future Apple VR or AR headset, this technology will obviously be built in. You will build a VR-capable digital twin of any space you are in just by wearing the headset. All of this is optimized.

Also with thanks to Ori Inbar:


Somewhat relevant/see:

“The COVID-19 pandemic spurred us to think creatively about how we can train the next generation of electrical construction workers in a scalable and cost-effective way,” said Beau Pollock, president and CEO of TRIO Electric. “Finding electrical instructors is difficult and time-consuming, and training requires us to use the same materials that technicians use on the job. The virtual simulations not only offer learners real-world experience and hands-on practice before they go into the field, they also help us to conserve resources in the process.”


 

Top 5 Developments in Web 3.0 We Will See in the Next Five Years — from intelligenthq.com

Excerpt:

Today, websites have become highly engaging, and the internet is full of exciting experiences. Yet web 3.0 is coming with noteworthy trends and things to look out for.

Here are the top 5 developments in web 3.0 expected in the coming five years.

 

European telco giants collaborate on 5G-powered holographic videocalls — from inavateonthenet.net

Excerpt:

Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.

Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.

Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.

Top XR Vendors Majoring in Education for 2022 — from xrtoday.com

Excerpt:

Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.

In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.

With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.

 

Five Impossible Figure Illusions — from theawesomer.com

Speaking of creativity, check these other ones out as well!

Everyday Objects and Buildings Float Atmospherically in Cinta Vidal’s Perception-Bending Murals — by Kate Mothes and Cinta Vidal

“Public Space” (August 2022) in Toftlund, Denmark, curated by Kunstbureau Kolossal. All images © Cinta Vidal

 

Artist Spotlight: Arthur Maslard a.k.a. Ratur — from booooooom.com

 

Bring Real-Time 3D Into the Classroom, and Teach for the Future — from edsurge.com by Melissa Oldrin and Davis Hepnar

Excerpt:

Real-time 3D (RT3D) is redefining interactive content. No longer confined to the realm of video games, this technology now plays key roles in industries as wide-ranging as architecture, medicine, automotive, aerospace and film.

Demand is growing rapidly for developers, programmers and artists skilled in working with Unity—the leading platform for creating and operating real-time 3D content. As use cases expand, and the much-discussed metaverse takes shape, educators today have an opportunity to prepare their students for the technology careers of tomorrow.

Real-time 3D is a technology that creates three-dimensional models, environments and complete virtual worlds that can be rendered instantly. This content goes far beyond traditional formats like film, television and print because it isn’t static; it’s both immersive and interactive. And it offers incredibly lifelike graphics while giving users precise, immediate control over their experience. In doing so, RT3D creates endless possibilities for media production and engagement.

 

3D Scanner Lets You Capture The Real World In VR — from vrscout.com by Kyle Melnick

Excerpt:

VR is about to get a whole lot more real.

Imagine having the power to capture your real-world environment as a hyper-realistic 3D model from the palm of your hand. Well, wonder no more, as peel 3d, a developer of professional-grade 3D scanners, today announced the launch of peel 3 and peel 3.CAD, two new easy-to-use 3D scanners capable of generating high-quality 3D scans for a wide variety of digital mediums, including VR and augmented reality (AR).

 

NASA & Google Partner To Create An AR Solar System — from vrscout.com by Kyle Melnick

Excerpt:

[On 9/14/22], Google Arts & Culture announced that it has partnered with NASA to further extend its virtual offerings with a new online exhibit featuring a collection of new-and-improved 3D models of our universe brought to life using AR technology.

These 3D models are for more than just entertainment, however. The virtual solar system exhibit features historical annotations that, when selected, display valuable information. Earth’s moon, for example, features landing sites for Apollo 11 and China’s Chang’e-4.

 

Dive Into AI, Avatars and the Metaverse With NVIDIA at SIGGRAPH — from blogs.nvidia.com

Excerpt:

Innovative technologies in AI, virtual worlds and digital humans are shaping the future of design and content creation across every industry. Experience the latest advances from NVIDIA in all these areas at SIGGRAPH, the world’s largest gathering of computer graphics experts, [which ran from Aug. 8-11].

At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution — from blogs.nvidia.com by Rick Merritt
NVIDIA unveils new products and research to transform industries with AI, the metaverse and digital humans.

NVIDIA AI Makes Performance Capture Possible With Any Camera — from blogs.nvidia.com by Isha Salian
Derivative, Notch, Pixotope and others use NVIDIA Vid2Vid Cameo and 3D body-pose estimation tools to drive performances in real time.

How to Start a Career in AI — from blogs.nvidia.com by Brian Caulfield
Four most important steps to starting a career in AI, seven big questions answered.

As Far as the AI Can See: ILM Uses Omniverse DeepSearch to Create the Perfect Sky — from blogs.nvidia.com by Richard Kerris
Omniverse AI-enabled search tool lets legendary studio sift through massive database of 3D scenes.

Future of Creativity on Display ‘In the NVIDIA Studio’ During SIGGRAPH Special Address — from blogs.nvidia.com by Gerardo Delgado
Major NVIDIA Omniverse updates power 3D virtual worlds, digital twins and avatars, reliably boosted by the August NVIDIA Studio Driver; #MadeInMachinima contest winner revealed.

What Is Direct and Indirect Lighting? — from blogs.nvidia.com by JJ Kim
In computer graphics, the right balance between direct and indirect lighting elevates the photorealism of a scene.

NVIDIA Studio Laptops Offer Students AI, Creative Capabilities That Are Best in… Class — from blogs.nvidia.com by Gerardo Delgado
Designed for creativity and speed, Studio laptops are the ultimate creative tool for aspiring 3D artists, video editors, designers and photographers.

Design in the Age of Digital Twins: A Conversation With Graphics Pioneer Donald Greenberg — from blogs.nvidia.com by Rick Merritt
From his Cornell office, home to a career of 54 years and counting, he shares with SIGGRAPH attendees his latest works in progress.

 
© 2024 | Daniel Christian