What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation.
  • We can search videos for spoken words and/or for words listed within slides within a presentation.

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.
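The transcript and search capabilities in the list above rest on a simple idea: once a speech-to-text tool has produced timestamped segments, finding where a word was spoken is just a filter over those segments. A minimal sketch, using invented segment data:

```python
# Toy illustration of "search videos for spoken words": given transcript
# segments with timestamps (as produced by any speech-to-text tool),
# find every moment a keyword was spoken. Segment data is made up.

def search_transcript(segments, keyword):
    """Return (timestamp, text) pairs whose text mentions the keyword."""
    keyword = keyword.lower()
    return [(t, text) for t, text in segments if keyword in text.lower()]

segments = [
    (12.4, "Welcome to today's presentation on learning ecosystems."),
    (48.0, "Text-to-image tools like DALL-E are improving quickly."),
    (95.7, "Finally, let's talk about accessibility and transcripts."),
]

for timestamp, text in search_transcript(segments, "transcript"):
    print(f"{timestamp:>6.1f}s  {text}")
```

Real tools add speaker identification and fuzzy matching, but the core affordance for teachers and trainers is exactly this: spoken words become queryable data.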



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm… interesting times ahead.

 

OpenAI Says DALL-E Is Generating Over 2 Million Images a Day—and That’s Just Table Stakes — from singularityhub.com by Jason Dorrier

Excerpt:

The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.

Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.

And that was before OpenAI opened DALL-E 2 to everyone.

 


From DSC:
Further on down that Tweet is this example image — wow!

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window



On the video side of things, also relevant/see:

Meta’s new text-to-video AI generator is like DALL-E for video — from theverge.com by James Vincent
Just type a description and the AI generates matching footage

A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta


From DSC:
Hmmm… I wonder: how might these emerging technologies impact copyrights, intellectual property, and other legal matters?


 

HSF embraces the metaverse with new digital law course for students — from legalcheek.com by Thomas Connelly

Excerpt:

The global law firm has launched a series of free workshops exploring how lawyers help clients navigate novel legal and regulatory issues relating to tech topics including the metaverse, non-fungible tokens (NFTs), robotics and artificial intelligence (AI).

From DSC:
This kind of thing needs to happen in law schools across many countries.

 

Also relevant/see the following post I created roughly a month ago:

In the USA, the perspectives of the ABA re: online-based learning — and their take on the pace of change — are downright worrisome.

In that posting I said:

For an industry in the 21st century whose main accreditation/governance body for law schools still won’t let more online learning occur without waivers…how can our nation expect future lawyers and law firms to be effective in an increasingly tech-enabled world?

The pace of the ABA is like that of a tortoise, while the pace of change is exponential.

 

California Moves Forward to Allow Vital Records to be Issued on Blockchain — from coindesk.com by Jesse Hamilton
Governor Gavin Newsom signed a law [last] week that establishes a blockchain option for delivering individuals’ records, such as birth and marriage certificates


Speaking of blockchain, these next two resources come from Roberto Ferraro’s weekly e-newsletter:

Blockchain 101 – A Visual Demo — from andersbrownworth.com

Blockchain 101 – Part 2 – Public / Private Keys and Signing
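The core idea behind Brownworth’s demo (each block’s hash covers both its own data and the previous block’s hash, so tampering with any record invalidates every block after it) can be sketched in a few lines of Python; the record contents below are invented for illustration:

```python
# Minimal hash-chain sketch of the idea in Brownworth's visual demo:
# each block's hash depends on its data AND the previous block's hash,
# so editing any block breaks validation for the rest of the chain.
import hashlib

def block_hash(index, prev_hash, data):
    return hashlib.sha256(f"{index}:{prev_hash}:{data}".encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis block points at all zeros
    for i, data in enumerate(records):
        h = block_hash(i, prev, data)
        chain.append({"index": i, "prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["index"], prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["birth certificate #1", "marriage certificate #2"])
print(is_valid(chain))         # True
chain[0]["data"] = "tampered"  # edit an early block...
print(is_valid(chain))         # ...and the chain no longer validates: False
```

This tamper-evidence is exactly the property that makes blockchains attractive for vital records like the California example above.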

 

AI/ML in EdTech: The Miracle, The Grind, and the Wall — from eliterate.us by Michael Feldstein

Excerpt:

Essentially, I see three stages in working with artificial intelligence and machine learning (AI/ML). I call them the miracle, the grind, and the wall. These stages can have implications for both how we can get seduced by these technologies and how we can get bitten by them. The ethical implications are important.

 
 

Apple just quietly gave us the golden key to unlock the Metaverse — from medium.com by Klas Holmlund; with thanks to Ori Inbar out on Twitter for this resource

Excerpt:

But the ‘Oh wow’ moment came when I pointed the app at a window. Or a door. Because with a short pause, a correctly placed 3D model of the window snapped in place. Same with a door. But the door could be opened or closed. RoomPlan did not care. It understands a door. It understands a chair. It understands a cabinet. And when it sees any of these things, it places a model of them, with the same dimensions, in the model.

Oh, the places you will go!
OK, so what will this mean for Metaverse building? Why is this a big deal? Well, for someone who is not a 3D modeler, it is hard to overstate how much work has to go into generating useable geometry. The key word here being useable. To be able to move around and exist in a VR space, the geometry has to be optimized. You’re not going to have a fun party if your dinner guests fall through a hole in reality. This technology will let you create a full digital twin of any space you are in, in the time it takes you to look around.

In a future Apple VR or AR headset, this technology will obviously be built in. You will build a VR-capable digital twin of any space you are in just by wearing the headset. All of this is optimized.



Somewhat relevant/see:

“The COVID-19 pandemic spurred us to think creatively about how we can train the next generation of electrical construction workers in a scalable and cost-effective way,” said Beau Pollock, president and CEO of TRIO Electric. “Finding electrical instructors is difficult and time-consuming, and training requires us to use the same materials that technicians use on the job. The virtual simulations not only offer learners real-world experience and hands-on practice before they go into the field, they also help us to conserve resources in the process.”


 

This Uncensored AI Art Tool Can Generate Fantasies—and Nightmares — from wired.com by Will Knight
Open source project Stable Diffusion allows anyone to conjure images with algorithms, but some fear it will be used to create unethical horrors.

Excerpt:

Image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding and removing random noise to an image.
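The “adding and removing random noise” idea in the excerpt can be illustrated with a toy sketch. This is not Stable Diffusion itself, just the forward noising step such models train against; the “image” here is a made-up list of pixel values:

```python
# Toy illustration of the forward "noising" process: an image is
# gradually blended with random noise. A diffusion model's training
# task is to learn to undo each step, so at generation time it can
# start from pure noise and "remove" it, guided by a text prompt.
import random

random.seed(0)
image = [random.random() for _ in range(64)]  # stand-in pixel values

def add_noise(pixels, t, num_steps=10):
    """Blend pixels toward Gaussian noise as t goes from 0 to num_steps."""
    alpha = 1.0 - t / num_steps  # how much of the original survives
    return [alpha * p + (1 - alpha) * random.gauss(0, 1) for p in pixels]

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

slightly_noisy = add_noise(image, t=1)
mostly_noise = add_noise(image, t=9)

# Early steps stay close to the image; late steps are dominated by noise.
print(mean_abs_diff(slightly_noisy, image) < mean_abs_diff(mostly_noise, image))
```

The real models do this in a learned latent space with text conditioning, but the step-by-step noise schedule is the same basic shape.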

Also relevant/see:

There’s a text-to-image AI art app for Mac now—and it will change everything — from fastcompany.com by Jesus Diaz
Diffusion Bee harnesses the power of the open source text-to-image AI Stable Diffusion, turning it into a one-click Mac App. Brace yourself for a new creativity Big Bang.



 

Top 5 Developments in Web 3.0 We Will See in the Next Five Years — from intelligenthq.com

Excerpt:

Today’s websites have become highly engaging, and the internet is full of exciting experiences. Yet web 3.0 is coming, with noteworthy trends and things to look out for.

Here are the top 5 developments in web 3.0 expected in the coming five years.

 

European telco giants collaborate on 5G-powered holographic videocalls — from inavateonthenet.net

Excerpt:

Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.

Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.

Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.

Top XR Vendors Majoring in Education for 2022 — from xrtoday.com

Excerpt:

Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.

In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.

With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.

 

Keynote Wrap-Up: NVIDIA CEO Unveils Next-Gen RTX GPUs, AI Workflows in the Cloud — from blogs.nvidia.com by Brian Caulfield
Kicking off GTC, Jensen Huang unveils advances in natural language understanding, the metaverse, gaming and AI technologies impacting industries from transportation and healthcare to finance and entertainment.

Excerpt (emphasis DSC):

New cloud services to support AI workflows and the launch of a new generation of GeForce RTX GPUs featured [on 9/20/22] in NVIDIA CEO Jensen Huang’s GTC keynote, which was packed with new systems, silicon, and software.

“Computing is advancing at incredible speeds, the engine propelling this rocket is accelerated computing, and its fuel is AI,” Huang said during a virtual presentation as he kicked off NVIDIA GTC.

Again and again, Huang connected new technologies to new products to new opportunities – from harnessing AI to delight gamers with never-before-seen graphics to building virtual proving grounds where the world’s biggest companies can refine their products.

Driving the deluge of new ideas, new products and new applications: a singular vision of accelerated computing unlocking advances in AI, which, in turn, will touch industries around the world.

Also relevant/see:

 

Bring Real-Time 3D Into the Classroom, and Teach for the Future — from edsurge.com by Melissa Oldrin and Davis Hepnar

Excerpt:

Real-time 3D (RT3D) is redefining interactive content. No longer confined to the realm of video games, this technology now plays key roles in industries as wide-ranging as architecture, medicine, automotive, aerospace and film.

Demand is growing rapidly for developers, programmers and artists skilled in working with Unity—the leading platform for creating and operating real-time 3D content. As use cases expand, and the much-discussed metaverse takes shape, educators today have an opportunity to prepare their students for the technology careers of tomorrow.

Real-time 3D is a technology that creates three-dimensional models, environments and complete virtual worlds that can be rendered instantly. This content goes far beyond traditional formats like film, television and print because it isn’t static; it’s both immersive and interactive. And it offers incredibly lifelike graphics while giving users precise, immediate control over their experience. In doing so, RT3D creates endless possibilities for media production and engagement.

 

What if smart TVs’ new killer app was a next-generation learning-related platform? [Christian]

TV makers are looking beyond streaming to stay relevant — from protocol.com by Janko Roettgers and Nick Statt

A smart TV's main menu listing what's available -- application wise

Excerpts:

The search for TV’s next killer app
TV makers have some reason to celebrate these days: Streaming has officially surpassed cable and broadcast as the most popular form of TV consumption; smart TVs are increasingly replacing external streaming devices; and the makers of these TVs have largely figured out how to turn those one-time purchases into recurring revenue streams, thanks to ad-supported services.

What TV makers need is a new killer app. Consumer electronics companies have for some time toyed with the idea of using TV for all kinds of additional purposes, including gaming, smart home functionality and fitness. Ad-supported video took priority over those use cases over the past few years, but now, TV brands need new ways to differentiate their devices.

Turning the TV into the most useful screen in the house holds a lot of promise for the industry. To truly embrace this trend, TV makers might have to take some bold bets and be willing to push the envelope on what’s possible in the living room.

 


From DSC:
What if smart TVs’ new killer app was a next-generation learning-related platform? Could smart TVs deliver more blended/hybrid learning? Hyflex-based learning?

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Or what if smart TVs delivered telehealth-based apps? Or telelegal/virtual court-based apps?


 

Nurses to trial VR to free up time with patients — from inavateonthenet.net

Excerpt:

UK nurses are set to trial “virtual reality style” goggles to free up time with patients in home visits, transcribing appointments in real time and sharing footage for second opinions.

Nurses from the Northern Lincolnshire and Goole NHS Foundation Trust will trial the technology and be able to transcribe appointment notes directly to electronic records. This will allow nurses to cut down on administrative paperwork and free up more time for home visits.

From DSC:
I wonder if AR will be used in applications like these in the near future…?

 

McKinsey Technology Trends Outlook 2022

Excerpt:

Which technology trends matter most for companies in 2022? New analysis by the McKinsey Technology Council highlights the development, possible uses, and industry effects of advanced technologies.


 
© 2024 | Daniel Christian