From DSC:
Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:


Our World Shaken, Not Stirred: Synthetic entertainment, hybrid social experiences, syncing ourselves with apps, and more. — from implications.com by Scott Belsky
Things will get weird. And exciting.

Excerpts:

Recent advances in technology will shake (not stir) the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.

A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – are used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost-prohibitive, and copyright wars and legislation.

Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spin-offs that I expect we will start seeing.


Also relevant/see:

Digital storytelling with generative AI: notes on the appearance of #AICinema — from bryanalexander.org by Bryan Alexander

Excerpt:

This is how I viewed a fascinating article about the so-called #AICinema movement. Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland. It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.

Bryan links to:

Artists astound with AI-generated film stills from a parallel universe — from arstechnica.com by Benj Edwards
A Q&A with “synthographer” Julie Wieland on the #aicinema movement.

An AI-generated image from an #aicinema still series called Vinyl Vengeance by Julie Wieland, created using Midjourney.


From DSC:
How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?

Hmmm…should be interesting!


 


Meet Adobe Firefly. — from adobe.com
Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.

Generative AI made for creators.
With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.


Also relevant/see:


Gen-2: The Next Step Forward for Generative AI — from research.runwayml.com
A multi-modal AI system that can generate novel videos with text, images, or video clips.

Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.

 

Explore Breakthroughs in AI, Accelerated Computing, and Beyond at GTC — from nvidia.com
The Conference for the Era of AI and the Metaverse (the keynote was held on March 21, 2023)

 


Addendums on 3/22/23:

Generative AI for Enterprises — from nvidia.com
Custom-built for a new era of innovation and automation.

Excerpt:

Impacting virtually every industry, generative AI unlocks a new frontier of opportunities—for knowledge and creative workers—to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, as well as cutting-edge frameworks, optimized inference engines, and APIs to bring intelligence to your enterprise applications.

NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud—the AI supercomputer.

 
 

Mixed media online project serves as inspiration for student journalists — from jeadigitalmedia.org by Michelle Balmeo

Excerpt:

If you’re on the hunt for inspiration, go check out Facing Life: Eight stories of life after life in California’s prisons.

This project, created by Pendarvis Harshaw and Brandon Tauszik, has so many wonderful and original storytelling components, it’s the perfect model for student journalists looking for ways to tell important stories online.

Facing Life -- eight stories of life after life in California's prisons

 

AI starter tools for video content creation — from techthatmatters.beehiiv.com by Harsh Makadia

Excerpt:

One of the most exciting applications of AI is in the realm of content creation. What if I told you there are tools that can generate videos in minutes?

Try these tools today:

  • Supercreator AI: Create short-form videos 10x faster
  • Lumen5: Automatically turn blog posts into videos
  • InVideo: Go from idea to YouTube video
  • Synthesia: Create videos from plain text in minutes
  • Narakeet: Get professional-sounding audio or video in minutes
  • Movio: Create engaging video content
 

 

Along the lines of tools, also see:

6 Apps and Websites to Make Video Journals — from classtechtips.com by Monica Burns

Excerpt:

Have you made video journals with your students? Earlier this year on the blog, I shared some of the reasons why this medium is worth considering. Today on the blog, we’ll look at six apps and websites to make video journals alongside a few more reasons why video journals are worth considering.

There are several reasons why a teacher might introduce video journals to their students. First, video journals can be a powerful tool for self-reflection and personal growth. By creating and reflecting on videos of themselves, students can better understand their own thoughts, feelings, and behaviors and identify areas for improvement.

Video journals can also provide a creative outlet for students to express themselves. Students may enjoy experimenting with different video-editing techniques and sharing their work with their peers. This experience using video tools can transfer to additional learning experiences, especially because video journals can help students practice their communication skills in speaking and visual storytelling.


And speaking of tools and technologies, also see:

A Valentine for Education Technology — from campustechnology.com by Mary Grush and Gardner Campbell

Excerpt:

However, in making some adjustments for an online experience, I soon realized that I was able, actually, to enhance aspects of the Read-a-thon and thus the course as well. For the first time, I could choose to invite the wider Milton community to the event, encouraging people to join from many different points across the country and potentially around the globe!

I like certain kinds of management technologies that help me with record keeping and organization. But I don’t love them. I love technologies of communication because, I think, they’re at the heart of teaching and learning — creating opportunities for human beings to think together, to study something together.

 

From DSC:
Check out the items below. As with most technologies, there are likely going to be plusses & minuses regarding the use of AI in digital video, communications, arts, and music.




 

From DSC:
Our son recently took a 3-day intensive course on the Business of Acting. It was offered by the folks at “My College Audition” — and importantly, not by the university where he is currently working on a BFA in Acting. By the way, aspiring performing arts students may find this site very helpful as well. (Example blog posting here.)

mycollegeaudition.com/

The course actually ran for three hours a night: Sunday, Monday, and Tuesday evenings from 6 to 9pm.

The business of acting -- a 3-day virtual intensive course from mycollegeaudition.com

He said he learned things that have not been taught in his undergrad program (at least not so far). When I asked what he liked most about the course, he said:

  • These people are out there doing this (DSC insert: To me, this sounds like the use of adjunct faculty in higher ed)
  • There were 9 speakers in the 9 hours of class time
  • They relayed plenty of resources that were very helpful and practical. He’s looking forward to pursuing these leads further.

He didn’t like that there were no discussion avenues/forums available. And as a paying parent, I didn’t like that we had to pay for yet another course with content that he wasn’t getting at his university. His university may offer such a course later in the curriculum. But after two years of college, he hasn’t come across anything this practical, and he is eagerly seeking out this type of practical, future-focused information. In fact, it’s critical to whether he stays with acting or not. He needs this information sooner in his program.

It made me reflect on the place of adjunct faculty within higher education — folks who are out there “doing” what they are teaching about. They tend to be more up-to-date in their real-world knowledge. Sabbaticals help full-time faculty in this regard, but they don’t come around nearly often enough to keep one’s practical, career-oriented knowledge base up to date.

Again, this dilemma is to be expected, given our current higher education learning ecosystem. Faculty members’ plates are full. They don’t have time to pursue this kind of endeavor on top of their daily responsibilities. Staff aren’t able to be out there “doing” these things either.

This brings me back to design thinking. We’ve got to come up with better ways of offering student-centered education, programming, and content/resources.

My son walked away shaking his head a bit regarding his current university. At a time when students and families are questioning the return on their investments in traditional institutions of higher education, this issue needs to be taken very seriously. 




 

Making a Digital Window Wall from TVs — from theawesomer.com

Drew Builds Stuff has an office in the basement of his parents’ house. Because of its subterranean location, it doesn’t get much light. To brighten things up, he built a window wall out of three 75-inch 4K TVs, resulting in a 12-foot diagonal image. Since he can load up any video footage, he can pretend to be anywhere on Earth.

From DSC:
Perhaps some ideas here for learning spaces!

 

From DSC:
I’m very proud of our sister Sue Ellen — who worked hard to bring this idea/vision/exhibit to reality.

Sue Ellen Christian


Kalamazoo Valley Museum explores media & its messages — from woodtv.com by Jessica Jurczak

Excerpt:

GRAND RAPIDS, Mich. (WOOD) – We are constantly on the lookout for fun ideas that also involve learning, and one of our go-to spots is the Kalamazoo Valley Museum! There’s a big exhibition there now called “Wonder Media: Ask the Questions!” As we all know, we’re bombarded every day with messages from all types of media: TV, movies, and social media. This exhibit encourages us to stop and evaluate some of those messages. The Kalamazoo Valley Museum also has a planetarium and vast science and history galleries, and today, we’re taking you inside!

 

 

What might the ramifications be for text-to-everything? [Christian]

From DSC:

  • We can now type in text to get graphics and artwork.
  • We can now type in text to get videos.
  • There are several tools to give us transcripts of what was said during a presentation (see the sketch just after this list).
  • We can search videos for spoken words and/or for words listed within slides within a presentation.
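
As a concrete example of that transcription piece, here’s a minimal sketch using OpenAI’s open-source Whisper model. This is just one way to do it, and the file name is a placeholder for any recorded lecture, webinar, or presentation.

```python
# Minimal sketch: generate a transcript of a recorded presentation
# with OpenAI's open-source Whisper model (pip install openai-whisper).
# "presentation.mp4" is a placeholder file name.
import whisper

model = whisper.load_model("base")             # a small model; larger ones are more accurate
result = model.transcribe("presentation.mp4")

print(result["text"])                          # the full transcript as plain text

# The time-stamped segments also make the spoken words searchable,
# much like the video-search capability mentioned above.
for segment in result["segments"]:
    print(f"{segment['start']:7.1f}s  {segment['text'].strip()}")
```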

Allie Miller’s posting on LinkedIn (see below) pointed these things out as well — along with several other things.



This raises some ideas/questions for me:

  • What might the ramifications be in our learning ecosystems for these types of functionalities? What affordances are forthcoming? For example, a teacher, professor, or trainer could quickly produce several types of media from the same presentation.
  • What’s said in a videoconference or a webinar can already be captured, translated, and transcribed.
  • Or what’s said in a virtual courtroom, or in a telehealth-based appointment. Or perhaps, what we currently think of as a smart/connected TV will give us these functionalities as well.
  • How might this type of thing impact storytelling?
  • Will this help someone who prefers to soak in information via the spoken word, or via a podcast, or via a video?
  • What does this mean for Augmented Reality (AR), Mixed Reality (MR), and/or Virtual Reality (VR) types of devices?
  • Will this kind of thing be standard in the next version of the Internet (Web3)?
  • Will this help people with special needs — and way beyond accessibility-related needs?
  • Will data be next (instead of typing in text)?

Hmmm…interesting times ahead.

 

Artificial Intelligence in Video Production — from provideocoalition.com by Iain Anderson

Introduction

Artificial Intelligence has promised so much for so long, and in many ways, it’s been less than perfect. Siri doesn’t quite understand what you’re asking; Google Translate can’t yet seamlessly bridge the gap between languages; and navigation software still doesn’t always offer an accurate route. But it’s getting pretty good. While a variety of AI-based technologies have been improving steadily over the past several years, it’s the recent giant strides made in image generation that are potentially groundbreaking for post-production professionals. Here, I’ll take you through the ideas behind the tech, along with specific examples of how modern, smart technology will change your post workflow tomorrow, or even today.

Also relevant/see:

 

OpenAI Says DALL-E Is Generating Over 2 Million Images a Day—and That’s Just Table Stakes — from singularityhub.com by Jason Dorrier

Excerpt:

The venerable stock image site, Getty, boasts a catalog of 80 million images. Shutterstock, a rival of Getty, offers 415 million images. It took a few decades to build up these prodigious libraries.

Now, it seems we’ll have to redefine prodigious. In a blog post last week, OpenAI said its machine learning algorithm, DALL-E 2, is generating over two million images a day. At that pace, its output would equal Getty and Shutterstock combined in eight months. The algorithm is producing almost as many images daily as the entire collection of free image site Unsplash.

And that was before OpenAI opened DALL-E 2 to everyone.
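
A quick back-of-the-envelope check of that eight-month figure, using only the numbers quoted above:

```python
# Sanity check of the excerpt's arithmetic (all figures come from the article above)
getty = 80_000_000           # Getty's catalog
shutterstock = 415_000_000   # Shutterstock's catalog
dalle_per_day = 2_000_000    # DALL-E 2's reported daily output

days = (getty + shutterstock) / dalle_per_day
print(days)        # 247.5 days
print(days / 30)   # ~8.25 months, consistent with the article's claim
```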

 


From DSC:
Further on down that Tweet is this example image — wow!

A photo of a quaint flower shop storefront with a pastel green and clean white facade and open door and big window



On the video side of things, also relevant/see:

Meta’s new text-to-video AI generator is like DALL-E for video — from theverge.com by James Vincent
Just type a description and the AI generates matching footage

A sample video generated by Meta’s new AI text-to-video model, Make-A-Video. The text prompt used to create the video was “a teddy bear painting a portrait.” Image: Meta


From DSC:
Hmmm…I wonder…how might these emerging technologies impact copyright, intellectual property, and/or other legal matters?


 