The era of easily faked, AI-generated photos is quickly emerging — from qz.com by Dave Gershgorn

Excerpt (emphasis DSC):

Until this month, it seemed that GAN-generated images [where GAN stands for “generative adversarial networks”] that could fool a human viewer were years off. But last week research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers—a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.


From DSC:
So AI can now generate realistic photos (i.e., image creation/manipulation). And then there's Adobe's Project VoCo (a sort of Photoshop for audio manipulation), plus other related technologies out there:


So I guess it’s like the first article concludes:

The era of easily-faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.

…and perhaps we'll need to add, "we shouldn't trust everything we hear either." But how will the average person with average tools know the real deal? The concept of watermarking visuals/audio may become increasingly important. From the ending of the bbc.com article:

For its part, Adobe has talked of its customers using Voco to fix podcast and audio book recordings without having to rebook presenters or voiceover artists.

But a spokeswoman stressed that this did not mean its release was imminent.

“[It] may or may not be released as a product or product feature,” she told the BBC.

“No ship date has been announced.”

In the meantime, Adobe said it was researching ways to detect use of its software.

“Think about watermarking detection,” Mr Jin said at the demo, referring to a method used to hide identifiers in images and other media.
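To make the watermarking idea above concrete, here is a minimal sketch of least-significant-bit (LSB) embedding, one common way to hide an identifier inside image data. This is an illustrative toy, not Adobe's actual detection method: the "image" is just a list of 8-bit greyscale pixel values, and the identifier string is hypothetical.

```python
def embed_watermark(pixels, identifier):
    """Hide each bit of `identifier` in the lowest bit of successive pixels."""
    bits = [(byte >> i) & 1
            for byte in identifier.encode("ascii")
            for i in range(7, -1, -1)]          # MSB-first per character
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit            # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back `length` ASCII characters from the pixels' lowest bits."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

# Changing only the lowest bit moves each pixel by at most 1 out of 255,
# so the mark is effectively invisible to the eye but recoverable by software.
image = [100, 101, 102, 103] * 20               # 80 fake greyscale pixels
marked = embed_watermark(image, "VoCo")         # hypothetical identifier
print(extract_watermark(marked, 4))             # VoCo
```

Real systems are far more robust (they survive cropping, compression, and re-encoding), but the principle is the same: the identifier rides along inside the media itself.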


But again, we see that technology often races ahead ("Look at what we can do!") while the rest of society needs time to catch up: developing laws and policies, and asking whether we should roll out such technologies at all. Morals and ethics come into play here, as trust levels are most assuredly at stake.

Another relevant article/topic/example of this is listed below. (Though I'm not trying to say that we shouldn't pursue self-driving cars. Rather, the topic serves as another example of technology racing ahead while it takes a while for the rest of society to catch up.)