5 good tools to create whiteboard animations — from educatorstechnology.com

Excerpt:

In short, whiteboard animation (also called video scribing or animated doodling) is a video clip in which the creator records the process of drawing on a whiteboard while adding audio commentary. The final result is a beautiful synchronization of the drawings and the narration. In education, whiteboard animation videos are used in language teaching and learning, in professional development sessions, to create educational tutorials and presentations, and more. In today’s post, we are sharing with you some good web tools you can use to create whiteboard animation videos.

From DSC:
Is this only on Pixel 4? If so, too bad. It has a lot of potential — especially for students and lecture capture!

Speaking of lecture capture…Panopto offers an incredible search feature for searching text, audio, and video!

“With Panopto, you can search through your video library the same way you’d search across the internet, or through your email.

  • By any keyword spoken in your videos
  • By any word that ever appears on-screen or anywhere else in your video
  • By traditional and advanced metadata, including tags and titles, viewer notes and comments, and even speaker notes from your PowerPoint slides.
  • Panopto enables you to search across every video in your library…and get specific results that fast-forward to the exact moment the keyword occurs in your video.”
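The search experience described above — indexing every spoken or on-screen word with a timestamp, then jumping straight to the moment it occurs — can be sketched with a small inverted index. This is a toy illustration only; the data structures and names here are hypothetical and are not Panopto's actual (proprietary) system:

```python
# Minimal sketch of keyword search over timestamped video transcripts.
# All names and sample data are illustrative, not Panopto's API.

from collections import defaultdict

def build_index(transcripts):
    """transcripts: {video_id: [(seconds, word), ...]} -> inverted index."""
    index = defaultdict(list)
    for video_id, words in transcripts.items():
        for seconds, word in words:
            index[word.lower()].append((video_id, seconds))
    return index

def search(index, keyword):
    """Return (video_id, seconds) pairs where the keyword occurs."""
    return index.get(keyword.lower(), [])

transcripts = {
    "lecture-01": [(12.5, "mitosis"), (340.0, "meiosis")],
    "lecture-02": [(88.2, "mitosis")],
}
index = build_index(transcripts)
print(search(index, "Mitosis"))  # hits in both lectures, with timestamps
```

The timestamp attached to each hit is what lets a player "fast-forward to the exact moment the keyword occurs."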

Three threats posed by deepfakes that technology won’t solve — from technologyreview.com by Angela Chen
As deepfakes get better, companies are rushing to develop technology to detect them. But little of their potential harm will be fixed without social and legal solutions.

Excerpt:

3) Problem: Deepfake detection is too late to help victims
With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

 

Per Jane Hart on LinkedIn:

Top 200 Tools for Learning 2019 is now published, together with analysis of how these tools are being used in different contexts, new graphics, and updated comments on the tools’ pages that show how people are using the tools.

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020–2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.

Action Meter:

 

Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

Screen Mirroring, Screencasting and Screen Sharing in Higher Education — from edtechmagazine.com by Derek Rice
Digital learning platforms let students and professors interact through shared videos and documents.

Excerpt (emphasis DSC):

Active learning, collaboration, personalization, flexibility and two-way communication are the main factors driving today’s modern classroom design.

Among the technologies being brought to bear in academic settings are those that enable screen mirroring, screencasting and screen sharing, often collectively referred to as wireless presentation solutions.

These technologies are often supported by a device and app that allow users, both students and professors, to easily share content on a larger screen in a classroom.

“The next best thing to a one-to-one conversation is to be able to share what the students create, as part of the homework or class activity, or communicate using media to provide video evidence of class activities and enhance and build out reading, writing, speaking, listening, language and other skills,” says Michael Volpe, marketing manager for IOGEAR.

 

An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft — from washingtonpost.com by Drew Harwell

Excerpt:

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

 

From DSC:
Needless to say, this is very scary stuff! Now what…? Who in our society should get involved to thwart this kind of thing?

  • Programmers?
  • Digital audio specialists?
  • Legislators?
  • Lawyers?
  • The FBI?
  • Police?
  • Other?


Addendum on 9/12/19:


The coming deepfakes threat to businesses — from axios.com by Kaveh Waddell and Jennifer Kingson

Excerpt:

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company’s stock plummeting (or soaring), to extract money or to ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company’s share price would crumple.

What’s happening: For all the talk about fake videos, it’s deepfake audio that has emerged as the first real threat to the private sector.

 

From DSC…along these same lines see:

 

Pearson moves away from print textbooks — from campustechnology.com by Rhea Kelly

Excerpt:

All of Pearson’s 1,500 higher education textbooks in the U.S. will now be “digital first.” The company announced its big shift away from print today, calling the new approach a “product as a service model and a generational business shift to be much more like apps, professional software or the gaming industry.”

The digital format will allow Pearson to update textbooks on an ongoing basis, taking into account new developments in the field of study, new technologies, data analytics and efficacy research, the company said in a news announcement. The switch to digital will also lower the cost for students: The average e-book price will be $40, or $79 for a “full suite of digital learning tools.”

 

Microsoft’s new AI wants to help you crush your next presentation — from pcmag.com by Jake Leary
PowerPoint is receiving a slew of updates, including one that aims to help you improve your public speaking.

Excerpt:

Microsoft [on 6/18/19] announced several PowerPoint upgrades, the most notable of which is an artificial intelligence tool that aims to help you overcome pre-presentation jitters.

The Presenter Coach AI listens to you practice and offers real-time feedback on your pace, word choice, and more. It will, for instance, warn you if you’re using filler words like “umm” and “ahh,” profanities, non-inclusive language, or reading directly from your slides. At the end of your rehearsal, it provides a report with tips for future attempts. Presenter Coach arrives later this summer.


Stanford engineers make editing video as easy as editing text — from news.stanford.edu by Andrew Myers
A new algorithm allows video editors to modify talking head videos as if they were editing text – copying, pasting, or adding and deleting words.

Excerpts:

In television and film, actors often flub small bits of otherwise flawless performances. Other times they leave out a critical word. For editors, the only solution so far is to accept the flaws or fix them with expensive reshoots.

Imagine, however, if that editor could modify video using a text transcript. Much like word processing, the editor could easily add new words, delete unwanted ones or completely rearrange the pieces by dragging and dropping them as needed to assemble a finished video that looks almost flawless to the untrained eye.

The work could be a boon for video editors and producers but does raise concerns as people increasingly question the validity of images and videos online, the authors said. However, they propose some guidelines for using these tools that would alert viewers and performers that the video has been manipulated.
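The core idea in the excerpt above — word-level timestamps let a text edit become a video edit — can be sketched in a few lines. This is a deliberately simplified toy model, not the Stanford system (which additionally synthesizes new mouth movements so the splices look seamless); it only computes which time spans of the original footage to keep, in the edited order:

```python
# Toy sketch: turn an edited transcript into a cut list of time spans.
# words: each original spoken word with its (start, end) time in seconds.

def cut_list(words, edited_text):
    """words: [(word, start_sec, end_sec), ...] in original spoken order.
    edited_text: desired transcript, using words that appear in `words`.
    Returns (start, end) spans to splice together, in edited order."""
    spans = []
    used = set()
    for target in edited_text.split():
        # First unused occurrence of the target word wins.
        for i, (word, start, end) in enumerate(words):
            if i not in used and word == target:
                spans.append((start, end))
                used.add(i)
                break
    return spans

words = [("the", 0.0, 0.2), ("launch", 0.2, 0.7), ("is", 0.7, 0.9),
         ("not", 0.9, 1.1), ("delayed", 1.1, 1.6)]
print(cut_list(words, "the launch is delayed"))  # drops the span for "not"
```

Deleting a word from the transcript simply drops its span from the cut list; rearranging words reorders the spans — which is exactly why the technique raises the manipulation concerns the authors note.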

 

Addendum on 6/13/19:

 

An image created from a fake video of former president Barack Obama displays elements of facial mapping used in new technology that allows users to create convincing fabricated footage of real people, known as “deepfakes.” (AP)

Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sport stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

 

From DSC…I commented out on Twitter:

Thanks Paul for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I would be interested in knowing the age of the people developing these technologies and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

 

Addendum on 6/12/19:


Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com by
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
On one hand, incredible work! Fantastic job! On the other hand, if this type of deepfake can be done, how can any video be trusted from here on out? What technology/app will be able to confirm that a video is actually that person, actually saying those words?

Will we get to a point where a speaker says, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based tech be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
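The "electronic signature" idea above can at least be sketched: publish a keyed signature of the video file's bytes, so any later alteration (including a deepfake re-render) breaks verification. A real provenance system would use public-key signatures so anyone can verify without holding a secret — the C2PA content-provenance effort works roughly this way — but an HMAC keeps this sketch to the Python standard library:

```python
# Sketch of signing a video file so tampering is detectable.
# HMAC (shared secret) stands in for the public-key signatures a real
# provenance system would use; names and data here are illustrative.

import hashlib
import hmac

def sign_video(video_bytes: bytes, secret: bytes) -> str:
    """Produce a hex signature over the exact bytes of the video."""
    return hmac.new(secret, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, secret: bytes, signature: str) -> bool:
    """True only if the bytes are exactly what was originally signed."""
    expected = sign_video(video_bytes, secret)
    return hmac.compare_digest(expected, signature)

secret = b"speaker's signing key"      # hypothetical key held by the speaker
original = b"...raw video bytes..."    # stand-in for real footage

tag = sign_video(original, secret)
print(verify_video(original, secret, tag))         # untouched footage passes
print(verify_video(original + b"x", secret, tag))  # altered footage fails
```

A signature like this can prove a file hasn't changed since signing; it can't, by itself, prove who was on camera — which is why provenance standards bind signatures to verified identities.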



© 2020 | Daniel Christian