Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle
Excerpts (emphasis DSC):
Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.
…
Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.
…
Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.
From DSC:
Those working or teaching within the legal realm: this one's for you. But it's also for leadership in the C-suite of our corporate world, as well as for the programmers, freelancers, engineers, and other employees working on AI within that world.
By the way, and not to get all political here, who's to say what happens with our data when it's being reviewed in Ukraine…?
Also see:
- Opinion: AI for good is often bad — from wired.com by Mark Latonero
Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.