How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.
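
From DSC:
Reporting on this case indicated that characteristics such as dual nationality were treated as risk factors. Purely as an illustration (this is not the Dutch system; the feature names and data below are made up), here is how a basic disparate-impact audit can surface that kind of skew: compare how often each group gets flagged, then take the ratio of the lowest flag rate to the highest. A ratio well below 1.0 (many practitioners use 0.8, the "four-fifths" rule of thumb) is a warning sign.

```python
# Illustrative only: a toy disparate-impact audit. The feature names
# and data are hypothetical, not the Dutch tax authority's actual system.
from collections import defaultdict

def disparate_impact_ratio(records, group_key, flagged_key="flagged"):
    """Return (lowest flag rate / highest flag rate, per-group rates)."""
    totals, flags = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        flags[g] += int(r[flagged_key])
    rates = {g: flags[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: each record is one benefits case.
cases = [
    {"nationality": "A", "flagged": True},  {"nationality": "A", "flagged": True},
    {"nationality": "A", "flagged": False}, {"nationality": "B", "flagged": True},
    {"nationality": "B", "flagged": False}, {"nationality": "B", "flagged": False},
]
ratio, rates = disparate_impact_ratio(cases, "nationality")
print(rates)  # {'A': 0.67, 'B': 0.33} (approximately)
print(ratio)  # 0.5 -- well under 0.8, worth investigating
```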

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.



#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management… The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
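
From DSC:
The description above (detect changes between images, alert a human, fold the human’s verdict back into the system) is a classic human-in-the-loop pattern. The sketch below is not Alchera’s software; it is a minimal stand-in that uses plain frame differencing on brightness arrays and a threshold nudged by reviewer feedback.

```python
# A minimal sketch of the pattern described above: change detection on
# successive camera frames plus a human-feedback loop. Not Alchera's
# system -- frames are plain 2D brightness arrays and the "detector"
# is simple frame differencing.
import numpy as np

class ChangeDetector:
    def __init__(self, threshold=25.0):
        self.threshold = threshold  # mean absolute pixel change that triggers an alert
        self.previous = None

    def observe(self, frame):
        """Return True if this frame differs enough from the last to alert a human."""
        alert = False
        if self.previous is not None:
            change = np.abs(frame.astype(float) - self.previous.astype(float)).mean()
            alert = change > self.threshold
        self.previous = frame
        return alert

    def feedback(self, was_real_fire, step=2.0):
        """Refine the detector from a reviewer's verdict: false alarms raise
        the threshold, confirmed fires lower it."""
        self.threshold += -step if was_real_fire else step

detector = ChangeDetector()
night = np.zeros((64, 64))                 # dark hillside
flames = night.copy()
flames[20:50, 20:50] = 255.0               # a bright region appears
detector.observe(night)                    # first frame, no alert
if detector.observe(flames):               # large change -> alert a human
    detector.feedback(was_real_fire=True)  # confirmed: loosen the threshold
```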

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from vice.com by Joseph Cox
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 

How Facebook got addicted to spreading misinformation — from technologyreview.com by Karen Hao

Excerpt (emphasis DSC):

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect — from forbes.com by Nisha Talagala


 

How One State Managed to Actually Write Rules on Facial Recognition — from nytimes.com by Kashmir Hill
Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.

 

Timnit Gebru’s Exit From Google Exposes a Crisis in AI — from wired.com by Alex Hanna and Meredith Whittaker
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

Excerpt:

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)
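
From DSC:
The “reporting guidelines for data sets and AI models” mentioned above refer to work Gebru co-authored, such as Datasheets for Datasets and Model Cards. The snippet below is only a pared-down illustration of the idea (a simplified subset of fields, not the published templates, and a hypothetical model).

```python
# A simplified illustration of a model card: structured documentation
# that travels with a model. The fields are a pared-down subset, not
# the full published template.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list = field(default_factory=list)
    evaluated_subgroups: list = field(default_factory=list)

    def render(self):
        return "\n".join([
            f"Model card: {self.name}",
            f"Intended use: {self.intended_use}",
            f"Training data: {self.training_data}",
            "Known limitations: " + "; ".join(self.known_limitations),
            "Evaluated on subgroups: " + ", ".join(self.evaluated_subgroups),
        ])

card = ModelCard(
    name="face-attribute-classifier-v1",  # hypothetical model
    intended_use="Research benchmarking only; not for identification.",
    training_data="Public web images; demographic balance not verified.",
    known_limitations=["Higher error rates observed for darker-skinned women"],
    evaluated_subgroups=["skin type x gender"],
)
print(card.render())
```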

 

Artificial intelligence will go mainstream in 2021 — from manilatimes.net by Noemi Lardizabal-Dado; with thanks to Matthew Lamons for this resource

Excerpt:

In his December 21 Forbes website article, titled “Why Covid Will Make AI Go Mainstream In 2021,” data scientist Ganes Kesari predicts AI will transform 2021 by accelerating pharmaceutical drug discovery beyond Covid-19. He says the face of telecommuting will change, and that AI will transform edge computing and make the devices around us truly intelligent.

Artificial Intelligence in 2021: Endless Opportunities and Growth — from analyticsinsight.net by Priya Dialani; with thanks to Matthew Lamons for this resource

Excerpts:

In 2021, the grittiest of organizations will push AI to new boondocks, for example, holographic meetings for telecommunication and on-demand, personalised manufacturing. They will gamify vital planning, incorporate simulations in the meeting room and move into intelligent edge experiences.

According to Rohan Amin, the Chief Information Officer at Chase, “In 2021, we will see more refined uses of machine learning and artificial intelligence across industries, including financial services. There will be more noteworthy incorporation of AI/ML models and abilities into numerous business operations and processes to drive improved insights and better serve clients.”

From DSC:
I’m a bit more cautious when facing the growth of AI in our world, in our lives, in our society. I see some very positive applications (such as in healthcare and in education), but I’m also concerned about technologies involving facial recognition and other uses of AI that could easily become much more negative and harmful to us in the future.

 


The Hack That Shook Washington — from protocol.com by Tom Krazit

Excerpt:

A cybersecurity nightmare is unfolding in the nation’s capital as the fallout from one of the most brazen security breaches in recent memory continues to spread throughout several government agencies.

The internal networks belonging to no less than five government agencies, including the Defense and State Departments, were left wide open to a group of hackers believed to be working on behalf of Russia for several months this year, according to multiple reports and tacit confirmation of a breach from government officials earlier this week. This incident was especially scary because no one seems to know exactly how much data was accessed or stolen, and because those affected may never know.

The incident also highlights the vulnerability of the software supply chain, an important part of modern application development. Here’s what we know:

Addendum on 12/19/20:

Brad Smith, President of Microsoft: “This is a moment of reckoning” (referring to the SolarWinds cyberattack)
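
From DSC:
The excerpt’s point about the software supply chain is worth making concrete. One baseline hygiene measure is to verify every downloaded artifact against a hash pinned in advance, so a tampered file fails loudly. This is only a sketch of that one measure; by itself it would not have stopped an attacker who compromised the vendor’s own build pipeline, as reportedly happened here.

```python
# Verify a downloaded artifact against a SHA-256 digest pinned when the
# release was first vetted. A minimal supply-chain hygiene sketch, not
# a complete defense.
import hashlib

def verify_artifact(path, expected_sha256):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    actual = h.hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}: got {actual}")

# Usage (hypothetical file and digest): pin the hash once, check on every install.
# verify_artifact("vendor-update.tar.gz", "9f2c...pinned-digest...")
```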

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.
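
From DSC:
Deepdub hasn’t published its architecture, but the generic pipeline such a product implies is: transcribe the original dialogue, translate the transcript, then synthesize the translation with a voice-preserving TTS model conditioned on the original actor’s voice. The three stage functions below are placeholders that return dummy values so the sketch runs; they are not real APIs.

```python
# A hedged sketch of a dubbing pipeline: ASR -> machine translation ->
# voice-cloned TTS. All three stages are dummy placeholders, not
# Deepdub's actual system or any real API.

def transcribe(audio, language):
    # Placeholder ASR: a real system would run speech-to-text here.
    return "placeholder transcript"

def translate(text, source, target):
    # Placeholder MT: a real system would run machine translation here.
    return f"[{target}] {text}"

def synthesize(text, voice_embedding, language):
    # Placeholder voice-cloned TTS: a real system would condition speech
    # synthesis on an embedding of the original actor's voice.
    return text.encode("utf-8")

def dub(audio, voice_embedding, source="en", target="fr"):
    script = transcribe(audio, language=source)
    localized = translate(script, source=source, target=target)
    return synthesize(localized, voice_embedding, language=target)

print(dub(b"\x00", voice_embedding=None))  # b'[fr] placeholder transcript'
```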

From DSC:
A much more negative use of AI here:


 

 

Designed to Deceive: Do These People Look Real to You? — from nytimes.com by Kashmir Hill and Jeremy White
These people may look familiar, like ones you’ve seen on Facebook or Twitter. Or people whose product reviews you’ve read on Amazon, or dating profiles you’ve seen on Tinder. They look stunningly real at first glance. But they do not exist. They were born from the mind of a computer. And the technology that makes them is improving at a startling pace.

Is this humility or hubris? Do we place too little value in human intelligence — or do we overrate it, assuming we are so smart that we can create things smarter still?
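
From DSC:
Faces like these come from generative adversarial networks. A real generator is a deep network trained on millions of photos; the toy stand-in below only shows the sampling interface those systems share: a random latent vector goes in, an image array comes out, and every new vector yields a new person who does not exist.

```python
# Toy stand-in for a GAN generator: latent vector in, image array out.
# Not a trained model -- just the sampling interface.
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, size=64):
    """Map a latent vector to a (size x size) grayscale 'image'."""
    w = rng.standard_normal((z.shape[0], size * size)) * 0.1  # random "weights"
    img = np.tanh(z @ w).reshape(size, size)                  # values in [-1, 1]
    return ((img + 1) * 127.5).astype(np.uint8)               # scale to [0, 255]

z = rng.standard_normal(512)   # each new z is a new, nonexistent "person"
face = toy_generator(z)
print(face.shape, face.dtype)  # (64, 64) uint8
```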

 

Report: There’s More to Come for AI in Ed — from thejournal.com by Dian Schaffhauser

Excerpts:

The group came up with dozens of “opportunities” for AI in education, from extending what teachers can do to better understanding human learning:

  • Using virtual instructors to free up “personalization time” for classroom teachers;
  • Offloading the “cognitive load” of teaching;
  • Providing “job aids” for teachers;
  • Identifying the links between courses, credentials, degrees and skills;
  • “Revolutionizing” testing and assessment;
  • Creating new kinds of “systems of support”;
  • Helping with development of “teaching expertise”; and
  • Better understanding human learning through “modeling and building interfaces” in AI.

But contributors also offered just as many barriers to success:

  • Differences in the way teachers teach would require “different job aids”;
  • Teachers would fear losing their jobs;
  • Data privacy concerns;
  • Bias worries;
  • Dealing with unrealistic expectations and fears about AI pushed in “popular culture”;
  • Lack of diversity in gender, ethnicity and culture in AI projects; and
  • Smart use of data would require more teacher training.
 

From DSC:
The good…

London A.I. lab claims breakthrough that could accelerate drug discovery — from nytimes.com by Cade Metz
Researchers at DeepMind say they have solved “the protein folding problem,” a task that has bedeviled scientists for more than 50 years.

This long-sought breakthrough could accelerate the ability to understand diseases, develop new medicines and unlock mysteries of the human body.

…and the not so good…

 

From DSC:
Who needs to be discussing/debating “The Social Dilemma” movie? Whether one agrees with the perspectives put forth therein or not, the discussion boards out there should be lighting up in the undergraduate areas of Computer Science (especially Programming), Engineering, Business, Economics, Mathematics, Statistics, Philosophy, Religion, Political Science, Sociology, and perhaps other disciplines as well. 

To those starting out in the relevant careers here…just because we can, doesn’t mean we should. Ask yourself not whether something CAN be developed, but *whether it SHOULD be developed* and what the potential implications of a technology/invention/etc. might be. I’m not aiming to take a position here. Rather, I’m trying to promote some serious reflection among those developing our new, emerging technologies and our new products/services out there.


 

 

Facial Recognition Start-Up Mounts a First Amendment Defense — from nytimes.com by Kashmir Hill
Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

Excerpts:

Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.

Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies. When an officer uploads a photo or a video image containing a person’s face, the app tries to match the likeness and provides other photos of that person that can be found online.
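
From DSC:
In general terms, the matching step described above is nearest-neighbor search over face embeddings. The sketch below assumes a hypothetical embed_face function (any face-embedding model could slot in) and uses dummy arrays in place of scraped photos; it is not Clearview’s actual code.

```python
# Nearest-neighbor face matching over embeddings. The embedding function
# is a hypothetical stand-in, and the "photos" are dummy arrays.
import numpy as np

def embed_face(image):
    """Hypothetical stand-in: map a face image to a unit-length vector."""
    v = np.asarray(image, dtype=float).ravel()[:128]
    return v / (np.linalg.norm(v) + 1e-9)

def best_matches(probe, gallery, top_k=3):
    """Rank gallery entries by cosine similarity to the probe face."""
    q = embed_face(probe)
    scored = [(float(embed_face(img) @ q), name) for name, img in gallery]
    return sorted(scored, reverse=True)[:top_k]

rng = np.random.default_rng(1)
gallery = [(f"photo_{i}.jpg", rng.random(128)) for i in range(5)]  # dummy "scraped" photos
print(best_matches(rng.random(128), gallery))  # [(similarity, name), ...]
```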

From DSC:
Many, if not all, of us are now required to be lifelong learners in order to stay marketable. I was struck by that when I read the following excerpt from the above article:

“I’m learning the language,” Mr. Abrams said. “I’ve never used the words ‘facial biometric algorithms’ until this phone call.”

 
 

IBM, Amazon, and Microsoft abandon law enforcement face recognition market — from which-50.com by Andrew Birmingham

Excerpt:

Three global tech giants — IBM, Amazon, and Microsoft — have all announced that they will no longer sell their face recognition technology to police in the USA, though each announcement comes with its own nuance.

The new policy comes in the midst of ongoing national demonstrations in the US about police brutality and more generally the subject of racial inequality in the country under the umbrella of the Black Lives Matter movement.

From DSC:
While I didn’t read the fine print (so I don’t know all of the “nuances” they are referring to), I see this as good news indeed! Well done to whoever at those companies paused and thought…

 

…just because we can…



…doesn’t mean we should.

 


Addendum on 6/18/20:

  • Why Microsoft and Amazon are calling on Congress to regulate facial recognition tech — from finance.yahoo.com by Daniel Howley

    Excerpt:

    The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, there needs to be some form of regulation put in place if these technologies are going to be used by law enforcement agencies.

    “If these technologies were to be deployed, I think you cannot do it in the absence of legislation,” Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
 