How Facebook got addicted to spreading misinformation — from technologyreview.com by Karen Hao

Excerpt (emphasis DSC):

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect — from forbes.com by Nisha Talagala


 

How One State Managed to Actually Write Rules on Facial Recognition — from nytimes.com by Kashmir Hill
Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.

 

Timnit Gebru’s Exit From Google Exposes a Crisis in AI — from wired.com by Alex Hanna and Meredith Whittaker
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

Excerpt:

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

 

Artificial intelligence will go mainstream in 2021 — from manilatimes.net by Noemi Lardizabal-Dado; with thanks to Matthew Lamons for this resource

Excerpt:

In his December 21 Forbes website article, titled “Why Covid Will Make AI Go Mainstream In 2021,” data scientist Ganes Kesari predicts AI will transform 2021 by accelerating pharmaceutical drug discovery beyond Covid-19. He says the face of telecommuting will change, and that AI will transform edge computing, making the devices around us truly intelligent.

Artificial Intelligence in 2021: Endless Opportunities and Growth — from analyticsinsight.net by Priya Dialani; with thanks to Matthew Lamons for this resource

Excerpts:

In 2021, the grittiest of organizations will push AI to new frontiers, such as holographic meetings for telecommuting and on-demand, personalized manufacturing. They will gamify strategic planning, incorporate simulations in the meeting room and move into intelligent edge experiences.

According to Rohan Amin, the Chief Information Officer at Chase, “In 2021, we will see more refined uses of machine learning and artificial intelligence across industries, including financial services. There will be greater incorporation of AI/ML models and capabilities into numerous business operations and processes to drive improved insights and better serve clients.”

From DSC:
I’m a bit more cautious when facing the growth of AI in our world, in our lives, in our society. I see some very positive applications (such as in healthcare and in education), but I’m also concerned about facial recognition and other uses of AI that could easily become much more negative and harmful to us in the future.

 


The Hack That Shook Washington — from protocol.com by Tom Krazit

Excerpt:

A cybersecurity nightmare is unfolding in the nation’s capital as the fallout from one of the most brazen security breaches in recent memory continues to spread throughout several government agencies.

The internal networks belonging to no less than five government agencies, including the Defense and State Departments, were left wide open to a group of hackers believed to be working on behalf of Russia for several months this year, according to multiple reports and tacit confirmation of a breach from government officials earlier this week. This incident was especially scary because no one seems to know exactly how much data was accessed or stolen, and because those affected may never know.

The incident also highlights the vulnerability of the software supply chain, an important part of modern application development. Here’s what we know:

Addendum on 12/19/20:

Brad Smith, President of Microsoft: “This is a moment of reckoning” (referring to the SolarWinds cyberattack).
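From DSC:
For the more technical readers out there, one small but concrete defense on the software supply chain side is verifying an artifact’s checksum against the digest the vendor publishes before trusting it. Here is a minimal sketch in Python; the artifact path and expected digest are whatever your own vendor provides. (To be fair, this guards against tampering in transit, not against a poisoned build pipeline of the kind at the heart of the SolarWinds incident, where the vendor’s own signed builds were compromised.)

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 8 KB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python verify.py <artifact> <expected-sha256-from-vendor>
    artifact, expected = sys.argv[1], sys.argv[2]
    if sha256_of(artifact) != expected.lower():
        sys.exit("Checksum mismatch: refusing to trust this artifact.")
    print("Checksum verified against the published digest.")
```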

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.
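From what has been published, I can’t say how Deepdub’s system works internally. Conceptually, though, AI dubbing of this kind chains three stages: speech recognition, machine translation, and voice-preserving speech synthesis. Below is a rough, purely illustrative sketch of that pipeline shape in Python; every function is a hypothetical stand-in, not Deepdub’s API.

```python
# A conceptual sketch only; Deepdub's real system is proprietary.
# Each stage is a stand-in stub. A production pipeline would plug in
# real ASR, machine-translation, and voice-cloning TTS models here.

def transcribe(audio_path: str) -> str:
    """Stage 1 (ASR): convert the original dialogue track to text."""
    return "It was the best of times."  # placeholder transcript

def translate(text: str, target_lang: str) -> str:
    """Stage 2 (MT): translate the transcript into the target language."""
    return f"[{target_lang}] {text}"  # placeholder translation

def synthesize(text: str, voice_profile: str) -> bytes:
    """Stage 3 (TTS): render the translated line in a cloned voice.
    Voice cloning conditions the synthesizer on an embedding of the
    original actor's voice so the timbre survives the language change."""
    return f"<audio of '{text}' in voice {voice_profile}>".encode()

def dub(audio_path: str, target_lang: str, voice_profile: str) -> bytes:
    """Run the full dub: speech -> text -> translated text -> cloned speech."""
    transcript = transcribe(audio_path)
    translated = translate(transcript, target_lang)
    return synthesize(translated, voice_profile)

print(dub("narration.wav", "fr", "narrator-voice-embedding"))
```

The interesting engineering is almost entirely in stage 3: keeping what makes a particular voice “special and recognizable,” as the article puts it, while changing every word.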

From DSC:
A much more negative use of AI here:


 

 

Designed to Deceive: Do These People Look Real to You? — from nytimes.com by Kashmir Hill and Jeremy White
These people may look familiar, like ones you’ve seen on Facebook or Twitter. Or people whose product reviews you’ve read on Amazon, or dating profiles you’ve seen on Tinder. They look stunningly real at first glance. But they do not exist. They were born from the mind of a computer. And the technology that makes them is improving at a startling pace.

Is this humility or hubris? Do we place too little value on human intelligence — or do we overrate it, assuming we are so smart that we can create things smarter still?
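The faces the article describes are produced by generative adversarial networks (GANs): a trained generator network turns a random “latent” vector into an image. The toy PyTorch sketch below shows just that sampling step, with a miniature untrained generator standing in for the StyleGAN-scale models such sites actually rely on:

```python
# Toy illustration of how GAN face generators sample images: draw a
# random latent vector z and push it through a generator network.
# This tiny untrained model stands in for a StyleGAN-scale generator;
# trained at full scale, the same step yields photorealistic faces.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),  # RGB 32x32 in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, 32, 32)

gen = TinyGenerator()
z = torch.randn(1, 64)   # "born from the mind of a computer": pure noise
fake_image = gen(z)      # a face, once the generator has been trained
print(fake_image.shape)  # torch.Size([1, 3, 32, 32])
```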

 

Report: There’s More to Come for AI in Ed — from thejournal.com by Dian Schaffhauser

Excerpts:

The group came up with dozens of “opportunities” for AI in education, from extending what teachers can do to better understanding human learning:

  • Using virtual instructors to free up “personalization time” for classroom teachers;
  • Offloading the “cognitive load” of teaching;
  • Providing “job aids” for teachers;
  • Identifying the links between courses, credentials, degrees and skills;
  • “Revolutionizing” testing and assessment;
  • Creating new kinds of “systems of support”;
  • Helping with development of “teaching expertise”; and
  • Better understanding human learning through “modeling and building interfaces” in AI.

But contributors also offered just as many barriers to success:

  • Differences in the way teachers teach would require “different job aids”;
  • Teachers would fear losing their jobs;
  • Data privacy concerns;
  • Bias worries;
  • Dealing with unrealistic expectations and fears about AI pushed in “popular culture”;
  • Lack of diversity in gender, ethnicity and culture in AI projects; and
  • Smart use of data would require more teacher training.
 

From DSC:
The good…

London A.I. lab claims breakthrough that could accelerate drug discovery — from nytimes.com by Cade Metz
Researchers at DeepMind say they have solved “the protein folding problem,” a task that has bedeviled scientists for more than 50 years.

This long-sought breakthrough could accelerate the ability to understand diseases, develop new medicines and unlock mysteries of the human body.

…and the not so good…

 

From DSC:
Who needs to be discussing/debating “The Social Dilemma” movie? Whether one agrees with the perspectives put forth therein or not, the discussion boards out there should be lighting up in the undergraduate areas of Computer Science (especially Programming), Engineering, Business, Economics, Mathematics, Statistics, Philosophy, Religion, Political Science, Sociology, and perhaps other disciplines as well. 

To those starting out in the relevant careers here…just because we can doesn’t mean we should. Ask yourself not whether something CAN be developed, but *whether it SHOULD be developed* and what the potential implications of a technology/invention/etc. might be. I’m not aiming to take a position here. Rather, I’m trying to promote some serious reflection for those developing our new, emerging technologies and our new products/services out there.


 

 

Facial Recognition Start-Up Mounts a First Amendment Defense — from nytimes.com by Kashmir Hill
Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

Excerpts:

Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.

Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies. When an officer uploads a photo or a video image containing a person’s face, the app tries to match the likeness and provides other photos of that person that can be found online.
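Mechanically, face search systems of this sort generally reduce every photo to a numeric embedding and then look for nearest neighbors. Here is a minimal sketch of that matching step, using random vectors in place of real face embeddings; actual systems use deep face-recognition models and billion-entry indexes rather than a brute-force scan.

```python
import numpy as np

# Hypothetical database: one 128-dimensional embedding per scraped photo.
rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)  # unit-normalize rows

def top_matches(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k most similar faces by cosine similarity."""
    q = query / np.linalg.norm(query)
    scores = db @ q                    # cosine similarity vs. every entry
    return np.argsort(scores)[::-1][:k]

probe = rng.normal(size=128)           # embedding of the uploaded photo
print(top_matches(probe))              # photo IDs an officer would review
```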

From DSC:
Many, if not all, of us are now required to be lifelong learners in order to stay marketable. I was struck by that when I read the following excerpt from the above article:

“I’m learning the language,” Mr. Abrams said. “I’ve never used the words ‘facial biometric algorithms’ until this phone call.”

 
 

IBM, Amazon, and Microsoft abandon law enforcement face recognition market — from which-50.com by Andrew Birmingham

Excerpt:

Three global tech giants — IBM, Amazon, and Microsoft — have all announced that they will no longer sell their face recognition technology to police in the USA, though each announcement comes with its own nuance.

The new policy comes in the midst of ongoing national demonstrations in the US about police brutality and more generally the subject of racial inequality in the country under the umbrella of the Black Lives Matter movement.

From DSC:
While I didn’t read the fine print (so I don’t know all of the “nuances” they are referring to), I see this as good news indeed! Well done to whoever at those companies paused and thought…

 

…just because we can…

…doesn’t mean we should.

Addendum on 6/18/20:

  • Why Microsoft and Amazon are calling on Congress to regulate facial recognition tech — from finance.yahoo.com by Daniel Howley

    Excerpt:
    The technology, which can be used to identify suspects in things like surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, there needs to be some form of regulation put in place if these technologies are going to be used by law enforcement agencies.

    “If these technologies were to be deployed, I think you cannot do it in the absence of legislation,” Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
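The bias findings mentioned above come down to disaggregated evaluation: measuring error rates separately for each demographic group instead of reporting one overall accuracy number. A toy sketch with synthetic data shows how a gap surfaces; the groups, rates, and errors below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["a", "b"], size=n)          # synthetic demographic label
is_match = rng.random(n) < 0.5                  # ground truth: same person?
predicted = is_match.copy()

# Simulate a system whose errors concentrate on group "b".
err = rng.random(n) < np.where(group == "b", 0.15, 0.03)
predicted = predicted ^ err                     # flip predictions where err

for g in ("a", "b"):
    neg = (group == g) & ~is_match              # true non-matches in group g
    fpr = predicted[neg].mean()                 # fraction wrongly "matched"
    print(f"group {g}: false-match rate = {fpr:.3f}")
```

A single aggregate accuracy figure would hide exactly the disparity this per-group breakdown exposes.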
 

From DSC:
As the technologies get more powerful, so do the ramifications/consequences. And here’s the kicker. Just like we’re seeing with trying to deal with systemic racism in our country…like it or not, it all comes back to the state of the hearts and minds out there. I’m not trying to be preachy or to pretend that I’m better than anyone. I’m a sinner. I know that full well. And I view all of us as equals trying to survive and to make our way in this tough-to-live-in world. But the unfortunate truth is that technologies are tools, and how the tools are used depends upon one’s heart.

Proverbs 4:23 New International Version (NIV) — from biblegateway.com

23 Above all else, guard your heart,
    for everything you do flows from it.

 

This startup is using AI to give workers a “productivity score” — from technologyreview.com by Will Douglas Heaven
Enaible is one of a number of new firms that are giving employers tools to help keep tabs on their employees—but critics fear this kind of surveillance undermines trust.

Excerpt:

In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.

Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.”

Machine-learning algorithms also encode hidden bias in the data they are trained on. Such bias is even harder to expose when it’s buried inside an automated system. If these algorithms are used to assess an employee’s performance, it can be hard to appeal an unfair review or dismissal. 
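That last point is easy to demonstrate. If the historical labels a “productivity” model learns from were themselves skewed against some group of workers, the model will faithfully reproduce that skew, and nothing in its scores will flag it. A toy sketch follows; the data and the “night shift” feature are invented for illustration.

```python
# Toy demonstration: a model trained on biased labels reproduces the
# bias, and its outputs alone give no hint that this has happened.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2_000
output = rng.normal(size=n)                    # true productivity signal
night_shift = rng.integers(0, 2, size=n)       # irrelevant to real output
# Historical "good worker" labels were skewed against night-shift staff.
labels = (output - 0.8 * night_shift + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([output, night_shift])
model = LogisticRegression().fit(X, labels)
print("weight on real output:  ", round(float(model.coef_[0][0]), 2))
print("penalty for night shift:", round(float(model.coef_[0][1]), 2))
```

In a real deployment the biased proxy would rarely be this visible, which is exactly the article’s point about unfair reviews being hard to appeal.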

 

 