A couple of items from the May 2021 Inavate edition:

A very sharp learning space right here: a room full of AV equipment at Southampton University, UK!

 

 

This Researcher Says AI Is Neither Artificial nor Intelligent — from wired.com by Tom Simonite
Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI.

 

Shhhh, they’re listening: Inside the coming voice-profiling revolution — from fastcompany.com by Joseph Turow
Marketers are on the verge of using AI-powered technology to make decisions about who you are and what you want based purely on the sound of your voice.

Excerpt:

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.
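
To make the mechanics a bit more concrete, here is a minimal sketch of how a voice-profiling pipeline could work: acoustic features are extracted from a recording and fed to a classifier that predicts a marketer-defined trait. Everything here (the file names, the "sounded interested" label) is hypothetical, and real systems are far more elaborate.

```python
# Hypothetical sketch of voice profiling: summarize a recording as acoustic
# features, then predict a marketer-defined trait. File names and labels
# below are placeholders, not a real system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Summarize a recording as mean MFCCs plus a rough pitch estimate."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)  # pitch contour
    pitch = np.nanmean(f0) if np.any(~np.isnan(f0)) else 0.0
    return np.concatenate([mfcc.mean(axis=1), [pitch]])

# Hypothetical training data: recordings plus labels a marketer might
# assign (1 = "sounded interested", 0 = "did not").
train_paths = ["caller_001.wav", "caller_002.wav"]  # placeholders
train_labels = [1, 0]

X = np.stack([voice_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new caller purely from the sound of their voice.
print(clf.predict_proba(voice_features("caller_new.wav").reshape(1, -1)))
```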

From DSC:
Hhhhmmm….

 


Artificial intelligence and the future of national security — from news.asu.edu

Excerpt:

Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience” and will be a source of enormous power for the companies and countries that harness them, according to the recently released Final Report of the National Security Commission on Artificial Intelligence.

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

Also see:

EU Set to Ban Surveillance, Start Fines Under New AI Rules — from bloomberg.com by Natalia Drozdiak

Excerpt:

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.

Also see:

Wrongfully arrested man sues Detroit police over false facial recognition match — from washingtonpost.com by Drew Harwell
The case could fuel criticism of police investigators’ use of a controversial technology that has been shown to perform worse on people of color.

Excerpts:

A Michigan man has sued Detroit police after he was wrongfully arrested and falsely identified as a shoplifting suspect by the department’s facial recognition software in one of the first lawsuits of its kind to call into question the controversial technology’s risk of throwing innocent people in jail.

Robert Williams, a 43-year-old father in the Detroit suburb of Farmington Hills, was arrested last year on charges he’d taken watches from a Shinola store after police investigators used a facial recognition search of the store’s surveillance-camera footage that identified him as the thief.

Prosecutors dropped the case less than two weeks later, arguing that officers had relied on insufficient evidence. Police Chief James Craig later apologized for what he called “shoddy” investigative work. Williams, who said he had been driving home from work when the 2018 theft had occurred, was interrogated by detectives and held in custody for 30 hours before his release.

Williams’s attorneys did not make him available for comment Tuesday. But Williams wrote in The Washington Post last year that the episode had left him deeply shaken, in part because his young daughters had watched him get handcuffed in his driveway and put into a police car after returning home from work.

“How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” he wrote. “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.”
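
For context on how such a false match can happen: facial recognition systems typically reduce each face to an embedding vector and declare a "match" when a similarity score crosses a threshold. The sketch below uses made-up vectors and an arbitrary threshold to show how a lookalike, or a poor-quality probe image, can clear that bar.

```python
# Illustrative only: made-up embeddings and an arbitrary threshold showing
# how a facial-recognition "match" can flag the wrong person.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)             # embedding from grainy CCTV footage
candidates = rng.normal(size=(5, 128))   # embeddings from a photo database
# A different person who happens to look similar to the probe:
candidates[2] = probe + rng.normal(scale=0.5, size=128)

THRESHOLD = 0.6  # arbitrary; lower thresholds return more (and worse) "matches"
for i, cand in enumerate(candidates):
    score = cosine_similarity(probe, cand)
    verdict = "MATCH" if score > THRESHOLD else "no match"
    print(f"candidate {i}: similarity {score:.2f} -> {verdict}")
```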

Addendum on 4/20/21:

 

Pro:

AI-powered chatbots automate IT help at Dartmouth — from edscoop.com by Ryan Johnston

Excerpt:

To prevent a backlog of IT requests and consultations during the coronavirus pandemic, Dartmouth College has started relying on AI-powered chatbots to act as an online service desk for students and faculty alike, the school said Wednesday.

Since last fall, the Hanover, New Hampshire, university’s roughly 6,600 students and 900 faculty have been able to consult with “Dart” — the virtual assistant’s name — to ask IT or service-related questions related to the school’s technology. More than 70% of the time, their question is resolved by the chatbot, said Muddu Sudhakar, the co-founder and CEO of Aisera, the company behind the software.
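
For readers curious about the underlying pattern, here is a toy sketch of the helpdesk-chatbot idea: match the question to a known intent, return a canned answer, and escalate to a human when confidence is low. The intents and answers below are invented; Aisera's actual product is, of course, far more capable.

```python
# Toy intent-matching chatbot. The intents, answers, and URL are invented
# for illustration; this is not Aisera's method.
from difflib import SequenceMatcher

KNOWN_INTENTS = {
    "reset my password": "You can reset your password at accounts.example.edu.",
    "connect to campus wifi": "Join the 'eduroam' network and sign in with your school ID.",
    "install vpn": "Download the VPN client from the IT portal, then log in.",
}

def answer(question: str, min_score: float = 0.5) -> str:
    best_intent, best_score = None, 0.0
    for intent in KNOWN_INTENTS:
        score = SequenceMatcher(None, question.lower(), intent).ratio()
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= min_score:
        return KNOWN_INTENTS[best_intent]
    return "I'm not sure; routing you to a human technician."  # the other ~30%

print(answer("how do I reset my password?"))   # resolved by the bot
print(answer("my monitor is flickering"))      # escalated to a human
```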

Con:

The Foundations of AI Are Riddled With Errors — from wired.com by Will Knight
The labels attached to images used to train machine-vision systems are often wrong. That could mean bad decisions by self-driving cars and medical algorithms.

Excerpt:

“What this work is telling the world is that you need to clean the errors out,” says Curtis Northcutt, a PhD student at MIT who led the new work. “Otherwise the models that you think are the best for your real-world business problem could actually be wrong.”
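
For the curious, here is a simplified heuristic in the spirit of Northcutt's "confident learning" approach; it is a sketch, not the exact method behind the paper or the cleanlab library. The idea: score every example with out-of-sample predicted probabilities, then flag those whose given label the model confidently contradicts.

```python
# Simplified label-error detection inspired by confident learning (not the
# exact published method): flag examples whose given label the model
# confidently contradicts, using out-of-sample probabilities.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)
y = y.copy()
y[:20] = (y[:20] + 1) % 10  # simulate 20 mislabeled examples

# Cross-validated probabilities, so each example is scored by a model
# that never trained on it.
probs = cross_val_predict(
    LogisticRegression(max_iter=2000), X, y, cv=5, method="predict_proba"
)

# Suspect: the given label gets little probability mass while some other
# class gets a lot.
given = probs[np.arange(len(y)), y]
suspects = np.where((given < 0.1) & (probs.max(axis=1) > 0.9))[0]
print(f"{len(suspects)} suspected label errors; first few: {suspects[:10]}")
```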

 

 

The metaverse: real world laws give rise to virtual world problems — from cityam.com by Gregor Pryor

Legal questions
Like many technological advances, from the birth of the internet to more modern-day phenomena such as the use of big data and artificial intelligence (AI), the metaverse will in some way challenge the legal status quo.

Whilst the growth and adoption of the metaverse will raise age-old legal questions, it will also generate a number of unique legal and regulatory obstacles that need to be overcome.

From DSC:
I’m posting this because it’s another example of why we have to pick up the pace within the legal realm. Organizations like the American Bar Association (ABA) will need to move much faster. Society is already being impacted by a variety of emerging technologies such as these, and those changes are far from over. Law schools need to assess their roles and responsibilities in this new world as well.

Addendum on 3/29/21:
Below are some more examples from Jason Tashea’s “The Justice Tech Download” e-newsletter:

  • Florida prisons buy up location data from data brokers. (Techdirt) A prison mail surveillance company keeps tabs on those on the outside, too. (VICE)
  • Police reform requires regulating surveillance tech. (Patch) (h/t Rebecca Williams) A police camera that never tires stirs unease at the US First Circuit Court of Appeals. (Courthouse News)
  • A Florida sheriff’s office was sued for using its predictive policing program to harass residents. (Techdirt)
  • A map of e-carceration in the US. (Media Justice) (h/t Upturn)
  • This is what happens when ICE asks Google for your user information. (Los Angeles Times)
  • Data shows the NYPD seized 55,000 phones in 2020, and it returned fewer than 35,000 of them. (Techdirt)
  • The SAFE TECH Act will make the internet less safe for sex workers. (OneZero)
  • A New York lawmaker wants to ban the use of armed robots by police. (Wired)
  • A look at the first wave of government accountability of algorithms. (AI Now Institute) The algorithmic auditing trap. (OneZero)
  • The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. (Association for Computing Machinery)
  • A new open dataset has 510 commercial legal contracts with 13,000+ labels. (Atticus Project)
  • JusticeText co-founder shares her experience building tech for public defenders. (Law360)

 

 

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.
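
Here is a sketch of the kind of audit that can surface this problem: compare the model's false-positive rate, i.e., innocent people accused, across groups. The data below is synthetic, but it shows how one group can bear far more wrongful accusations even when overall numbers look reasonable.

```python
# Synthetic disparate-impact audit: compare wrongful-accusation rates by
# group. The data and rates are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)      # e.g., a nationality proxy
truly_fraud = rng.random(n) < 0.02          # 2% actual fraud in both groups

# A biased model: far more likely to accuse group B regardless of the truth.
accuse_rate = np.where(group == "B", 0.10, 0.02)
accused = rng.random(n) < np.where(truly_fraud, 0.9, accuse_rate)

for g in ["A", "B"]:
    innocent = (group == g) & ~truly_fraud
    fpr = accused[innocent].mean()
    print(f"group {g}: {fpr:.1%} of innocent people accused")
```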

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.



#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
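
A toy version of the camera-alerting idea, assuming simple frame differencing (real systems like Alchera's use trained detectors, not raw pixel math): compare the current frame with a reference view of the same terrain and notify a human when enough of the scene has changed.

```python
# Toy change detection on synthetic "camera frames"; real wildfire systems
# use trained models, not raw frame differencing.
import numpy as np

def changed_fraction(reference: np.ndarray, current: np.ndarray,
                     pixel_threshold: int = 40) -> float:
    """Fraction of pixels whose grayscale intensity shifted noticeably."""
    diff = np.abs(current.astype(int) - reference.astype(int))
    return float((diff > pixel_threshold).mean())

# Synthetic 100x100 grayscale frames standing in for camera images.
rng = np.random.default_rng(0)
reference = rng.integers(0, 120, size=(100, 100), dtype=np.uint8)
current = reference.copy()
current[60:80, 30:70] = 200  # a bright hazy patch appears on the hillside

ALERT_LEVEL = 0.02  # alert if more than 2% of the view changed
frac = changed_fraction(reference, current)
if frac > ALERT_LEVEL:
    print(f"possible smoke/flame: {frac:.1%} of the frame changed; notify a human")
```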

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from vice.com by Joseph Cox
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 

How Facebook got addicted to spreading misinformation — from technologyreview.com by Karen Hao

Excerpt (emphasis DSC):

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect — from forbes.com by Nisha Talagala

5 trends for AI in 2021

 

How One State Managed to Actually Write Rules on Facial Recognition — from nytimes.com by Kashmir Hill
Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.

 

Timnit Gebru’s Exit From Google Exposes a Crisis in AI — from wired.com by Alex Hanna and Meredith Whittaker
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

Excerpt:

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

 

Artificial intelligence will go mainstream in 2021 — from manilatimes.net by Noemi Lardizabal-Dado; with thanks to Matthew Lamons for this resource

Excerpt:

In his December 21 Forbes website article, titled “Why Covid Will Make AI Go Mainstream In 2021,” data scientist Ganes Kesari predicts AI will transform 2021 by accelerating pharmaceutical drug discovery beyond Covid-19. He says the face of telecommuting would change, and that AI would transform edge computing and make devices around us truly intelligent.

Artificial Intelligence in 2021: Endless Opportunities and Growth — from analyticsinsight.net by Priya Dialani; with thanks to Matthew Lamons for this resource

Excerpts:

In 2021, the grittiest of organizations will push AI to new boondocks, for example, holographic meetings for telecommunication and on-demand, personalised manufacturing. They will gamify vital planning, incorporate simulations in the meeting room and move into intelligent edge experiences.

According to Rohan Amin, the Chief Information Officer at Chase, “In 2021, we will see more refined uses of machine learning and artificial intelligence across industries, including financial services. There will be more noteworthy incorporation of AI/ML models and abilities into numerous business operations and processes to drive improved insights and better serve clients.”

From DSC:
I’m a bit more cautious when facing the growth of AI in our world, in our lives, and in our society. I see some very positive applications (such as in healthcare and in education), but I’m also concerned about facial recognition and other uses of AI that could easily become much more negative and harmful to us in the future.

 


The Hack That Shook Washington — from protocol.com by Tom Krazit

Excerpt:

A cybersecurity nightmare is unfolding in the nation’s capital as the fallout from one of the most brazen security breaches in recent memory continues to spread throughout several government agencies.

The internal networks belonging to no less than five government agencies, including the Defense and State Departments, were left wide open to a group of hackers believed to be working on behalf of Russia for several months this year, according to multiple reports and tacit confirmation of a breach from government officials earlier this week. This incident was especially scary because no one seems to know exactly how much data was accessed or stolen, and because those affected may never know.

The incident also highlights the vulnerability of the software supply chain, an important part of modern application development. Here’s what we know:
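
One basic supply-chain safeguard, sketched below: pin the exact SHA-256 digest of a vetted artifact and refuse anything that differs. In the SolarWinds case the malicious updates were signed and "official," so hashing is only one layer of defense, but it illustrates the idea. The digest and file name shown are placeholders.

```python
# Sketch of dependency pinning: verify a downloaded artifact against a
# known-good SHA-256 digest before installing it. Digest and file name
# below are placeholders.
import hashlib

# Hypothetical pinned digest, recorded when the artifact was first vetted.
PINNED_SHA256 = "9f2f..."  # placeholder; a real pin is the full 64-hex digest

def verify_artifact(path: str, expected_hex: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Example usage (commented out since the file and digest are placeholders):
# if not verify_artifact("vendor-update.pkg", PINNED_SHA256):
#     raise SystemExit("artifact does not match the pinned digest; do not install")
```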

Addendum on 12/19/20:

Brad Smith, President of Microsoft: “This is a moment of reckoning” (referring to the SolarWinds cyberattack)

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.
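
Deepdub hasn’t published its architecture, but the pipeline the article implies is roughly: transcribe the original dialogue, translate it, then synthesize speech in the target language conditioned on the original actor’s voice. Here is that shape as placeholder code; every stage is a stub where a real trained model would plug in.

```python
# Rough shape of an AI-dubbing pipeline; this is an inferred sketch, not
# Deepdub's actual system. Every stage is a placeholder for a real model.
from dataclasses import dataclass

@dataclass
class DubbingPipeline:
    target_language: str

    def transcribe(self, audio: bytes) -> str:
        """Speech-to-text on the original dialogue (placeholder)."""
        raise NotImplementedError("plug in an ASR model here")

    def translate(self, text: str) -> str:
        """Translate the transcript, respecting timing constraints (placeholder)."""
        raise NotImplementedError("plug in a translation model here")

    def synthesize(self, text: str, voice_profile: bytes) -> bytes:
        """Generate target-language speech conditioned on the original actor's
        voice profile, so the dub still sounds like that actor (placeholder)."""
        raise NotImplementedError("plug in a voice-cloning TTS model here")

    def dub(self, audio: bytes, voice_profile: bytes) -> bytes:
        return self.synthesize(self.translate(self.transcribe(audio)), voice_profile)
```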

From DSC:
A much more negative use of AI here:


 

 

Designed to Deceive: Do These People Look Real to You? — from nytimes.com by Kashmir Hill and Jeremy White
These people may look familiar, like ones you’ve seen on Facebook or Twitter. Or people whose product reviews you’ve read on Amazon, or dating profiles you’ve seen on Tinder. They look stunningly real at first glance. But they do not exist. They were born from the mind of a computer. And the technology that makes them is improving at a startling pace.

Is this humility or hubris? Do we place too little value in human intelligence — or do we overrate it, assuming we are so smart that we can create things smarter still?
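
For those wondering how a computer "births" a face: generative adversarial networks map random noise through a generator network to an image, training the generator to fool a discriminator. The untrained toy generator below only produces noise, but it shows the mechanics; StyleGAN-class models that make these photorealistic faces are vastly larger.

```python
# Toy, untrained GAN-style generator: maps a random noise vector to a
# 64x64 RGB image. Illustrates the mechanism only; real face generators
# are far larger and are trained against a discriminator.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=4, stride=1, padding=0),  # 4x4
    nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=4, padding=0),   # 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=4, padding=0),     # 64x64
    nn.Tanh(),  # pixel values in [-1, 1]
)

z = torch.randn(1, 100, 1, 1)  # "the mind of a computer": random noise
fake_image = generator(z)      # an image of a person who does not exist
print(fake_image.shape)        # torch.Size([1, 3, 64, 64])
```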

 