Pro:

AI-powered chatbots automate IT help at Dartmouth — from edscoop.com by Ryan Johnston

Excerpt:

To prevent a backlog of IT requests and consultations during the coronavirus pandemic, Dartmouth College has started relying on AI-powered chatbots to act as an online service desk for students and faculty alike, the school said Wednesday.

Since last fall, the Hanover, New Hampshire, university’s roughly 6,600 students and 900 faculty have been able to consult with “Dart” — the virtual assistant’s name — to ask IT or service-related questions about the school’s technology. More than 70% of the time, their question is resolved by the chatbot, said Muddu Sudhakar, the co-founder and CEO of Aisera, the company behind the software.
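Aisera hasn’t published how Dart works under the hood, but the first line of a service-desk bot is typically retrieval over a knowledge base: match the incoming question against known FAQs and hand off to a human below a confidence cutoff. A minimal sketch of that pattern with scikit-learn — the FAQ entries and the cutoff value are invented for illustration:

```python
# Minimal FAQ-retrieval sketch for a service-desk bot: answer when a
# known question is close enough, otherwise hand off to a human.
# The FAQ entries and the similarity cutoff are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

FAQ = {
    "How do I reset my campus password?": "Visit accounts.example.edu and choose 'Forgot password.'",
    "How do I connect to campus Wi-Fi?": "Join the 'eduroam' network with your school credentials.",
    "How do I install the VPN client?": "Download it from software.example.edu and sign in.",
}
CUTOFF = 0.3  # below this similarity, escalate to a person

vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(FAQ.keys())

def answer(question: str) -> str:
    sims = cosine_similarity(vectorizer.transform([question]), faq_matrix)[0]
    best = sims.argmax()
    if sims[best] < CUTOFF:
        return "Let me connect you with a human technician."
    return list(FAQ.values())[best]

print(answer("help, I need to reset my password"))
```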

Con:

The Foundations of AI Are Riddled With Errors — from wired.com by Will Knight
The labels attached to images used to train machine-vision systems are often wrong. That could mean bad decisions by self-driving cars and medical algorithms.

Excerpt:

“What this work is telling the world is that you need to clean the errors out,” says Curtis Northcutt, a PhD student at MIT who led the new work. “Otherwise the models that you think are the best for your real-world business problem could actually be wrong.”
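Northcutt’s approach, known as confident learning (released as the open-source cleanlab library), flags likely label errors by comparing a model’s out-of-sample predicted probabilities against the given labels. A minimal sketch of the core idea, assuming integer labels and scikit-learn; the per-class threshold rule below is a simplified illustration, not the full algorithm:

```python
# Simplified confident-learning-style check for label errors: flag an
# example when the model's out-of-sample confidence in its *given* label
# falls below that class's average confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def find_likely_label_errors(X, y, n_classes):
    # Out-of-sample predicted probabilities, so the model never scores
    # an example it trained on.
    probs = cross_val_predict(
        LogisticRegression(max_iter=1000), X, y,
        cv=5, method="predict_proba",
    )
    # Per-class threshold: mean predicted probability of class k over
    # the examples currently labeled k.
    thresholds = np.array([probs[y == k, k].mean() for k in range(n_classes)])
    # Confidence in each example's given label.
    given_conf = probs[np.arange(len(y)), y]
    # Indices whose given label looks implausible -- candidates for review.
    return np.where(given_conf < thresholds[y])[0]
```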


The metaverse: real world laws give rise to virtual world problems — from cityam.com by Gregor Pryor

Legal questions
Like many technological advances, from the birth of the internet to more modern-day phenomena such as the use of big data and artificial intelligence (AI), the metaverse will in some way challenge the legal status quo.

Whilst the growth and adoption of the metaverse will raise age-old legal questions, it will also generate a number of unique legal and regulatory obstacles that need to be overcome.

From DSC:
I’m posting this because this is another example of why we have to pick up the pace within the legal realm. Organizations like the American Bar Association (ABA) are going to have to pick up the pace big time. Society is already being impacted by a variety of emerging technologies such as these, and the changes are far from over. Law schools need to assess their roles and responsibilities in this new world as well.

Addendum on 3/29/21:
Below are some more examples from Jason Tashea’s “The Justice Tech Download” e-newsletter:

  • Florida prisons buy up location data from data brokers. (Techdirt) A prison mail surveillance company keeps tabs on those on the outside, too. (VICE)
  • Police reform requires regulating surveillance tech. (Patch) (h/t Rebecca Williams) A police camera that never tires stirs unease at the US First Circuit Court of Appeals. (Courthouse News)
  • A Florida sheriff’s office was sued for using its predictive policing program to harass residents. (Techdirt)
  • A map of e-carceration in the US. (Media Justice) (h/t Upturn)
  • This is what happens when ICE asks Google for your user information. (Los Angeles Times)
  • Data shows the NYPD seized 55,000 phones in 2020 and returned fewer than 35,000 of them. (Techdirt)
  • The SAFE TECH Act will make the internet less safe for sex workers. (OneZero)
  • A New York lawmaker wants to ban the use of armed robots by police. (Wired)
  • A look at the first wave of government accountability of algorithms. (AI Now Institute) The algorithmic auditing trap. (OneZero)
  • The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. (Association for Computing Machinery)
  • A new open dataset has 510 commercial legal contracts with 13,000+ labels. (Atticus Project)
  • JusticeText co-founder shares her experience building tech for public defenders. (Law360)


How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.



#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
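Alchera hasn’t disclosed its model, but the “sifts through past and current images” step the article describes can be pictured as simple change detection between a reference frame and the latest frame. A toy sketch with OpenCV — the file names and thresholds are made up, and a real detector would use a trained smoke/flame classifier rather than raw pixel differences:

```python
# Toy change-detection sketch: flag a camera frame for human review when
# it differs substantially from a recent reference frame. This only
# illustrates the "compare past and current images" step; file names and
# thresholds are invented.
import cv2

ALERT_FRACTION = 0.02  # fraction of changed pixels that triggers an alert

def frame_changed(reference_path: str, current_path: str) -> bool:
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    cur = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.GaussianBlur(ref, (21, 21), 0)  # suppress sensor noise
    cur = cv2.GaussianBlur(cur, (21, 21), 0)
    diff = cv2.absdiff(ref, cur)              # per-pixel difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return (mask > 0).mean() > ALERT_FRACTION

if frame_changed("ridge_cam_reference.jpg", "ridge_cam_latest.jpg"):
    print("Possible smoke/flame activity -- route this frame to an operator.")
```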

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from vice.com by Joseph Cox
A hacked customer list shows that facial recognition company Verkada’s cameras are deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 

How Facebook got addicted to spreading misinformation — from technologyreview.com by Karen Hao

Excerpt (emphasis DSC):

By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect — from forbes.com by Nisha Talagala


 

How One State Managed to Actually Write Rules on Facial Recognition — from nytimes.com by Kashmir Hill
Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.

 

Timnit Gebru’s Exit From Google Exposes a Crisis in AI — from wired.com by Alex Hanna and Meredith Whittaker
The situation has made clear that the field needs to change. Here’s where to start, according to a current and a former Googler.

Excerpt:

It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to perform, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she hadn’t resigned. (Google declined to comment for this story.)

 

Artificial intelligence will go mainstream in 2021 — from manilatimes.net by Noemi Lardizabal-Dado; with thanks to Matthew Lamons for this resource

Excerpt:

In his December 21 Forbes website article, titled “Why Covid Will Make AI Go Mainstream In 2021,” data scientist Ganes Kesari predicts AI will transform 2021 by accelerating pharmaceutical drug discovery beyond Covid-19. He says the face of telecommuting would change, and that AI would transform edge computing and make devices around us truly intelligent.

Artificial Intelligence in 2021: Endless Opportunities and Growth — from analyticsinsight.net by Priya Dialani; with thanks to Matthew Lamons for this resource

Excerpts:

In 2021, the grittiest of organizations will push AI to new frontiers, for example, holographic meetings for telecommunication and on-demand, personalised manufacturing. They will gamify vital planning, incorporate simulations in the meeting room and move into intelligent edge experiences.

According to Rohan Amin, the Chief Information Officer at Chase, “In 2021, we will see more refined uses of machine learning and artificial intelligence across industries, including financial services. There will be more noteworthy incorporation of AI/ML models and abilities into numerous business operations and processes to drive improved insights and better serve clients.”

From DSC:
I’m a bit more cautious when facing the growth of AI in our world, in our lives, in our society. I see some very positive applications (such as in healthcare and in education), but I’m also concerned about facial recognition and other uses of AI that could easily become much more negative and harmful to us in the future.

 

The Hack That Shook Washington — from protocol.com by Tom Krazit

Excerpt:

A cybersecurity nightmare is unfolding in the nation’s capital as the fallout from one of the most brazen security breaches in recent memory continues to spread throughout several government agencies.

The internal networks belonging to no less than five government agencies, including the Defense and State Departments, were left wide open to a group of hackers believed to be working on behalf of Russia for several months this year, according to multiple reports and tacit confirmation of a breach from government officials earlier this week. This incident was especially scary because no one seems to know exactly how much data was accessed or stolen, and because those affected may never know.

The incident also highlights the vulnerability of the software supply chain, an important part of modern application development. Here’s what we know:

Addendum on 12/19/20:

Brad Smith, President of Microsoft: “This is a moment of reckoning” (referring to the SolarWinds cyberattack)

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.
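Deepdub hasn’t published its stack, but a voice-preserving dubbing system generally chains three stages: transcribe, translate, and then synthesize speech conditioned on an embedding of the original voice. A pipeline sketch along those lines — every function named here is a hypothetical placeholder, not a real API:

```python
# Hypothetical dubbing-pipeline sketch. None of these functions exist in
# a real library; they only name the stages such a system would need.
def dub(audio_track, target_language):
    # 1. Speech-to-text with timestamps, so dubbed lines can be
    #    re-aligned to the picture.
    segments = transcribe_with_timestamps(audio_track)               # hypothetical
    # 2. Machine-translate each line.
    lines = [translate(s.text, target_language) for s in segments]   # hypothetical
    # 3. A speaker embedding capturing what makes the voice recognizable.
    voice = extract_speaker_embedding(audio_track)                   # hypothetical
    # 4. Synthesize the translated lines in that voice, keeping each
    #    line inside its original time window.
    return [
        synthesize(text, voice, duration=s.end - s.start)            # hypothetical
        for text, s in zip(lines, segments)
    ]
```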

From DSC:
A much more negative use of AI here:



Designed to Deceive: Do These People Look Real to You? — from nytimes.com by Kashmir Hill and Jeremy White
These people may look familiar, like ones you’ve seen on Facebook or Twitter. Or people whose product reviews you’ve read on Amazon, or dating profiles you’ve seen on Tinder. They look stunningly real at first glance. But they do not exist. They were born from the mind of a computer. And the technology that makes them is improving at a startling pace.

Is this humility or hubris? Do we place too little value in human intelligence — or do we overrate it, assuming we are so smart that we can create things smarter still?
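For context on “born from the mind of a computer”: these faces come from generative adversarial networks such as StyleGAN, which decode a random vector into an image. A toy, untrained stand-in in PyTorch just to show the shape of the idea — a real generator is a far larger trained network:

```python
# Toy, untrained stand-in for a face generator: a random latent vector
# in, synthesized pixels out. A real model (e.g., StyleGAN) is a far
# larger network trained on face photos.
import torch
from torch import nn

generator = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 64 * 64 * 3), nn.Tanh(),
)

z = torch.randn(1, 512)                   # the random "idea" of a face
fake_face = generator(z).view(64, 64, 3)  # decoded into image pixels
# Every pixel is synthesized from z; no real person's photo is
# retrieved or composited.
```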

 

Report: There’s More to Come for AI in Ed — from thejournal.com by Dian Schaffhauser

Excerpts:

The group came up with dozens of “opportunities” for AI in education, from extending what teachers can do to better understanding human learning:

  • Using virtual instructors to free up “personalization time” for classroom teachers;
  • Offloading the “cognitive load” of teaching;
  • Providing “job aids” for teachers;
  • Identifying the links between courses, credentials, degrees and skills;
  • “Revolutionizing” testing and assessment;
  • Creating new kinds of “systems of support”;
  • Helping with development of “teaching expertise”; and
  • Better understanding human learning through “modeling and building interfaces” in AI.

But contributors also offered just as many barriers to success:

  • Differences in the way teachers teach would require “different job aids”;
  • Teachers would fear losing their jobs;
  • Data privacy concerns;
  • Bias worries;
  • Dealing with unrealistic expectations and fears about AI pushed in “popular culture”;
  • Lack of diversity in gender, ethnicity and culture in AI projects; and
  • Smart use of data would require more teacher training.
 

From DSC:
The good…

London A.I. lab claims breakthrough that could accelerate drug discovery — from nytimes.com by Cade Metz
Researchers at DeepMind say they have solved “the protein folding problem,” a task that has bedeviled scientists for more than 50 years.

This long-sought breakthrough could accelerate the ability to understand diseases, develop new medicines and unlock mysteries of the human body.

…and the not so good…

 

From DSC:
Who needs to be discussing/debating “The Social Dilemma” movie? Whether one agrees with the perspectives put forth therein or not, the discussion boards out there should be lighting up in the undergraduate areas of Computer Science (especially Programming), Engineering, Business, Economics, Mathematics, Statistics, Philosophy, Religion, Political Science, Sociology, and perhaps other disciplines as well. 

To those starting out in the relevant careers here…just because we can, doesn’t mean we should. Ask yourself not whether something CAN be developed, but *whether it SHOULD be developed* and what the potential implications of a technology/invention/etc. might be. I’m not aiming to take a position here. Rather, I’m trying to promote some serious reflection for those developing our new, emerging technologies and our new products/services out there.



Facial Recognition Start-Up Mounts a First Amendment Defense — from nytimes.com by Kashmir Hill
Clearview AI has hired Floyd Abrams, a top lawyer, to help fight claims that selling its data to law enforcement agencies violates privacy laws.

Excerpts:

Litigation against the start-up “has the potential of leading to a major decision about the interrelationship between privacy claims and First Amendment defenses in the 21st century,” Mr. Abrams said in a phone interview. He said the underlying legal questions could one day reach the Supreme Court.

Clearview AI has scraped billions of photos from the internet, including from platforms like LinkedIn and Instagram, and sells access to the resulting database to law enforcement agencies. When an officer uploads a photo or a video image containing a person’s face, the app tries to match the likeness and provides other photos of that person that can be found online.
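Clearview’s internals aren’t public, but the matching step described above is typically face-embedding nearest-neighbor search: encode every scraped face as a vector, then return the indexed photos closest to the query. A minimal NumPy sketch under that assumption — the random vectors and example URLs stand in for real embeddings and real data:

```python
# Minimal nearest-neighbor face-matching sketch. A production system
# would compute embeddings with a trained FaceNet-style model and use
# an approximate-nearest-neighbor index instead of brute force.
import numpy as np

def match(query_embedding, db_embeddings, db_urls, k=5):
    # Cosine similarity between the query face and every indexed face.
    q = query_embedding / np.linalg.norm(query_embedding)
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    sims = db @ q
    top = np.argsort(-sims)[:k]           # k closest identities
    return [(db_urls[i], float(sims[i])) for i in top]

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 128))       # 10k fake "face embeddings"
urls = [f"https://example.com/photo/{i}" for i in range(10_000)]
print(match(rng.normal(size=128), db, urls))
```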

From DSC:
Many, if not all, of us are now required to be lifelong learners in order to stay marketable. I was struck by that when I read the following excerpt from the above article:

“I’m learning the language,” Mr. Abrams said. “I’ve never used the words ‘facial biometric algorithms’ until this phone call.”

 
© 2021 | Daniel Christian