An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft — from washingtonpost.com by Drew Harwell

Excerpt:

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

 

From DSC:
Needless to say, this is very scary stuff! Now what…? Who in our society should get involved to thwart this kind of thing?

  • Programmers?
  • Digital audio specialists?
  • Legislators?
  • Lawyers?
  • The FBI?
  • Police?
  • Other?
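
From a programming standpoint, one line of defense is automated detection of synthetic speech. Below is a minimal, illustrative sketch of that idea, assuming you already have labeled recordings of genuine and AI-generated voices on disk. The directory names are hypothetical, and MFCC features plus a random forest are just one plausible starting point, not a production recipe.

```python
# Illustrative sketch: classify audio clips as genuine vs. synthetic speech.
# Assumes labeled WAV files already exist; the paths are hypothetical.
import glob
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(path):
    """Summarize a clip as the mean and std of its MFCC coefficients."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

real = [mfcc_features(p) for p in glob.glob("real_voices/*.wav")]       # hypothetical dir
fake = [mfcc_features(p) for p in glob.glob("synthetic_voices/*.wav")]  # hypothetical dir

X = np.array(real + fake)
y = np.array([0] * len(real) + [1] * len(fake))  # 0 = genuine, 1 = synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Detection like this is an arms race, of course; as the voice generators improve, the detectors must be retrained. That is one reason the legislators, lawyers, and law-enforcement folks on the list above still matter.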


Addendum on 9/12/19:

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?
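
To make the concern concrete, here is a deliberately naive sketch of how such a scoring system might be encoded. Every behavior and point value in it is invented for illustration; the unsettling part is precisely that whoever edits the rules table gets to decide what counts.

```python
# A toy "social credit" ledger. The RULES table is the whole problem:
# whoever edits it decides which behaviors are rewarded or punished.
RULES = {
    "paid_bill_on_time": +5,     # invented values throughout
    "jaywalking": -10,
    "criticized_platform": -50,  # who decided this belongs here?
}

def updated_score(score, events):
    """Apply the point delta for each observed event to a citizen's score."""
    for event in events:
        score += RULES.get(event, 0)
    return score

print(updated_score(1000, ["paid_bill_on_time", "criticized_platform"]))  # 955
```

Nothing in the code constrains what goes into RULES, or what a low score gets tied to. That is the question above, in executable form.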

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.
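
For what it’s worth, the report’s three screening criteria boil down to a very small decision rule. Here is a hedged sketch of one way to encode it; the category labels and the example company are placeholders of my own, not Pax’s actual methodology or findings.

```python
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    builds_relevant_tech: bool   # criterion 1: tech relevant to deadly AI
    has_military_projects: bool  # criterion 2: related military projects
    pledged_to_abstain: bool     # criterion 3: committed to abstain in future

def concern_level(c):
    """Rough, unofficial translation of the three criteria into a label."""
    if not c.builds_relevant_tech:
        return "not applicable"
    if c.pledged_to_abstain:
        return "lower concern"
    return "high concern" if c.has_military_projects else "medium concern"

# Placeholder data, not the report's actual assessment of any company.
print(concern_level(Company("ExampleCorp", True, True, False)))  # high concern
```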

Addendum on 8/23/19:

 

Autonomous robot deliveries are coming to 100 university campuses in the U.S. — from digitaltrends.com by Luke Dormehl

Excerpt:

Pioneering autonomous delivery robot company Starship Technologies is coming to a whole lot more university campuses around the U.S. The robotics startup announced that it will expand its delivery services to 100 university campuses in the next 24 months, building on its successful fleets at George Mason University and Northern Arizona University.

 

Postmates Gets Go-Ahead to Test Delivery Robot in San Francisco — from interestingengineering.com by Donna Fuscaldo
Postmates was granted permission to test a delivery robot in San Francisco.

 

And add those to the ones previously posted on Learning Ecosystems:

 

From DSC:
I’m grateful for John Muir and for the presidents of the United States who had the vision to set aside land for the national park system. Such parks are precious and provide much needed respite from the hectic pace of everyday life.

Closer to home, I’m grateful for what my parents’ vision was for a place to help bring the families together through the years. A place that’s peaceful, quiet, surrounded by nature and community.

So I wonder what kind of legacy the current generations are beginning to create. That is…do we really want to be known as the generations who created unchecked, chaotic armies of delivery drones, delivery robots, driverless pods, etc., to fill the skies, streets, sidewalks, and more?

I don’t. That’s not a gift to our kids or grandkids…not at all.

 

 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that there are significant ramifications to the cultures of the big tech companies…especially given the power of these emerging technologies and the growing influence they are having on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
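
Regarding the hate-speech study in the first bullet above: the core finding is a gap in false-positive rates between dialect groups, and that kind of audit is straightforward to run once you have dialect-labeled text and the model’s predictions. A minimal sketch, with invented placeholder data:

```python
# Minimal fairness audit: compare a toxicity classifier's false-positive
# rate across dialect groups. All labels below are invented placeholders;
# a real audit needs dialect-annotated text and actual model outputs.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of genuinely non-toxic texts (label 0) flagged as toxic (1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

groups = {  # (true labels, model predictions), hypothetical
    "aave_samples":  ([0, 0, 0, 0, 1], [1, 1, 0, 1, 1]),
    "other_samples": ([0, 0, 0, 0, 1], [0, 0, 0, 1, 1]),
}
for name, (truth, pred) in groups.items():
    print(name, "false-positive rate =", false_positive_rate(truth, pred))
```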
 

A handful of US cities have banned government use of facial recognition technology due to concerns over its accuracy and privacy. WIRED’s Tom Simonite talks with computer vision scientist and lawyer Gretchen Greene about the controversy surrounding the use of this technology.

 

 

Report: Smart-city IoT isn’t smart enough yet — from networkworld.com by Jon Gold
A report from Forrester Research details vulnerabilities affecting smart-city internet of things (IoT) infrastructure and offers some methods of mitigation.

 

Governments take first, tentative steps at regulating AI — from heraldnet.com by James McCusker
Can we control artificial intelligence’s potential for disrupting markets? Time will tell.

Excerpt:

State legislatures in New York and New Jersey have proposed legislation that represents the first, tentative steps at regulation. While the two proposed laws are different, they both have elements of information gathering about the risks to such things as privacy, security and economic fairness.

 

 

You’re already being watched by facial recognition tech. This map shows where — from fastcompany.com by Katharine Schwab
Digital rights nonprofit Fight for the Future has mapped out the physical footprint of the controversial technology, which is in use in cities across the country.

 

 

The coming deepfakes threat to businesses — from axios.com by Kaveh Waddell and Jennifer Kingson

Excerpt:

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company’s stock plummeting (or soaring), to extract money or to ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company’s share price would crumple.

What’s happening: For all the talk about fake videos, it’s deepfake audio that has emerged as the first real threat to the private sector.
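
One low-tech defense the excerpt points toward: never act on a voice alone. Below is an illustrative sketch of an out-of-band verification step for large wire requests, where the callback number comes from a trusted internal directory rather than the inbound call. The names, numbers, and threshold are all invented for illustration.

```python
# Sketch of an out-of-band check for payment requests received by phone.
# Core idea: confirm via a channel the (possibly fake) caller doesn't control.
import secrets

TRUSTED_DIRECTORY = {"ceo": "+44-20-0000-0000"}  # placeholder number
REVIEW_THRESHOLD = 10_000  # illustrative; real value set by company policy

def verify_wire_request(requester_role, amount, confirm_via_callback):
    """Refuse large transfers unless confirmed on a known-good channel."""
    if amount < REVIEW_THRESHOLD:
        return True  # small transfers follow the normal approval process
    challenge = secrets.token_hex(4)  # one-time code to read back on callback
    number = TRUSTED_DIRECTORY[requester_role]  # NOT the inbound caller ID
    return confirm_via_callback(number, challenge)

# confirm_via_callback stands in for a human step: dial the directory
# number and have the real executive repeat the challenge code.
approved = verify_wire_request("ceo", 240_000, lambda num, code: False)
print("wire approved:", approved)  # False until verified out-of-band
```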

 

From DSC…along these same lines see:

 

I opted out of facial recognition at the airport — it wasn’t easy — from wired.com by Allie Funk

Excerpt (emphasis DSC):

As a privacy-conscious person, I was uncomfortable boarding this way. I also knew I could opt out. Presumably, most of my fellow fliers did not: I didn’t hear a single announcement alerting passengers how to avoid the face scanners.

As I watched traveler after traveler stand in front of a facial scanner before boarding our flight, I had an eerie vision of a new privacy-invasive status quo. With our faces becoming yet another form of data to be collected, stored, and used, it seems we’re sleepwalking toward a hyper-surveilled environment, mollified by assurances that the process is undertaken in the name of security and convenience. I began to wonder: Will we only wake up once we no longer have the choice to opt out?

Until we have evidence that facial recognition is accurate and reliable—as opposed to simply convenient—travelers should avoid the technology where they can.

 

To figure out how to do so, I had to leave the boarding line, speak with a Delta representative at their information desk, get back in line, then request a passport scan when it was my turn to board. 

 

From DSC:
Readers of this blog will know that I am generally a pro-technology person. That said, there are times when I don’t trust humankind to use the power of some of these emerging technologies appropriately and ethically. Along these lines, I don’t like where facial recognition could be heading…and citizens don’t seem to have effective ways to quickly weigh in on this emerging technology. I find this to be a very troubling situation. How about you?

 

Daniel Christian -- A technology is meant to be a tool; it is not meant to rule.

 

 

From DSC:
A couple of somewhat scary excerpts from Meet Hemingway: The Artificial Intelligence Robot That Can Copy Your Handwriting (from forbes.com by Bernard Marr):

The Handwriting Company now has a robot that can create beautifully handwritten communication that mimics the style of an individual’s handwriting, while a robot from Brown University can replicate handwriting in a variety of languages even though it was trained only on Japanese characters.

Hemingway is The Handwriting Company’s robot that can mimic anyone’s style of handwriting. All that Hemingway’s algorithm needs to mimic an individual’s handwriting is a sample of handwriting from that person.

 

From DSC:
So now there are folks out there who can generate realistic “fakes” using video, handwriting, audio, and more. Super. Without technologies to detect such fakes, things could get ugly…especially as we approach a presidential election next year. I’m trying not to be negative, but it’s hard when fakes have become such a serious problem these days.

 

Addendum on 7/5/19:
AI poised to ruin Internet using “massive tsunami” of fake news — from futurism.com

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning… we feel it is an incredibly important topic with far too little discussion currently,” Tynski told The Verge.

 

From DSC:
Are you kidding me!? Geo-fencing technology or not, I don’t trust this for one second.


Amazon patents ‘surveillance as a service’ tech for its delivery drones — from theverge.com by Jon Porter
Including technology that cuts out footage of your neighbor’s house

Excerpt:

The patent gives a few hints how the surveillance service could work. It says customers would be allowed to pay for visits on an hourly, daily, or weekly basis, and that drones could be equipped with night vision cameras and microphones to expand their sensing capabilities.
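
Setting the trust question aside for a moment, the patent’s claim about cutting out footage of a neighbor’s house is, at bottom, a geofencing test. Here is a minimal sketch of the underlying check (the classic ray-casting point-in-polygon algorithm, with invented coordinates); footage of ground points outside the subscriber’s property polygon would be masked or discarded.

```python
# Ray-casting test: does a ground point fall inside the subscriber's
# property polygon? Points outside would have their footage cut out.
def inside_geofence(point, polygon):
    """Return True if (x, y) lies inside the polygon (a list of vertices)."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Invented coordinates: a square lot and two observed ground points.
lot = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_geofence((5, 5), lot))   # True  -> footage kept
print(inside_geofence((15, 5), lot))  # False -> footage cut out
```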

 

From DSC:
I just ran across this recently…what do you think of it?!

 

 

From DSC:
For me, this is extremely disturbing. And if I were a betting man, I’d wager that numerous nations and governments around the world — most certainly including the U.S. — have been developing new weapons of warfare for years based on artificial intelligence, robotics, automation, etc.

The question is, now what do we do?

Some very hard questions that numerous engineers and programmers need to be asking themselves these days…

By the way, the background audio on the clip above should either be non-existent or far more ominous — this stuff is NOT a joke.

Also see this recent posting. >>

 

Addendum on 6/26/19:

 

Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop” — that is, with no person involved at any point between identifying a target and killing them. And as facial recognition and decision-making algorithms become more powerful, it will only get easier.

 

 

Russian hackers behind ‘world’s most murderous malware’ probing U.S. power grid — from digitaltrends.com by Georgina Torbet

 

U.S. Escalates Online Attacks on Russia’s Power Grid — from nytimes.com by David Sanger and Nicole Perlroth

 

 

 

 

 