Employers Tiptoeing into TikTok Hiring: Beware, Attorneys Say — from news.bloomberglaw.com by Dan Papscun and Paige Smith

Excerpt:

  • The app encourages ‘hyper superficiality’ in hiring
  • Age discrimination also a top-line concern for attorneys


Google CEO Still Insists AI Revolution Bigger Than Invention of Fire — from gizmodo.com by Matt Novak
Pichai suggests the internet and electricity are also small potatoes compared to AI.

Excerpt:

The artificial intelligence revolution is poised to be more “profound” than the invention of electricity, the internet, and even fire, according to Google CEO Sundar Pichai, who made the comments to BBC media editor Amol Rajan in a podcast interview that first went live on Sunday.

“The progress in artificial intelligence, we are still in very early stages, but I viewed it as the most profound technology that humanity will ever develop and work on, and we have to make sure we do it in a way that we can harness it to society’s benefit,” Pichai said.

“But I expect it to play a foundational role pretty much across every aspect of our lives. You know, be it health care, be it education, be it how we manufacture things and how we consume information.”

 

AI voice actors sound more human than ever — and they’re ready to hire — from technologyreview.com by Karen Hao
A new wave of startups are using deep learning to build synthetic voice actors for digital assistants, video-game characters, and corporate videos.

Excerpt:

The company blog post drips with the enthusiasm of a ’90s US infomercial. WellSaid Labs describes what clients can expect from its “eight new digital voice actors!” Tobin is “energetic and insightful.” Paige is “poised and expressive.” Ava is “polished, self-assured, and professional.”

Each one is based on a real voice actor, whose likeness (with consent) has been preserved using AI. Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.

But the rise of hyperrealistic fake voices isn’t consequence-free. Human voice actors, in particular, have been left to wonder what this means for their livelihoods.

And below are a couple of somewhat related items:

Amazon’s latest voice interoperability move undermines Google — from protocol.com by Janko Roettgers
With a new toolkit, Amazon is making it easier to build devices that run multiple voice assistants — weakening one of Google’s key arguments against licensing the Google Assistant for such scenarios.

People should be able to pick whatever assistant they prefer for any given task, simply by invoking different words, Rubenson said. “We think it’s critical that customers have choice and flexibility,” he said. “Each will have their own strengths and capabilities.”

Protocol Next Up — from protocol.com by Janko Roettgers
Defining the future of tech and entertainment with Janko Roettgers.

Voice is becoming a true developer ecosystem. Amazon now has more than 900,000 registered Alexa developers, who collectively have built over 130,000 Alexa skills. And those skills are starting to show up in more and more places: “We now have hundreds of physical products” with Alexa built in, Alexa Voice Service & Alexa Skills VP Aaron Rubenson told me this week.

 

Watch a Drone Swarm Fly Through a Fake Forest Without Crashing — from wired.com by Max Levy
Each copter doesn’t just track where the others are. It constantly predicts where they’ll go.

From DSC:
I’m not too crazy about this drone swarm…in fact, the more I thought about it, the more alarming and nerve-racking I found it. It doesn’t take much imagination to think what the militaries of the world are already doing with this kind of thing. And our son is now in the Marines. So forgive me if I’m a bit biased here…but I can’t help but wonder what the role/impact of foot soldiers will be in the next war. I hope we don’t have one.

Anyway, just because we can…

 

The Future of Social Media: Re-Humanisation and Regulation — by Gerd Leonhard

How could social media become ‘human’ again? How can we stop the disinformation, dehumanisation and dataism that has resulted from social media’s algorithmic obsessions? I foresee that the EXTERNALITIES, i.e. the consequences of unmitigated growth of exponential digital technologies, will become just as big as the consequences of climate change. In fact, today, the social media industry already has quite a few parallels to the oil, gas and coal business: while private companies make huge profits from extracting the ‘oil’ (i.e. user data), the external damage is left to society and governments to fix. This needs to change! In this keynote I make some precise suggestions as to how that could happen.

Some snapshots/excerpts:

The future of social media -- a video by Gerd Leonhard in the summer of 2021


From DSC:
Gerd brings up some solid points here. His presentation and perspectives are not only worth checking out, but they’re worth some time for us to seriously reflect on what he’s saying.

What kind of future do we want?

And for you professors, teachers, instructional designers, trainers, and presenters out there, check out *how* he delivers the content. It’s well done and very engaging.


 

From DSC:
Again, as you can see from the items below…there are various pluses and minuses regarding the use of Artificial Intelligence (AI). Some of the items below are neither positive nor negative, but I found them interesting nonetheless.


How Amazon is tackling the A.I. talent crunch — from fortune.com by Jonathan Vanian

Excerpt:

“One way Amazon has adapted to the tight labor market is to require potential new programming hires to take classes in machine learning, said Bratin Saha, a vice president and general manager of machine learning services at Amazon. The company’s executives believe they can teach these developers machine learning basics over a few weeks so that they can work on more cutting-edge projects after they’re hired.”

“These are not formal college courses, and Saha said the recruits aren’t graded like they would be in school. Instead, the courses are intended to give new developers a foundation in machine learning and statistics so they can understand the theoretical underpinnings.”

Machine Learning Can Predict Rapid Kidney Function Decline — from sicklecellanemianews.com by Steve Bryson PhD; with thanks to Sam DeBrule for this resource

Excerpt:

Machine learning tools can identify sickle cell disease (SCD) patients at high risk of progressive kidney disease as early as six months in advance, a study shows. The study, “Using machine learning to predict rapid decline of kidney function in sickle cell anemia,” was published in the journal eJHaem.

NYPD’s Sprawling Facial Recognition System Now Has More Than 15,000 Cameras — from vice.com by Todd Feathers; with thanks to Sam DeBrule for this resource
The massive camera network is concentrated in predominantly Black and brown neighborhoods, according to a new crowdsourced report.

Excerpt:

The New York City Police Department has built a sprawling facial recognition network that may include more than 15,000 surveillance cameras in Manhattan, Brooklyn, and the Bronx, according to a massive crowdsourced investigation by Amnesty International.

“This sprawling network of cameras can be used by police for invasive facial recognition and risk turning New York into an Orwellian surveillance city,” Matt Mahmoudi, an artificial intelligence and human rights researcher at Amnesty, wrote in the group’s report. “You are never anonymous. Whether you’re attending a protest, walking to a particular neighbourhood, or even just grocery shopping—your face can be tracked by facial recognition technology using imagery from thousands of camera points across New York.”

Related to that article is this one:

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras — from wired.com by Sidney Fussell
Video from the cameras is often used in facial-recognition searches. A report finds they are most common in neighborhoods with large nonwhite populations.

Excerpt:

A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city’s residents, putting them at risk for misidentification. The group says it is the first to map the locations of that many cameras in the city.

Don’t End Up on This Artificial Intelligence Hall of Shame — from wired.com by Tom Simonite
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.

Excerpt:

The AI Incident Database is hosted by Partnership on AI, a nonprofit founded by large tech companies to research the downsides of the technology. The roll of dishonor was started by Sean McGregor, who works as a machine learning engineer at voice processor startup Syntiant. He says it’s needed because AI allows machines to intervene more directly in people’s lives, but the culture of software engineering does not encourage safety.

 

Microsoft President Warns of Orwell’s 1984 ‘Coming to Pass’ in 2024 — from interestingengineering.com by Chris Young
Microsoft’s Brad Smith warned we may be caught up in a losing race with artificial intelligence.

Excerpt (emphasis DSC):

The surveillance-state dystopia portrayed in George Orwell’s 1984 could “come to pass in 2024” if governments don’t do enough to protect the public against artificial intelligence (AI), Microsoft president Brad Smith warned in an interview for the BBC’s investigative documentary series Panorama.

During the interview, Smith warned of China’s increasing AI prowess and the fact that we may be caught up in a losing race with the technology itself.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith stated.

From DSC:
This is a major heads-up to all those in the legal/legislative realm — especially the American Bar Association (ABA) and the Bar Associations across the country! The ABA needs to realize they have to up their game and get with the incredibly fast pace of the twenty-first century. If that doesn’t occur, we and future generations will pay the price. Two thoughts come to mind in regard to the ABA and the law schools out there:

Step 1: Allow 100% online-based JD programs all the time, from here on out.

Step 2: Encourage massive new program development within all law schools to help future lawyers, judges, legislative reps, & others build up more emerging technology expertise & the ramifications thereof.

Google’s plan to make search more sentient — from vox.com by Rebecca Heilweil
Google announces new search features every year, but this time feels different.

Excerpt:

At the keynote speech of its I/O developer conference on Tuesday, Google revealed a suite of ways the company is moving forward with artificial intelligence. These advancements show Google increasingly trying to build AI-powered tools that seem more sentient and that are better at perceiving how humans actually communicate and think. They seem powerful, too.

Two of the biggest AI announcements from Google involve natural language processing and search.

Google also revealed a number of AI-powered improvements to its Maps platform that are designed to yield more helpful results and directions.

Google’s plans to bring AI to education make its dominance in classrooms more alarming — from fastcompany.com by Ben Williamson
The tech giant has expressed an ambition to transform education with artificial intelligence, raising fresh ethical questions.

Struggling to Get a Job? Artificial Intelligence Could Be the Reason Why — from newsweek.com by Lydia Veljanovski; with thanks to Sam DeBrule for the resource

Excerpt:

Except that isn’t always the case. In many instances, instead of your application being tossed aside by a HR professional, it is actually artificial intelligence that is the barrier to entry. While this isn’t a problem in itself—AI can reduce workflow by rapidly filtering applicants—the issue is that within these systems lies the possibility of bias.

It is illegal in the U.S. for employers to discriminate against a job applicant because of their race, color, sex, religion, disability, national origin, age (40 or older) or genetic information. However, these AI hiring tools are often inadvertently doing just that, and there are no federal laws in the U.S. to stop this from happening.

These Indian edtech companies are shaping the future of AI & robotics — from analyticsinsight.net by Apoorva Komarraju; May 25, 2021

Excerpt:

As edtech companies have taken a lead by digitizing education for the modern era, they have taken the stance to set up Atal Tinkering Labs in schools along with other services necessary for the budding ‘kidpreneurs’. With the availability of these services, students can experience 21st-century technologies like IoT, 3D printing, AI, and Robotics.

Researchers develop machine-learning model that accurately predicts diabetes, study says — from ctvnews.ca by Christy Somos

Excerpt:

TORONTO — Canadian researchers have developed a machine-learning model that accurately predicts diabetes in a population using routinely collected health data, a new study says.

The study, published in the JAMA Network Open journal, tested new machine-learning technology on routinely collected health data that examined the entire population of Ontario. The study was run by the ICES not-for-profit data research institute.

Using linked administrative health data from Ontario from 2006 to 2016, researchers created a validated algorithm by training the model on information taken from nearly 1.7 million patients.

Project Guideline: Enabling Those with Low Vision to Run Independently — from ai.googleblog.com by Xuan Yang; with thanks to Sam DeBrule for the resource

Excerpt:

For the 285 million people around the world living with blindness or low vision, exercising independently can be challenging. Earlier this year, we announced Project Guideline, an early-stage research project, developed in partnership with Guiding Eyes for the Blind, that uses machine learning to guide runners through a variety of environments that have been marked with a painted line. Using only a phone running Guideline technology and a pair of headphones, Guiding Eyes for the Blind CEO Thomas Panek was able to run independently for the first time in decades and complete an unassisted 5K in New York City’s Central Park.

Deepfake Maps Could Really Mess With Your Sense of the World — from wired.com by Will Knight
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.

In a paper published last month, researchers altered satellite images to show buildings in Seattle where there are none.

 

A couple of items from the May 2021 Inavate edition:

A very sharp learning space right here!

Sharp learning space full of AV equipment at Southampton University, UK

 

 

This Researcher Says AI Is Neither Artificial nor Intelligent — from wired.com by Tom Simonite
Kate Crawford, who holds positions at USC and Microsoft, says in a new book that even experts working on the technology misunderstand AI.

 

Shhhh, they’re listening: Inside the coming voice-profiling revolution — from fastcompany.com by Joseph Turow
Marketers are on the verge of using AI-powered technology to make decisions about who you are and what you want based purely on the sound of your voice.

Excerpt:

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of U.S. and EU laws applying to biometric surveillance. I analyzed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it.

It soon became clear to me that we’re in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing.

From DSC:
Hhhhmmm….

 

This is an abstract picture of a person's head made of connections peering sideways -- it links to Artificial intelligence and the future of national security from ASU

Artificial intelligence and the future of national security — from news.asu.edu

Excerpt:

Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience” and will be a source of enormous power for the companies and countries that harness them, according to the recently released Final Report of the National Security Commission on Artificial Intelligence.

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

Also see:

EU Set to Ban Surveillance, Start Fines Under New AI Rules — from bloomberg.com by Natalia Drozdiak

Excerpt:

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.

Also see:

Wrongfully arrested man sues Detroit police over false facial recognition match — from washingtonpost.com by Drew Harwell
The case could fuel criticism of police investigators’ use of a controversial technology that has been shown to perform worse on people of color

Excerpts:

A Michigan man has sued Detroit police after he was wrongfully arrested and falsely identified as a shoplifting suspect by the department’s facial recognition software in one of the first lawsuits of its kind to call into question the controversial technology’s risk of throwing innocent people in jail.

Robert Williams, a 43-year-old father in the Detroit suburb of Farmington Hills, was arrested last year on charges he’d taken watches from a Shinola store after police investigators used a facial recognition search of the store’s surveillance-camera footage that identified him as the thief.

Prosecutors dropped the case less than two weeks later, arguing that officers had relied on insufficient evidence. Police Chief James Craig later apologized for what he called “shoddy” investigative work. Williams, who said he had been driving home from work when the 2018 theft had occurred, was interrogated by detectives and held in custody for 30 hours before his release.

Williams’s attorneys did not make him available for comment Tuesday. But Williams wrote in The Washington Post last year that the episode had left him deeply shaken, in part because his young daughters had watched him get handcuffed in his driveway and put into a police car after returning home from work.

“How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” he wrote. “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.”

Addendum on 4/20/21:

 

Pro:

AI-powered chatbots automate IT help at Dartmouth — from edscoop.com by Ryan Johnston

Excerpt:

To prevent a backlog of IT requests and consultations during the coronavirus pandemic, Dartmouth College has started relying on AI-powered chatbots to act as an online service desk for students and faculty alike, the school said Wednesday.

Since last fall, the Hanover, New Hampshire, university’s roughly 6,600 students and 900 faculty have been able to consult with “Dart” — the virtual assistant’s name — to ask IT or service-related questions related to the school’s technology. More than 70% of the time, their question is resolved by the chatbot, said Muddu Sudhakar, the co-founder and CEO of Aisera, the company behind the software.

Con:

The Foundations of AI Are Riddled With Errors — from wired.com by Will Knight
The labels attached to images used to train machine-vision systems are often wrong. That could mean bad decisions by self-driving cars and medical algorithms.

Excerpt:

“What this work is telling the world is that you need to clean the errors out,” says Curtis Northcutt, a PhD student at MIT who led the new work. “Otherwise the models that you think are the best for your real-world business problem could actually be wrong.”

 

 

The metaverse: real world laws give rise to virtual world problems — from cityam.com by Gregor Pryor

Legal questions
Like many technological advances, from the birth of the internet to more modern-day phenomena such as the use of big data and artificial intelligence (AI), the metaverse will in some way challenge the legal status quo.

Whilst the growth and adoption of the metaverse will raise age-old legal questions, it will also generate a number of unique legal and regulatory obstacles that need to be overcome.

From DSC:
I’m posting this because it’s another example of why we have to pick up the pace within the legal realm. Organizations like the American Bar Association (ABA) are going to have to pick up the pace big time. Society is being impacted by a variety of emerging technologies such as these, and such changes are far from over. Law schools need to assess their roles and responsibilities in this new world as well.

Addendum on 3/29/21:
Below are some more examples from Jason Tashea’s “The Justice Tech Download” e-newsletter:

  • Florida prisons buy up location data from data brokers. (Techdirt) A prison mail surveillance company keeps tabs on those on the outside, too. (VICE)
  • Police reform requires regulating surveillance tech. (Patch) (h/t Rebecca Williams) A police camera that never tires stirs unease at the US First Circuit Court of Appeals. (Courthouse News)
  • A Florida sheriff’s office was sued for using its predictive policing program to harass residents. (Techdirt)
  • A map of e-carceration in the US. (Media Justice) (h/t Upturn)
  • This is what happens when ICE asks Google for your user information. (Los Angeles Times)
  • Data shows the NYPD seized 55,000 phones in 2020, and it returned less than 35,000 of them. (Techdirt)
  • The SAFE TECH Act will make the internet less safe for sex workers. (OneZero)
  • A New York lawmaker wants to ban the use of armed robots by police. (Wired)
  • A look at the first wave of government accountability of algorithms. (AI Now Institute) The algorithmic auditing trap. (OneZero)
  • The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making. (Association for Computing Machinery)
  • A new open dataset has 510 commercial legal contracts with 13,000+ labels. (Atticus Project)
  • JusticeText co-founder shares her experience building tech for public defenders. (Law360)

 

 

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.


By clicking this image, you will go to Sam DeBrule's Machine Learning e-Newsletter -- which deals with all topics regarding Artificial Intelligence

#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 
© 2021 | Daniel Christian