From DSC:
Very disturbing that citizens had no say in this. Legislators, senators, representatives, lawyers, law schools, politicians, engineers, programmers, professors, teachers, and more…please reflect upon our current situation here. How can we help create the kind of future that we can hand down to our kids and rest well at night…knowing we did all that we could to provide a dream — and not a nightmare — for them?


The Secretive Company That Might End Privacy as We Know It — from nytimes.com by Kashmir Hill
A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

 

Excerpts:

“But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year…”

Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested.
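
At a purely technical level, an app like this is an image-search pipeline: run a face through an embedding model, then look up the nearest stored embeddings in an index of scraped photos. The sketch below is a minimal, hypothetical illustration of that lookup step in Python with NumPy and made-up data; it is not Clearview's code, and the URLs and 128-dimensional embeddings are invented for illustration only.

```python
# Hypothetical sketch (not Clearview's code): how a face-search service could
# match an uploaded photo against a database of scraped images.
# Assumes face embeddings have already been computed by some face-recognition
# model; here they are just random vectors for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend database: one 128-dim embedding per scraped photo, plus its source URL.
db_embeddings = rng.normal(size=(1_000, 128))                        # placeholder data
db_urls = [f"https://example.com/photo/{i}" for i in range(1_000)]   # hypothetical URLs

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

db_embeddings = normalize(db_embeddings)

def search(query_embedding, top_k=5):
    """Return the URLs of the top_k most similar faces by cosine similarity."""
    q = normalize(query_embedding)
    scores = db_embeddings @ q          # cosine similarity, since rows are unit length
    best = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in best]

# A query photo would first be run through the same embedding model;
# here we fake it with another random vector.
query = rng.normal(size=128)
for url, score in search(query):
    print(f"{score:.3f}  {url}")
```

Scaled to billions of images, the same idea typically relies on approximate nearest-neighbor indexes rather than a brute-force matrix product, which is part of what makes such systems relatively cheap to operate.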

 

Indian police are using facial recognition to identify protesters in Delhi — from fastcompany.com by Kristin Toussaint

Excerpt:

At Modi’s rally on December 22, Delhi police used Automated Facial Recognition System (AFRS) software—which officials there acquired in 2018 as a tool to find and identify missing children—to screen the crowd for faces that match a database of people who have attended other protests around the city, and who officials said could be disruptive.

According to the Indian Express, Delhi police have long filmed these protest events, and the department announced Monday that officials fed that footage through AFRS. Sources told the Indian news outlet that once “identifiable faces” are extracted from that footage, a dataset will point out and retain “habitual protesters” and “rowdy elements.” That dataset was put to use at Modi’s rally to keep away “miscreants who could raise slogans or banners.”

 

From DSC:
Here in the United States…are we paying attention to today’s emerging technologies and collaboratively working to create a future dream — versus a future nightmare!?!  A vendor or organization might propose a beneficial reason to use their product or technology — and it might even meet the hype at times…but then come other, unintended uses and consequences of that technology. For example, in the article above, what started out as a technology meant to find and identify missing children (a benefit) was later used to identify protesters (an unintended consequence, and, I might add, a nightmare in terms of such an expanded scope of use)!

Along these lines, the youth of today have every right to voice their opinions and to have a role in developing or torpedoing emerging technologies. What we build and put into place now will impact their lives big time!

 

7 Artificial Intelligence Trends to Watch in 2020 — from interestingengineering.com by Christopher McFadden

Excerpts:

Per this article, the following trends were listed:

  1. Computer Graphics will greatly benefit from AI
  2. Deepfakes will only get better, er, worse
  3. Predictive text should get better and better
  4. Ethics will become more important as time goes by
  5. Quantum computing will supercharge AI
  6. Facial recognition will appear in more places
  7. AI will help in the optimization of production pipelines

Also, this article listed several more trends:

According to sources like The Next Web, some of the main AI trends for 2020 include:

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways.

According to sources like Forbes, some of the next “big things” in technology include, but are not limited to:

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

Also see:

Artificial intelligence predictions for 2020: 16 experts have their say — from verdict.co.uk by Ellen Daniel

Excerpts:

  • Organisations will build in processes and policies to prevent and address potential biases in AI
  • Deepfakes will become a serious threat to corporations
  • Candidate (and employee) care in the world of artificial intelligence
  • AI will augment humans, not replace them
  • Greater demand for AI understanding
  • Ramp up in autonomous vehicles
  • To fully take advantage of AI technologies, you’ll need to retrain your entire organisation
  • Voice technologies will infiltrate the office
  • IT will run itself while data acquires its own DNA
  • The ethics of AI
  • Health data and AI
  • AI to become an intrinsic part of robotic process automation (RPA)
  • BERT will open up a whole new world of deep learning use cases

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what’s in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.
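
For readers who want to see what this looks like in code, here is a brief sketch that pulls contextual token representations out of a pretrained BERT model, assuming the open-source Hugging Face transformers library and PyTorch are installed; the sentences are invented, and the point is only that the vector for an ambiguous word like "bank" changes with its context.

```python
# Minimal sketch: contextual (context-dependent) word representations from BERT.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "The bank raised interest rates.",
    "They had a picnic on the river bank.",
]

with torch.no_grad():
    for text in sentences:
        inputs = tokenizer(text, return_tensors="pt")
        outputs = model(**inputs)
        # One vector per token; the vector for "bank" differs between sentences
        # because BERT reads the whole sentence in both directions.
        token_vectors = outputs.last_hidden_state[0]
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        bank_vec = token_vectors[tokens.index("bank")]
        print(text, "->", bank_vec[:4].numpy())
```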


Don’t trust AI until we build systems that earn trust — from economist.com
Progress in artificial intelligence belies a lack of transparency that is vital for its adoption, says Gary Marcus, coauthor of “Rebooting AI”

Excerpts:

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques since they are so prone to failures and lack the transparency that researchers need to understand how algorithms reached their conclusions.

As part of The Economist’s Open Future initiative, we asked Mr Marcus about why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future.

Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.

The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high. It’s fine if autotagging people in photos turns out to be only 90 percent reliable—if it is just about personal photos that people are posting to Instagram—but it better be much more reliable when the police start using it to find suspects in surveillance photos.
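
His point about reliability is really a point about scale: a fixed error rate that is tolerable for personal photo tagging produces an enormous absolute number of mistakes once a system is run across an entire population. A back-of-the-envelope sketch, with hypothetical numbers rather than anything from the article, makes that concrete:

```python
# Back-of-the-envelope illustration (hypothetical numbers, not from the article):
# why "90 percent reliable" means very different things at different scales.
error_rate = 0.10                       # 90% reliable => 10% of matches are wrong
instagram_tags_per_day = 1_000          # one person's personal photo tags
police_searches_per_day = 1_000_000     # a city-wide surveillance deployment

print("Bad tags on personal photos per day:",
      int(instagram_tags_per_day * error_rate))      # ~100 mislabeled photos
print("Wrongly flagged people per day:     ",
      int(police_searches_per_day * error_rate))     # ~100,000 false matches
```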

 

2019 AI report tracks profound growth — from ide.mit.edu by Paula Klein

Excerpt:

Until now “we’ve been sorely lacking good data about basic questions like ‘How is the technology advancing’ and ‘What is the economic impact of AI?’ ” Brynjolfsson said. The new index, which tracks three times as many data sets as last year’s report, goes a long way toward providing answers.

  1. Education
  • At the graduate level, AI has rapidly become the most popular specialization among computer science PhD students in North America. In 2018, over 21% of graduating Computer Science PhDs specialized in Artificial Intelligence/Machine Learning.
  • Industry is the largest consumer of AI talent. In 2018, over 60% of AI PhD graduates went to industry, up from 20% in 2004.
  • In the U.S., AI faculty leaving academia for industry continues to accelerate, with over 40 departures in 2018, up from 15 in 2012 and none in 2004.

 

In the U.S., #AI faculty leaving #academia for industry continues to accelerate, with over 40 departures in 2018, up from 15 in 2012 and none in 2004.

 

The future of law and computational technologies: Two sides of the same coin — from law.mit.edu by Daniel Linna
Law and computation are often thought of as being two distinct fields. Increasingly, that is not the case. Dan Linna explores the ways a computational approach could help address some of the biggest challenges facing the legal industry.

Excerpt:

The rapid advancement of artificial intelligence (“AI”) introduces opportunities to improve legal processes and facilitate social progress. At the same time, AI presents an original set of inherent risks and potential harms. From a Law and Computational Technologies perspective, these circumstances can be broadly separated into two categories. First, we can consider the ethics, regulations, and laws that apply to technology. Second, we can consider the use of technology to improve the delivery of legal services, justice systems, and the law itself. Each category presents an unprecedented opportunity to use significant technological advancements to preserve and expand the rule of law.

For basic legal needs, access to legal services might come in the form of smartphones or other devices that are capable of providing users with an inventory of their legal rights and obligations, as well as providing insights and solutions to common legal problems. Better yet, AI and pattern matching technologies can help catalyze the development of proactive approaches to identify potential legal problems and prevent them from arising, or at least mitigate their risk.

We risk squandering abundant opportunities to improve society with computational technologies if we fail to proactively create frameworks to embed ethics, regulation, and law into our processes by design and default.

To move forward, technologists and lawyers must radically expand current notions of interdisciplinary collaboration. Lawyers must learn about technology, and technologists must learn about the law.


Considering AI in hiring? As its use grows, so do the legal implications for employers. — from forbes.com by Alonzo Martinez; with thanks to Paul Czarapata for his posting on Twitter on this

Excerpt:

As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.

Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.

But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.

 

Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle

Excerpts (emphasis DSC):

Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

 

From DSC:
Those working or teaching within the legal realm — this one’s for you. But it’s also for the leadership of the C-suites in our corporate world — as well as for all of the programmers, freelancers, engineers, and other employees working on AI within it.

By the way, and not to get all political here…but who’s to say what happens with our data when it’s being reviewed in Ukraine…?

 

Also see:

  • Opinion: AI for good is often bad — from wired.com by Mark Latonero
    Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has removed the financing of scientific and technical research. The money must therefore come from the private sector. But investors also expect a certain return. That is a problem. You cannot plan your R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working very hard without having to organize an annual conference to show off their newest and best whiz bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they worked quickly. We begin to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State


From DSC:
I wish that more faculty members would share their research, teaching methods, knowledge, and commentary with the world as this professor does (vs. talking only to other professors behind publishers’ walled-off content). In this case, Arvind happens to use Twitter. But if one doesn’t like to use Twitter, there’s also LinkedIn, WordPress/blogging, podcasting, and other outlets.


Emerging Tech Trend: Patient-Generated Health Data — from futuretodayinstitute.com — Newsletter Issue 124

Excerpt:

Near-Futures Scenarios (2023 – 2028):

Pragmatic: Big tech continues to develop apps that are either indispensably convenient, irresistibly addictive, or both, and we pay for them, not with cash, but with the data we (sometimes unwittingly) let the apps capture. But for health care and medical insurance apps, the stakes could literally be life-and-death. Consumers receive discounted premiums, co-pays, diagnostics and prescription fulfillment, but the data they give up in exchange leaves them more vulnerable to manipulation and invasion of privacy.

Catastrophic: Profit-driven drug makers exploit private health profiles and begin working with the Big Nine. They use data-based targeting to overprescribe to patients, netting themselves billions of dollars. Big Pharma targets and preys on people’s addictions, mental health predispositions and more, which, while undetectable on an individual level, takes a widespread societal toll.

Optimistic: Health data enables prescient preventative care. A.I. discerns patterns within gargantuan data sets that are otherwise virtually undetectable to humans. Accurate predictive algorithms identify complex combinations of risk factors for cancer or Parkinson’s, offer early screening and testing to high-risk patients, and encourage lifestyle shifts or treatments to eliminate or delay the onset of serious diseases. A.I. and health data create a utopia of public health. We happily relinquish our privacy for a greater societal good.

Watchlist: Amazon; Manulife Financial; GE Healthcare; Meditech; Allscripts; eClinicalWorks; Cerner; Validic; HumanAPI; Vivify; Apple; IBM; Microsoft; Qualcomm; Google; Medicare; Medicaid; national health systems; insurance companies.

 

A face-scanning algorithm increasingly decides whether you deserve the job — from washingtonpost.com by Drew Harwell
HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

Excerpt:

An artificial intelligence hiring system has become a powerful gatekeeper for some of America’s most prominent employers, reshaping how companies assess their workforce — and how prospective employees prove their worth.

Designed by the recruiting-technology firm HireVue, the system uses candidates’ computer or cellphone cameras to analyze their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated “employability” score.

 

The system, they argue, will assume a critical role in helping decide a person’s career. But they doubt it even knows what it’s looking for: Just what does the perfect employee look and sound like, anyway?

“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York.
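
Mechanically, an automatically generated "employability" score is often just a weighted combination of extracted signals used to rank candidates. The toy sketch below uses invented features and weights, and is not HireVue's model; if anything, it illustrates Whittaker's objection, because nothing in it checks whether those signals actually predict job performance.

```python
# Toy sketch with invented features/weights -- NOT HireVue's model.
# Candidates get a single "employability" score from a weighted sum of
# automatically extracted signals, then get ranked by it.
WEIGHTS = {"smile_rate": 0.4, "keyword_hits": 0.35, "speech_pace": 0.25}  # arbitrary

candidates = [
    {"name": "Alice", "smile_rate": 0.7, "keyword_hits": 0.9, "speech_pace": 0.6},
    {"name": "Bob",   "smile_rate": 0.9, "keyword_hits": 0.4, "speech_pace": 0.8},
    {"name": "Chen",  "smile_rate": 0.5, "keyword_hits": 0.8, "speech_pace": 0.9},
]

def employability(candidate):
    # Nothing here checks whether these signals have anything to do with
    # being good at the job -- that is precisely the critics' objection.
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

for c in sorted(candidates, key=employability, reverse=True):
    print(f"{c['name']}: {employability(c):.2f}")
```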

 

From DSC:
If you haven’t been screened out by an algorithm from an Applicant Tracking System recently, then you haven’t been looking for a job in the last few years. If that’s the case:

  • Then you might not be very interested in this posting.
  • You will be very surprised in the future, when you do need to search for a new job.

Because the truth is, it’s very difficult to get a human being to even look at your resume, much less meet you in person. The above article should disturb you even more: I don’t think the programmers have captured everything that goes on inside an experienced HR professional’s mind.

 

Also see:

  • In case after case, courts reshape the rules around AI — from muckrock.com
    AI Now Institute recommends improvements and highlights key AI litigation
    Excerpt:
    When undercover officers with the Jacksonville Sheriff’s Office bought crack cocaine from someone in 2015, they couldn’t actually identify the seller. Less than a year later, though, Willie Allen Lynch was sentenced to 8 years in prison, picked through a facial recognition system. He’s still fighting in court over how the technology was used, and his case and others like it could ultimately shape the use of algorithms going forward, according to a new report.
 

Deepfakes: When a picture is worth nothing at all — from law.com by Katherine Forrest

Excerpt:

“Deepfakes” is the name for highly realistic, falsified imagery and sound recordings; they are digitized and personalized impersonations. Deepfakes are made by using AI-based facial and audio recognition and reconstruction technology; AI algorithms are used to predict facial movements as well as vocal sounds. In her Artificial Intelligence column, Katherine B. Forrest explores the legal issues likely to arise as deepfakes become more prevalent.

 

YouTube’s algorithm hacked a human vulnerability, setting a dangerous precedent — from which-50.com by Andrew Birmingham

Excerpt (emphasis DSC):

Even as YouTube’s recommendation algorithm was rolled out with great fanfare, the fuse was already burning. A project of Google Brain designed to optimise engagement, it did something unforeseen — and potentially dangerous.

Today, we are all living with the consequences.

As Zeynep Tufekci, an associate professor at the University of North Carolina, explained to attendees of Hitachi Vantara’s Next 2019 conference in Las Vegas this week, “What the developers did not understand at the time is that YouTube’s algorithm had discovered a human vulnerability. And it was using this [vulnerability] at scale to increase YouTube’s engagement time — without a single engineer thinking, ‘is this what we should be doing?’”

 

The consequence of the vulnerability — a natural human tendency to engage with edgier ideas — led to YouTube’s users being exposed to increasingly extreme content, irrespective of their preferred areas of interest.

“What they had done was use machine learning to increase watch time. But what the machine learning system had done was to discover a human vulnerability. And that human vulnerability is that things that are slightly edgier are more attractive and more interesting.”
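
To see how an objective like "maximize watch time" can drift toward edgier content without any engineer deciding that it should, consider the deliberately oversimplified sketch below; the videos and predicted watch times are invented, and this is not YouTube's system, only an illustration of a greedy engagement objective.

```python
# Deliberately oversimplified sketch (not YouTube's system): a recommender
# that only optimizes predicted watch time will surface whatever keeps
# people watching longest, with no notion of whether it *should*.
videos = [
    {"title": "Intro to gardening",        "predicted_watch_minutes": 4.0},
    {"title": "Gardening conspiracy?!",    "predicted_watch_minutes": 7.5},  # edgier, stickier
    {"title": "EXTREME gardening EXPOSED", "predicted_watch_minutes": 9.2},  # edgiest, stickiest
]

def recommend(candidates, k=2):
    # The whole "policy": sort by the engagement objective and take the top k.
    return sorted(candidates,
                  key=lambda v: v["predicted_watch_minutes"],
                  reverse=True)[:k]

for v in recommend(videos):
    print(v["title"])
# The objective alone decides the ranking; nothing in the code asks
# whether steering viewers toward ever-edgier content is desirable.
```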

 

From DSC:
Just because we can…


Can you make AI fairer than a judge? Play our courtroom algorithm game — from technologyreview.com by Karen Hao and Jonathan Stray
The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.

Excerpt:

As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.

But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them—software engineers—are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness? 

This story attempts to offer an answer. And to do so, we need your help. We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair. (Don’t worry—this won’t involve looking at code!)

The algorithm we’re examining is known as COMPAS, and it’s one of several different “risk assessment” tools used in the US criminal legal system.
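
To make "mathematically quantifying fairness" concrete, here is a small sketch that computes one common fairness metric, the false positive rate per group, for a generic threshold-based risk score. The data and threshold are invented and this is not COMPAS; it only illustrates the kind of group-level comparison the article's game asks readers to tweak.

```python
# Illustrative only: invented data, not COMPAS. Shows one common fairness
# metric -- the false positive rate (people flagged "high risk" who did not
# reoffend) computed separately for two groups.
records = [
    # (group, risk_score 0-10, actually_reoffended)
    ("A", 8, False), ("A", 6, True),  ("A", 3, False), ("A", 9, True),
    ("B", 7, False), ("B", 8, False), ("B", 5, True),  ("B", 9, True),
]
THRESHOLD = 6   # scores above this are treated as "high risk"

def false_positive_rate(group):
    flagged_innocent = sum(1 for g, s, r in records
                           if g == group and s > THRESHOLD and not r)
    innocent = sum(1 for g, s, r in records if g == group and not r)
    return flagged_innocent / innocent

for group in ("A", "B"):
    print(f"Group {group}: FPR = {false_positive_rate(group):.2f}")
# If the two rates differ, the tool burdens one group's non-reoffenders more
# than the other's -- one of several competing definitions of "unfair."
```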

 

But whether algorithms should be used to arbitrate fairness in the first place is a complicated question. Machine-learning algorithms are trained on “data produced through histories of exclusion and discrimination,” writes Ruha Benjamin, an associate professor at Princeton University, in her book Race After Technology. Risk assessment tools are no different. The greater question about using them—or any algorithms used to rank people—is whether they reduce existing inequities or make them worse.

 

You can also see change in these articles:
