Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other techs, will have a life of its own. Don’t think it will stop at finding criminals. 

AI being used in the hit series called Person of Interest

This is a snapshot from the series entitled “Person of Interest.”
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

From DSC:
As with many emerging technologies, there appear to be some significant pros and cons re: the use of NFTs (Non-Fungible Tokens).

The question I wonder about is: How can the legal realm help address the massive impacts of the exponential pace of technological change in our society these days? For example:

Technicians, network engineers, data center specialists, computer scientists, and others also need to be asking themselves how they can help out in these areas as well.

Emphasis below is mine.


NFTs Are Hot. So Is Their Effect on the Earth’s Climate — from wired.com by Gregory Barber
The sale of a piece of crypto art consumed as much energy as the artist’s studio uses in two years. Now the artist is campaigning to reduce the medium’s carbon emissions.

Excerpt:

The works were placed for auction on a website called Nifty Gateway, where they sold out in 10 seconds for thousands of dollars. The sale also consumed 8.7 megawatt-hours of energy, as he later learned from a website called Cryptoart.WTF.

NFTs And Their Role In The “Metaverse” — from 101blockchains.com by Georgia Weston

Many people would perceive NFTs as mere images of digital artworks or collectibles which they can sell for massive prices. However, the frenzy surrounding digital art in present times has pointed out many new possibilities with NFTs. For example, the NFT metaverse connection undoubtedly presents a promising use case for NFTs. The road for the future of NFTs brings many new opportunities for investors, enterprises, and hobbyists, which can shape up NFT usage and adoption in the long term. 

NFTs, or non-fungible tokens, are a new class of digital assets that are unique, indivisible, and immutable. They represent the ownership of digital and physical assets on the blockchain. From digital artwork to the gaming industry, NFTs are making a huge impact everywhere.
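The ownership model described above — unique, indivisible tokens mapped to owners — can be illustrated with a toy sketch. To be clear, this is plain Python and not a blockchain; a real NFT contract (e.g., one following the ERC-721 standard) enforces these rules on-chain, and the class and method names here are invented for illustration:

```python
class ToyNFTRegistry:
    """Toy illustration of the NFT ownership model: each token ID is
    unique and maps to exactly one owner. NOT a real blockchain."""

    def __init__(self):
        self._owners = {}   # token_id -> owner
        self._next_id = 1

    def mint(self, owner, metadata_uri):
        """Create a unique, indivisible token pointing at an asset."""
        token_id = self._next_id
        self._next_id += 1
        self._owners[token_id] = owner
        return token_id

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, sender, recipient, token_id):
        """Only the current owner may transfer the token."""
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

registry = ToyNFTRegistry()
art = registry.mint("alice", "ipfs://example-metadata")
registry.transfer("alice", "bob", art)
print(registry.owner_of(art))  # → bob
```

The point of the sketch is the invariant, not the implementation: a token cannot be split, duplicated, or moved by anyone but its current owner.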

The decentralized nature of the blockchain offers the prospects for unlimited business opportunities and social interaction. Metaverse offers extremely versatile, scalable, and interoperable digital environments. Most important of all, the metaverse blends innovative technologies with models of interaction between participants from individual and enterprise perspectives. 

From DSC:
How might the developments occurring with NFTs and the Metaverse impact a next-gen learning platform?

—–

Artist shuts down because people keep stealing their work to make NFTs — from futurism.com by Victor Tangermann
NFT theft is a huge problem

Someone is selling NFTs of Olive Garden locations that they do not own — from futurism.com
And you can mint a breadstick NFT — for free, of course

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.


 

Surveillance in Schools Associated With Negative Student Outcomes — from techlearning.com by Erik Ofgang
Surveillance at schools is meant to keep students safe but sometimes it can make them feel like suspects instead.

Excerpt:

“We found that schools that rely heavily on metal detectors, random book bag searches, school resource officers, and other methods of surveillance had a negative impact relative to those schools who relied on those technologies least,” says Odis Johnson Jr., the lead author of the study and the Bloomberg Distinguished Professor of Social Policy & STEM Equity at Johns Hopkins.

The researchers also found that Black students are four times more likely to attend a high- versus low-surveillance school, and students who attend high-surveillance schools are more likely to be poor.

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov.

 

 

Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo — from forbes.com by Annie Brown

Excerpt:

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area that AI has already begun to help out with. AI can streamline the thousands of results an internet or directory search would otherwise provide, offering a smaller digestible handful of relevant authorities for legal research. This is already proving helpful and with more targeted machine learning it would only get better.

The possible benefits go on; automated drafts of documents and contracts, document review, and contract analysis are some of those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal one, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: “The possibilities are endless, but the burden of care is very heavy […] we must act and evolve with [caution].”

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.


 

“Algorithms are opinions embedded in code.” — Cathy O’Neil

 

The Fight to Define When AI Is ‘High Risk’ — from wired.com by Khari Johnson
Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

Excerpt:

The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.

 

From DSC:
Yet another example of the need for the legislative and legal realms to try and catch up here.

The legal realm needs to try and catch up with the exponential pace of technological change

 

Many Americans aren’t aware they’re being tracked with facial recognition while shopping — from techradar.com by Anthony Spadafora
You’re not just on camera, you’re also being tracked

Excerpt:

Despite consumer opposition to facial recognition, the technology is currently being used in retail stores throughout the US according to new research from Piplsay.

While San Francisco banned the police from using facial recognition back in 2019 and the EU called for a five-year ban on the technology last year, several major retailers in the US, including Lowe’s, Albertsons, and Macy’s, have been using it for both fraud and theft detection.

From DSC:
I’m not sure how prevalent this practice is…and that’s precisely the point. We don’t know what all of those cameras are actually doing in our stores, gas stations, supermarkets, etc. I put this in the categories of policy, law schools, legal, government, and others, as the legislative and legal realms need to scramble to catch up to this Wild Wild West.

Along these lines, I was watching a portion of 60 Minutes last night where they were doing a piece on autonomous trucks (reportedly to hit the roads without a person sometime later this year). When asked about oversight, there was some…but not much.

Readers of this blog will know that I have often wondered…”How does society weigh in on these things?”

Along these same lines, also see:

  • The NYPD Had a Secret Fund for Surveillance Tools — from wired.com by Sidney Fussell
    Documents reveal that police bought facial-recognition software, vans equipped with x-ray machines, and “stingray” cell site simulators—with no public oversight.
 

Employers Tiptoeing into TikTok Hiring: Beware, Attorneys Say — from news.bloomberglaw.com by Dan Papscun and Paige Smith

Excerpt:

  • The app encourages ‘hyper superficiality’ in hiring
  • Age discrimination also a top-line concern for attorneys

 

 

Google CEO Still Insists AI Revolution Bigger Than Invention of Fire — from gizmodo.com by Matt Novak
Pichai suggests the internet and electricity are also small potatoes compared to AI.

Excerpt:

The artificial intelligence revolution is poised to be more “profound” than the invention of electricity, the internet, and even fire, according to Google CEO Sundar Pichai, who made the comments to BBC media editor Amol Rajan in a podcast interview that first went live on Sunday.

“The progress in artificial intelligence, we are still in very early stages, but I viewed it as the most profound technology that humanity will ever develop and work on, and we have to make sure we do it in a way that we can harness it to society’s benefit,” Pichai said.

“But I expect it to play a foundational role pretty much across every aspect of our lives. You know, be it health care, be it education, be it how we manufacture things and how we consume information.”

 

AI voice actors sound more human than ever — and they’re ready to hire — from technologyreview.com by Karen Hao
A new wave of startups are using deep learning to build synthetic voice actors for digital assistants, video-game characters, and corporate videos.

Excerpt:

The company blog post drips with the enthusiasm of a ’90s US infomercial. WellSaid Labs describes what clients can expect from its “eight new digital voice actors!” Tobin is “energetic and insightful.” Paige is “poised and expressive.” Ava is “polished, self-assured, and professional.”

Each one is based on a real voice actor, whose likeness (with consent) has been preserved using AI. Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.
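The “feed some text into the voice engine” workflow described above is, in API terms, a simple text-to-speech request. Here is a hypothetical sketch — WellSaid Labs’ actual API is not shown; the endpoint, parameters, and helper names below are all invented for illustration:

```python
import json
import urllib.request

def build_payload(text, voice_id):
    """Package the text and the chosen voice as a JSON request body."""
    return json.dumps({"text": text, "voice": voice_id}).encode()

def synthesize(text, voice_id, api_url, api_key):
    """Send text to a (hypothetical) TTS endpoint; receive audio bytes back."""
    req = urllib.request.Request(
        api_url,
        data=build_payload(text, voice_id),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # audio bytes (e.g. WAV/MP3) to save or play

# Hypothetical usage -- the endpoint, key, and voice name are invented:
# audio = synthesize("Welcome aboard.", "tobin", "https://api.example.com/tts", "MY_KEY")
```

The business model hangs on that last line: once the licensed voice exists as an endpoint, generating new speech is just another API call.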

But the rise of hyperrealistic fake voices isn’t consequence-free. Human voice actors, in particular, have been left to wonder what this means for their livelihoods.

And below are a couple of somewhat related items:

Amazon’s latest voice interoperability move undermines Google — from protocol.com by Janko Roettgers
With a new toolkit, Amazon is making it easier to build devices that run multiple voice assistants — weakening one of Google’s key arguments against licensing the Google Assistant for such scenarios.

People should be able to pick whatever assistant they prefer for any given task, simply by invoking different words, Rubenson said. “We think it’s critical that customers have choice and flexibility,” he said. “Each will have their own strengths and capabilities.”

Protocol Next Up — from protocol.com by Janko Roettgers
Defining the future of tech and entertainment with Janko Roettgers.

Voice is becoming a true developer ecosystem. Amazon now has more than 900,000 registered Alexa developers, who collectively have built over 130,000 Alexa skills. And those skills are starting to show up in more and more places: “We now have hundreds of physical products” with Alexa built in, Alexa Voice Service & Alexa Skills VP Aaron Rubenson told me this week.

 

Watch a Drone Swarm Fly Through a Fake Forest Without Crashing — from wired.com by Max Levy
Each copter doesn’t just track where the others are. It constantly predicts where they’ll go.
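The “predicts where they’ll go” idea in that dek is, at its simplest, short-horizon trajectory extrapolation: check a neighbor’s *future* position, not just its current one. A minimal sketch follows — constant-velocity prediction only, which is far cruder than the planners used in the actual research:

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity extrapolation: where a craft will be dt seconds out."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def too_close(pos_a, pos_b, min_sep=1.0):
    """Check a pair of (predicted) positions against a safety radius."""
    return math.dist(pos_a, pos_b) < min_sep

# A copter doesn't just check a neighbor's current position --
# it checks where both craft are *predicted* to be over a short horizon.
my_future = predict_position((3.0, 2.0, 2.0), (0.0, 1.0, 0.0), dt=2.0)
their_future = predict_position((0.0, 5.0, 2.0), (1.0, -1.0, 0.0), dt=2.0)
print(too_close(my_future, their_future))  # → False (separation ≈ 1.41 m > 1.0 m)
```

If the predicted separation did fall below the safety radius, a real planner would replan its own trajectory; here the sketch only performs the check.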

From DSC:
I’m not too crazy about this drone swarm…in fact, the more I thought about it, the more alarming and nerve-racking I found it. It doesn’t take much imagination to think what the militaries of the world are already doing with this kind of thing. And our son is now in the Marines. So forgive me if I’m a bit biased here…but I can’t help but wonder what the role/impact of foot soldiers will be in the next war. I hope we don’t have one.

Anyway, just because we can…

 
© 2021 | Daniel Christian