Is Artificial Intelligence Undermining The Legal System? — from lawyer-monthly.com
Globally recognised artificial intelligence expert Dr Lance Eliot explains how AI is undermining the legal system.

Excerpt:

Well, imagine if I told you that the text messages, the emails, and the video clips were all crafted via AI-based deepfake technologies. None of that seeming “evidence” of wrongdoing, or at least of inappropriate actions by the prosecutor, is real. It certainly looks real. The texts use the same texting style that the prosecutor normally uses. The emails match the prosecutor’s written style in other emails.

And the most damning of the materials, those video clips, clearly show the prosecutor’s face, and the words are spoken in the prosecutor’s own voice. You might have been willing to assume that the texts and the emails could be faked, but the video seems to be the final straw. This is the prosecutor caught on video saying things that are utterly untoward in this context. All of it could readily be fabricated with today’s AI-based deepfake technology.

So, be on the watch for AI-based deepfake materials produced about you.

 

Meet The Secretive Surveillance Wizards Helping The FBI And ICE Wiretap Facebook And Google Users — from forbes.com by Thomas Brewster
A small Nebraska company is helping law enforcement around the world spy on users of Google, Facebook and other tech giants. A secretly recorded presentation to police reveals how deeply embedded in the U.S. surveillance machine PenLink has become.

Excerpts:

PenLink might be the most pervasive wiretapper you’ve never heard of.

With $20 million in revenue every year from U.S. government customers such as the Drug Enforcement Administration, the FBI, Immigration and Customs Enforcement (ICE), and almost every other law enforcement agency in the federal directory, PenLink enjoys a steady stream of income. That doesn’t include its sales to local and state police, where it also does significant business but for which no revenue figures are available. Forbes viewed contracts across the U.S., including towns and cities in California, Florida, Illinois, Hawaii, North Carolina, and Nevada.

 

China Is About to Regulate AI—and the World Is Watching — from wired.com by Jennifer Conrad
Sweeping rules will cover algorithms that set prices, control search results, recommend videos, and filter content.

Excerpt:

On March 1, China will outlaw this kind of algorithmic discrimination as part of what may be the world’s most ambitious effort to regulate artificial intelligence. Under the rules, companies will be prohibited from using personal information to offer users different prices for a product or service.

The sweeping rules cover algorithms that set prices, control search results, recommend videos, and filter content. They will impose new curbs on major ride-hailing, ecommerce, streaming, and social media companies.

 

The US is testing robot patrol dogs on its borders. Should we worry? — from interestingengineering.com by Loukia Papadopoulos
Bow-wow just got darker.

Excerpt:

However, we can’t shake the feeling that their deployment is creating a dystopian future, and not necessarily a safe one, especially if the dogs are trained to operate autonomously. That is a capability we are not sure the robots yet have. Will this be the next step for border control? If so, how can we trust machines with rifles? Time will tell how this situation evolves.
 

Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other techs, will have a life of its own. Don’t think it will stop at finding criminals. 

AI being used in the hit series called Person of Interest

This is a snapshot from the series “Person of Interest.”
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

From DSC:
As with many emerging technologies, there appear to be some significant pros and cons re: the use of NFTs (Non-Fungible Tokens).

The question I wonder about is: How can the legal realm help address the massive impacts of the exponential pace of technological change in our society these days? For example:

Technicians, network engineers, data center specialists, computer scientists, and others also need to be asking themselves how they can help out in these areas as well.

Emphasis below is mine.


NFTs Are Hot. So Is Their Effect on the Earth’s Climate — from wired.com by Gregory Barber
The sale of a piece of crypto art consumed as much energy as the artist’s studio uses in two years. Now the artist is campaigning to reduce the medium’s carbon emissions.

Excerpt:

The works were placed for auction on a website called Nifty Gateway, where they sold out in 10 seconds for thousands of dollars. The sale also consumed 8.7 megawatt-hours of energy, as he later learned from a website called Cryptoart.WTF.

NFTs And Their Role In The “Metaverse” — from 101blockchains.com by Georgia Weston

Many people perceive NFTs as mere images of digital artworks or collectibles that they can sell for massive prices. However, the frenzy surrounding digital art in recent times has pointed to many new possibilities for NFTs. For example, the NFT-metaverse connection undoubtedly presents a promising use case for NFTs. The road ahead brings many new opportunities for investors, enterprises, and hobbyists, which can shape NFT usage and adoption in the long term.

NFTs or non-fungible tokens are a new class of digital assets, which are unique, indivisible, and immutable. They help in representing the ownership of digital and physical assets on the blockchain. Starting from digital artwork to the gaming industry, NFTs are making a huge impact everywhere.

The decentralized nature of the blockchain offers the prospects for unlimited business opportunities and social interaction. Metaverse offers extremely versatile, scalable, and interoperable digital environments. Most important of all, the metaverse blends innovative technologies with models of interaction between participants from individual and enterprise perspectives. 
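To make the “unique, indivisible, immutable” idea above concrete, here is a minimal sketch in code. This is my own illustration, not a real blockchain or the ERC-721 standard: the class name, methods, and the in-memory dictionaries are all hypothetical, standing in for what a smart contract’s ownership ledger does.

```python
# A toy, in-memory sketch of the NFT idea: each token ID is unique (it can
# only be minted once), the asset reference never changes after minting, and
# ownership is tracked per whole token (no fractional shares). Real NFTs
# live in smart contracts on a blockchain, not in a Python dict.

class ToyNFTRegistry:
    def __init__(self):
        self._owners = {}  # token_id -> current owner
        self._assets = {}  # token_id -> asset reference (e.g., an artwork URI)

    def mint(self, token_id: str, owner: str, asset_uri: str) -> None:
        # Uniqueness: a given token ID can only ever be minted once.
        if token_id in self._owners:
            raise ValueError(f"token {token_id} already exists")
        self._owners[token_id] = owner
        self._assets[token_id] = asset_uri

    def transfer(self, token_id: str, sender: str, recipient: str) -> None:
        # Only the current owner may transfer; the asset reference is untouched.
        if self._owners.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owners[token_id] = recipient

    def owner_of(self, token_id: str) -> str:
        return self._owners[token_id]
```

Even this toy version shows why an NFT is a record of ownership rather than the artwork itself: transferring the token changes only the owner entry, never the asset it points to.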

From DSC:
How might the developments occurring with NFTs and the Metaverse impact a next-gen learning platform?

—–

Artist shuts down because people keep stealing their work to make NFTs — from futurism.com by Victor Tangermann
NFT theft is a huge problem

Someone is selling NFTs of Olive Garden locations that they do not own — from futurism.com
And you can mint a breadstick NFT — for free, of course

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without the fraught incentives of private, federal, or academic research, and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In the future, the variety, volume, and velocity of data are likely to increase substantially as we move to the era of 5G and devices at the edge of the network. The author argues that our experience of AI development alongside the arrival of 3G and then 4G networks will be dramatically overshadowed when AI meets 5G and the IoT, leading to the rise of the AIoT, where the edge of the network will become key to product and service innovation and business growth.

Also related/see:

 

Surveillance in Schools Associated With Negative Student Outcomes — from techlearning.com by Erik Ofgang
Surveillance at schools is meant to keep students safe but sometimes it can make them feel like suspects instead.

Excerpt:

“We found that schools that rely heavily on metal detectors, random book bag searches, school resource officers, and other methods of surveillance had a negative impact relative to those schools who relied on those technologies least,” says Odis Johnson Jr., the lead author of the study and the Bloomberg Distinguished Professor of Social Policy & STEM Equity at Johns Hopkins.

The researchers also found that Black students are four times more likely to attend a high- versus low-surveillance school, and students who attend high-surveillance schools are more likely to be poor.

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov.

 

 

Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo — from forbes.com by Annie Brown

Excerpt:

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area that AI has already begun to help with. AI can streamline the thousands of results an internet or directory search would otherwise provide, offering a smaller, digestible handful of relevant authorities for legal research. This is already proving helpful, and with more targeted machine learning it will only get better.

The possible benefits go on: automated drafting of documents and contracts, document review, and contract analysis are among those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal industry, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: “The possibilities are endless, but the burden of care is very heavy[…]we must act and evolve with [caution].”

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.

Also relevant/see:

 

“Algorithms are opinions embedded in code.” (Cathy O’Neil)

 

The Fight to Define When AI Is ‘High Risk’ — from wired.com by Khari Johnson
Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

Excerpt:

The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.

 

From DSC:
Yet another example of the need for the legislative and legal realms to try and catch up here.

The legal realm needs to try and catch up with the exponential pace of technological change

 

Many Americans aren’t aware they’re being tracked with facial recognition while shopping  — from techradar.com by Anthony Spadafora
You’re not just on camera, you’re also being tracked

Excerpt:

Despite consumer opposition to facial recognition, the technology is currently being used in retail stores throughout the US according to new research from Piplsay.

While San Francisco banned the police from using facial recognition back in 2019, and the EU called for a five-year ban on the technology last year, several major retailers in the US, including Lowe’s, Albertsons, and Macy’s, have been using it for both fraud and theft detection.

From DSC:
I’m not sure how prevalent this practice is…and that’s precisely the point. We don’t know what all of those cameras are actually doing in our stores, gas stations, supermarkets, etc. I put this in the categories of policy, law schools, legal, government, and others, as the legislative and legal realms need to scramble to catch up with this Wild Wild West.

Along these lines, I was watching a portion of 60 Minutes last night, where they were doing a piece on autonomous trucks (reportedly set to hit the roads without a person sometime later this year). When asked about oversight, there was some…but not much.

Readers of this blog will know that I have often wondered…”How does society weigh in on these things?”

Along these same lines, also see:

  • The NYPD Had a Secret Fund for Surveillance Tools — from wired.com by Sidney Fussell
    Documents reveal that police bought facial-recognition software, vans equipped with x-ray machines, and “stingray” cell site simulators—with no public oversight.
 
© 2025 | Daniel Christian