From DSC:
I post the following item because I’ve often wondered how law schools can best address the area of emerging technologies. It’s not just newly minted lawyers who need to be aware of these technologies’ potential pros and cons — and the developing laws around them. Judges, legislators, politicians, C-suite executives, and others also need to keep a pulse on these things.

Hermès Sues NFT Creator Over ‘MetaBirkin’ Sales — by Robert Williams
The French leather goods giant alleges trademark infringement and dilutive use of its iconic Birkin name.

Excerpt (emphasis DSC):

The complaint, which was first reported on The Fashion Law, raises questions about how trademark protections for real-world items will be enforced in the digital realm as commercial activity heats up in the metaverse. Brands including Balenciaga and Nike are experimenting with virtual fashion. Non-fungible tokens, or NFTs (unique digital assets authenticated using blockchain technology), depicting fashion items have sold for millions in recent months.

 

From DSC:
Should Economics classes be looking at the idea of a digital dollar?


The battle for control of the digital dollar — from protocol.com

Excerpt:

After months of delay, the Federal Reserve’s much-awaited report on a digital dollar could be out soon. Federal Reserve Chairman Jerome Powell told the Senate Banking Committee Tuesday that the white paper on a planned central bank digital currency (CBDC) is “ready to go,” possibly in weeks.

But don’t expect a detailed blueprint for an American CBDC. The white paper will be more of “an exercise in asking questions” about how the digital dollar should work, Powell said.

And there are plenty of questions. Some have already sparked a heated debate: How should consumers gain access to the digital dollar? And how much control should the Fed have?

Will a digital dollar lead to “digital authoritarianism”? That’s one of the biggest fears about a digital currency: a Big Brother future where the Fed has the power to track how people spend their money.

 

Feds’ spending on facial recognition tech expands, despite privacy concerns — by Tonya Riley

Excerpt:

The FBI on Dec. 30 signed a deal with Clearview AI for an $18,000 subscription license to the company’s facial recognition technology. While the value of the contract might seem just a drop in the bucket for the agency’s nearly $10 billion budget, the contract was significant in that it cemented the agency’s relationship with the controversial firm. The FBI previously acknowledged using Clearview AI to the Government Accountability Office but did not specify if it had a contract with the company.

From DSC:
What?!? Isn’t this yet another foot in the door for Clearview AI and the like? Is this the kind of world that we want to create for our kids?! Will our kids have any privacy whatsoever? I feel so powerless to effect change here. This technology, like other techs, will have a life of its own. Don’t think it will stop at finding criminals. 

AI being used in the hit series called Person of Interest

This is a snapshot from the series entitled “Person of Interest.”
Will this show prove to be right on the mark?

Addendum on 1/18/22:
As an example, check out this article:

Tencent is set to ramp up facial recognition on Chinese children who log into its gaming platform. The increased surveillance comes as the tech giant caps how long kids spend gaming on its platform. In August 2021, China imposed strict limits on how much time children could spend gaming online.

 

Isaiah 1:17

Learn to do right; seek justice.
    Defend the oppressed.
Take up the cause of the fatherless;
    plead the case of the widow.

From DSC:
This verse especially caught my eye as we have severe access to justice issues here in the United States.

 

Education Has Been Hammering the Wrong Nail. We Have to Focus on the Early Years. — from edsurge.com by Isabelle Hau
This article is part of the guide Survival Mode: Educators Reflect on a Tough 2021 and Brace for the Future.

Excerpt:

Moreover, children who are expelled in preschool or early elementary are 10 times more likely to be incarcerated. Those later societal costs might have been avoided if the children had been provided nurturing attention in their early years to regulate their emotions and buffer toxic stress, which often results from exposure to trauma.

The value of the human potential unlocked by early education and social and emotional interventions is more difficult to assess: it is arguably limitless.


 

California Lawmakers Ignore Data in Calls to Restrict the Expansion of Legal Services — from iaals.du.edu

Excerpt:

Earlier this week, two California lawmakers spoke up about these issues. Cheryl Miller, writing for Law.com’s The Recorder, reports: “The chairs of California’s two legislative judiciary committees this week accused the state bar of ‘divert[ing] its attention from its core mission of protecting the public’ by pursuing proposals to allow nonlawyers to offer a limited range of legal services.”

California Supreme Court Chief Justice Tani Cantil-Sakauye called the criticism “not surprising,” and we would agree. After all, it is the same message we’ve heard time and again from opponents to any real, impactful change within the legal profession. And the assumptions underlying that message have been discredited by our organization and many other scholars, researchers, and even regulators in other countries. So it is frustrating just how widespread and misleading opponent claims can be.

 


AI bots to user data: Is there space for rights in the metaverse? — from news.trust.org by Sonia Elks
In a virtual world populated by digital characters, privacy and property rights face unprecedented challenges, campaigners say

From virtual goods to AI-powered avatars that can be hired out by companies, a fast-growing digital world is pushing ownership and privacy rights into uncharted territory.

NYC Targets Artificial Intelligence Bias in Hiring Under New Law — from news.bloomberglaw.com by Erin Mulvaney

New York City has a new law on the books—one of the boldest measures of its kind in the country—that aims to curb hiring bias that can occur when businesses use artificial intelligence tools to screen out job candidates.

Employers in the city will be banned from using automated employment decision tools to screen job candidates unless the technology has been subject to a “bias audit” conducted within a year before the tool’s use.

‘Innovation Averse’ Law Schools Risk Missing Out on the Legal Industry’s Regulatory Renaissance — from law.com

“If law professors and legal educators and law school administrators aren’t in tune with how the practice of law is changing, not just in terms of how you deliver legal services, but these new areas that are emerging, then we’re doing a great disservice to our students,” said April Dawson, associate dean of Technology and Innovation at North Carolina Central University School of Law, during the “Redesigning Legal” speaker series panel discussion this week.

 

From DSC:
As I looked at the article below, I couldn’t help but wonder…what is the role of the American Bar Association (ABA) in this type of situation? How can the ABA help the United States deal with the impact/place of emerging technologies?


Clearview AI will get a US patent for its facial recognition tech — from engadget.com by J. Fingas
Critics are worried the company is patenting invasive tech.

Excerpt:

Clearview AI is about to get formal acknowledgment for its controversial facial recognition technology. Politico reports Clearview has received a US Patent and Trademark Office “notice of allowance” indicating officials will approve a filing for its system, which scans faces across public internet data to find people from government lists and security camera footage. The company just has to pay administrative fees to secure the patent.

In a Politico interview, Clearview founder Hoan Ton-That claimed this was the first facial recognition patent involving “large-scale internet data.” The firm sells its tool to government clients (including law enforcement) hoping to accelerate searches.

As you might imagine, there’s a concern the USPTO is effectively blessing Clearview’s technology and giving the company a chance to grow despite widespread objections to its technology’s very existence. 


 

Ohio State U. Unveils a Plan for All Students to Graduate Debt-Free — from chronicle.com by Eric Kelderman

Excerpt:

Nearly half of all undergraduates at Ohio State University take out loans to help pay the costs of attending college, borrowing an average of more than $27,000. Kristina M. Johnson, the new president of the land-grant university, wants to reduce that proportion to zero.

Johnson announced a plan on Friday, at her investiture, to reach that goal within a decade.

Instead of offering students federal direct loans as part of their financial-aid packages, the university will use a combination of grants, internships, and opportunities to assist with research.

“People without family means, which includes many of the country’s minority students, start their adult lives in a financial hole,” Johnson said during her investiture speech. “I want all of our graduates to be free to say yes to every great opportunity that comes their way.”

 

AI bots to user data: Is there space for rights in the metaverse? — from reuters.com by Sonia Elks

Summary

  • Facebook’s ‘metaverse’ plans fuel debate on virtual world
  • Shared digital spaces raise privacy, ownership questions
  • Rights campaigners urge regulators to widen safeguards

 

Legal education as a key stakeholder in legal services reform (276) — from legalevolution.org by Dan Rodriguez
To date, this highly influential stakeholder has had very little to say.

Excerpt:

The fierce and fascinating struggle underway in the American states over legal services reform brings to the table a large collection of interest groups.  These groups include law firms, legal aid organizations, entrepreneurs who might benefit financially from the liberalization of entry rules, and of course the gatekeeper entities, including state bar authorities and the state supreme courts, whose decisions are crucial to the evolution and shape of reform.  See Post 239 (beginning of a four-part series on serious challenges of bar federalism).

The identity of these specific groups may differ from state to state, as the legal ecosystem has contours often tailored to a particular state’s history and objectives, but the configuration of stakeholders has some rather common elements.

What remains somewhat opaque in this robust and interconnected battle over the reform of legal services is the voice of legal educators and the law schools.  These are, after all, the places in which future lawyers are educated and professional values are instilled.  It is hard to imagine a more fertile and opportune time to discuss the ambitions and philosophies of this next generation of legal professionals.

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed with the arrival of AI meets 5G and the IoT leading to the rise of the AIoT where the Edge of the network will become key for product and service innovation and business growth.

Also related/see:

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov.

 

From DSC:
The articles below made me wonder…what will lawyers, judges, and legislators need to know about Bitcoin, Ethereum, and other cryptocurrencies? (#EmergingTechnologies)

 
© 2022 | Daniel Christian