Law’s Looming Skills Crisis — from forbes.com by Mark Cohen

Excerpts (emphasis DSC):

The good news is that legal professionals have more career paths, lifestyle options, and geographic nimbleness than ever before. The bad news is that relatively few in the industry are prepared to fill the new roles.

A growing list of clients demand transformed legal services. What does that mean? Legal professionals must meld law, technology, and business and apply principles of digital transformation to the legal function. They must be proactive, data-driven, client-centric, and collaborative*. They must appreciate that clients want solutions to business challenges, not legal tomes.

Legal culture responds to the warp speed change — when it does at all — with buzzwords, denial, and self-congratulation. The packed calendar of industry award dinners celebrating pioneers, innovation, diversity, and other self-declared advances belies data exposing law’s dreadful scorecard on diversity, gender pay equality, advancement opportunities for non-white males, and other legal guild cultural holdovers. Then there’s the disconnect between the lawyer and client view of industry performance. Law’s net promoter score lags other professions and almost all industries. Legal culture needs a jolt; client-centricity, the ability to respond rapidly and effectively to new risk factors and challenges, data-driven judgments, and agile workforces are among law’s transformational musts.

Legal culture is slow to embrace data, technology, new delivery models, multidisciplinary practice, regulatory reform, collaboration, diversity, gender pay equality, the distinction between the practice of law and the delivery of legal services, client-centricity, and digital transformation. Law is rooted in precedent; it looks to the past to prepare for the future. That is no longer the world we live in.

 

From DSC:
*And I would add the ability to look into the future, develop some potential scenarios, and craft some responses to those scenarios…in other words, practice some of the methods used in futurism.

I would encourage my colleagues within the legal education world — which includes the American Bar Association (ABA) — but also judges, lawyers, legislators, attorneys general, public defenders, and many others to realize that we need to massively pick up the pace if we are to help tame the wild west of emerging technologies these days!

 

 

 

 

[ABA] Council enacts new bar passage standard for law schools — from americanbar.org

Excerpt (emphasis DSC):

On May 17, the Council of the ABA Section of Legal Education and Admissions to the Bar approved a major change in the bar passage standard, known as 316, that would require 75 percent of a law school’s graduates who sit for the bar to pass it within two years. The change takes effect immediately although schools falling short of the standard would have at least two years to come into compliance.

Twice since 2017, the ABA policy-making House of Delegates has voted against the change, as some delegates feared it would have an adverse effect on law schools with significant minority enrollment. But under ABA rules and procedures, the Council, which is recognized by the U.S. Department of Education as the national accreditor of law schools, has the final say on accreditation matters.

 

Also see:

  • ABA’s Tougher Bar Pass Rule for Law Schools Applauded, Derided — from law.com by Karen Sloan
    The American Bar Association’s new standard could increase pressure on jurisdictions like California with high cut scores to lower that threshold. It could also add momentum to the burgeoning movement to overhaul the bar exam itself.

“Either the ABA Council simply ignored the clear empirical evidence that the new bar standard will decrease diversity in the bar, or it passed the new standard with the hope that states, like California, that have unreasonably high bar cut scores will lower those metrics in order to ameliorate the council’s action,” Patton said.

 

At that January meeting, former ABA President Paulette Brown, the first African-American woman to hold that position, called the proposed change “draconian.”

“I know and understand fully that the [ABA] council has the right to ignore what we say,” she said. “That does not absolve us of our responsibility to give them a very clear and strong message that we will not idly stand by while they decimate the diversity in the legal profession.”

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies like Amazon are selling the technology to police departments.

 

Also see:

 

People, Power and Technology: The Tech Workers’ View — from doteveryone.org.uk

Excerpt:

People, Power and Technology: The Tech Workers’ View is the first in-depth research into the attitudes of the people who design and build digital technologies in the UK. It shows that workers are calling for an end to the era of moving fast and breaking things.

Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society. This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and teams that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.

While Silicon Valley CEOs have tried to reverse the “techlash” by showing their responsible credentials in the media, this research shows that workers:

    • need guidance and skills to help navigate new dilemmas
    • have an appetite for more responsible leadership
    • want clear government regulation so they can innovate with awareness

Also see:

  • U.K. Tech Staff Quit Over Work On ‘Harmful’ AI Projects — from forbes.com by Sam Shead
    Excerpt:
    An alarming number of technology workers operating in the rapidly advancing field of artificial intelligence say they are concerned about the products they’re building. Some 59% of U.K. tech workers focusing on AI have experience of working on products that they felt might be harmful for society, according to a report published on Monday by Doteveryone, the think tank set up by lastminute.com cofounder and Twitter board member Martha Lane Fox.

 

 

 

Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
On the one hand: incredible work! Fantastic job! On the other hand, if this type of deepfake can be done, how can any video be trusted from here on out? What technology/app will be able to confirm that a video actually shows that person, actually saying those words?

Will we get to a point where a video states, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based technology be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
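For what it’s worth, one building block for such an “electronic signature” already exists: cryptographically signing a hash of the video file. Below is a minimal sketch in Python, assuming the third-party cryptography package; the file name dali_statement.mp4 and the key handling are hypothetical, and this is only one possible approach rather than an established standard for video authenticity.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_of_file(path: str) -> bytes:
    """Hash the video file so the signature covers its exact contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()


# The speaker (or their studio) would hold the private key; viewers only need the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_hash = sha256_of_file("dali_statement.mp4")  # hypothetical file name
signature = private_key.sign(video_hash)

# A player or browser plug-in could verify the signature before declaring the clip authentic.
try:
    public_key.verify(signature, video_hash)
    print("Signature checks out: the file is unchanged since it was signed.")
except InvalidSignature:
    print("Warning: this video was altered or was never signed by this person.")
```

Note that a scheme like this only proves the file has not changed since the person (or their studio) signed it; it says nothing about whether the footage itself is genuine, which is where provenance registries, blockchain-based or otherwise, would have to come in.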

 

 

From DSC:
Re: the Learning from the Living [Class] Room vision of a next gen learning platform

 

Learning from the Living Class Room

 

…wouldn’t it be cool if you could use your voice to ask your smart/connected “TV” type of device:

“Show me the test questions for Torts I from WMU-Cooley Law School.” Cooley could then charge $0.99 for those questions.

The system would then track how you did on those questions. The ones you got right would come up for review less often than the ones you got wrong, and the more often you answer a question correctly, the less frequently you are asked it.

You sign up for such streams of content — and the system assesses you periodically. This helps a person keep certain topics/information fresh in their memory. This type of learning method would be incredibly helpful for students trying to pass the Bar or other types of large/summative tests — especially when a student has to be able to recall information that they learned over the last 3-5 years.
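What is being described here is essentially spaced repetition. Here is a minimal sketch of one well-known variant, a Leitner-style box system, in Python; the review intervals, the Question fields, and the sample Torts prompts are illustrative assumptions, not features of any existing platform.

```python
import datetime
from dataclasses import dataclass, field

# Review intervals (in days) for each Leitner "box"; these numbers are
# illustrative, not taken from any particular product.
INTERVALS = [1, 3, 7, 21, 60]


@dataclass
class Question:
    prompt: str
    box: int = 0  # higher box = answered correctly more often
    next_review: datetime.date = field(default_factory=datetime.date.today)


def record_answer(q: Question, correct: bool) -> None:
    """Move the question between boxes and schedule its next appearance."""
    if correct:
        q.box = min(q.box + 1, len(INTERVALS) - 1)  # seen less and less often
    else:
        q.box = 0  # missed questions come back quickly
    q.next_review = datetime.date.today() + datetime.timedelta(days=INTERVALS[q.box])


def due_today(questions: list[Question]) -> list[Question]:
    """The stream the learner actually sees on a given day."""
    today = datetime.date.today()
    return [q for q in questions if q.next_review <= today]


# Example: a tiny "Torts I" stream.
torts = [Question("Elements of negligence?"), Question("Define res ipsa loquitur.")]
record_answer(torts[0], correct=True)   # pushed further out
record_answer(torts[1], correct=False)  # comes back tomorrow
print([q.prompt for q in due_today(torts)])
```

A real platform would persist this state per learner and per stream of content, but the core scheduling idea is this small.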

Come to think of it…this method could help all of us in learning new disciplines/topics throughout our lifetimes. Sign up for the streams of content that you want to learn more about…and drop the (no longer relevant) subscriptions as needed.

 

We need to tap into streams of content in our next gen learning platform

 

5 new legal professions under the impact of legal tech — from medium.com by Valentin Pivovarov

Excerpt:

Professionals who work in the field of jurisprudence should take into account the coming changes. It is necessary to realize the significance of the transition from an industrial society operating with printed texts to an information society operating with Internet-oriented resources. Legal innovations implementation will allow specialists to change their routines and make their work more effective. It is already important to retrain and look for new opportunities. And it doesn’t matter whether you work in a large law company or your own legal tech startup. The game is changing.

 

2019 State of Corporate Law Departments Report — from Thomson Reuters Legal Executive Institute, the International Bar Association (IBA), the Corporate Legal Operations Consortium (CLOC), and UK-based legal research firm Acritas

 

 

Because many corporate law departments are faced with dynamic and wide-ranging problems, the solutions require a highly diverse set of skills and capabilities. Teams of lawyers alone are no longer enough to solve all problems in optimal ways. That’s why law departments must provide legal support to corporations that not only enables them to maximize their competitive advantage, but also safeguards the organization against unnecessary risk.

The State of Corporate Law Departments 2019 — a new report that was recently published by Thomson Reuters Legal Executive Institute, the International Bar Association (IBA), the Corporate Legal Operations Consortium (CLOC), and UK-based legal research firm Acritas — examines the landscape for corporate law departments and explains that in order to maximize the value of legal services being delivered, corporate law departments must make their best efforts to improve the impact of those legal services while at the same time reducing the cost of those services.

Excerpts (emphasis DSC):

However, this report suggests that in order to maximize the value delivered, it’s time to pay as much attention to improving the impact of legal services the corporate law departments are delivering as it is to reducing the cost of those services.

Indeed, today’s legal problems are dynamic and wide-ranging, and the solutions require a highly diverse set of skills and capabilities. Teams of lawyers alone are no longer enough to solve problems in optimal ways. All types of professional need to work together collaboratively, often from different organizations, and they need the support of modern working processes and systems.

Innovative law departments and innovative law firms score significantly higher across all key performance areas, including the ultimate measures of quality and value.

Innovation incorporates a whole host of different areas, such as embracing legal technologies, utilizing expert professionals holistically with lawyers, overhauling work processes and pricing models, and building collaborative partnerships between in-house teams and their outside law firms and alternative legal services suppliers.

To that end, the report identified a number of key levers that corporate law departments can use to create a higher performing legal function and enhance the impact that their departments make on the overall success of the organization.

The solution to this challenge reinforces the key findings of this report as a whole — the need to tap into a diverse range of skills beyond legal expertise, to access new technologies, and to report on the progress made.

 

 

DARPA is reportedly eyeing a high-tech contact lens straight out of ‘Mission: Impossible’ — from taskandpurpose.com by Jared Keller

 

Just because we can...does not mean we should.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) is reportedly interested in a new wirelessly-connected contact lens recently unveiled in France, the latest in the agency’s ongoing search for small-scale technology to augment U.S. service members’ visual capabilities in the field.

 

From DSC:
We may not be there yet (and in my mind, that’s a good thing). But when this tech gets further developed and gets its foot in the door — military style — it may then expand its reach and scope. Then it gets integrated into other areas of society. If many people were very uncomfortable having someone walk through a public place wearing/using Google Glass, how will they/we feel about this one? Speaking for myself, I don’t like it.

 

An algorithm wipes clean the criminal pasts of thousands — from bbc.com by Dave Lee

Excerpt:

This month, a judge in California cleared thousands of criminal records with one stroke of his pen. He did it thanks to a ground-breaking new algorithm that reduces a process that took months to mere minutes. The programmers behind it say: we’re just getting started solving America’s urgent problems.
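To make the idea concrete, here is a minimal sketch of bulk eligibility screening over conviction records in Python. The offense list, the five-year rule, and the record fields are entirely hypothetical placeholders for illustration; they are not the criteria or the code used in the California effort described above.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ConvictionRecord:
    person_id: str
    offense: str
    conviction_date: date
    is_felony: bool


# Hypothetical eligibility rules, for illustration only; real statutes are far more detailed.
ELIGIBLE_OFFENSES = {"simple possession", "petty theft"}
MIN_YEARS_SINCE_CONVICTION = 5


def is_eligible(record: ConvictionRecord, today: date) -> bool:
    """Apply the same checks a clerk would make by hand, but in microseconds."""
    years_elapsed = (today - record.conviction_date).days / 365.25
    return (
        record.offense in ELIGIBLE_OFFENSES
        and not record.is_felony
        and years_elapsed >= MIN_YEARS_SINCE_CONVICTION
    )


def records_to_clear(records: list[ConvictionRecord], today: date) -> list[ConvictionRecord]:
    return [r for r in records if is_eligible(r, today)]


# A court could run an entire docket at once instead of reviewing petitions one by one.
docket = [
    ConvictionRecord("A-101", "simple possession", date(2010, 3, 2), is_felony=False),
    ConvictionRecord("A-102", "burglary", date(2012, 7, 9), is_felony=True),
]
print([r.person_id for r in records_to_clear(docket, date.today())])
```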

 

From LinkedIn.com today:

 


Also see:


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we still be able to clearly see the sun? An exaggeration…? Maybe…maybe not.

But now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder…“How did we get to this place?”

 

If you owned this $137,000+ car, would you turn its keys over to your 16- to 25-year-old?!

 

As another example, just because we can…

just because we can does not mean we should

 

…doesn’t mean we should.

 


 

Legal Services Innovation Index

 


Excerpts:

“This index should also be a resource for law schools and law students. It will help law schools better understand the evolution of the legal landscape, which will help them better prepare their students for the future. Law students can use this index to learn more about how the profession is changing and the knowledge and skills that they should develop for long-term success. The index also aims to provide law students information about the law firms recruiting them as well as a framework for assessing each law firm’s strategies for the future. Again, I caution that this index is simply an initial attempt to measure indicators of innovation and various weaknesses have been acknowledged. That said, the index and this initial information provides a starting point for very important discussions.”

 

“The problem to be solved is the lack of access to legal services. Experts estimate that approximately 80 percent of the impoverished and 50 percent of the middle class lack access to legal services.”

 

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
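For readers curious how little code an experiment like this takes, here is a minimal sketch of a single comparison using Amazon Rekognition’s CompareFaces operation via boto3. The file names, the region, and the 80 percent threshold are assumptions for illustration; the Times does not publish its exact pipeline, which may have used a different Rekognition operation.

```python
import boto3

# One way to run a single face comparison against a known head shot.
# File names, region, and the 80% threshold are illustrative assumptions,
# not details from the Times' experiment.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("employee_headshot.jpg", "rb") as headshot, open("park_camera_frame.jpg", "rb") as frame:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": headshot.read()},  # the known face
        TargetImage={"Bytes": frame.read()},     # a frame from the public camera feed
        SimilarityThreshold=80,                  # only return matches above 80%
    )

for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Possible match with {match['Similarity']:.1f}% similarity at {box}")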

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

 

 

 

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check

From DSC:
I don’t like this use of facial recognition by Amazon at all. Some organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks in through the back door of society’s house. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are a chilling thing indeed. China is using this technology to identify and track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling erosion of privacy taking place in several societies around the globe has got to be stopped.

 

 