Legal Tech Broke Investment Record in 2019 as Sector Matures — from biglawbusiness.com by Sam Skolnik

Excerpts:

  • Investments in legal tech soared past $1 billion in 2019
  • Key legal tech conference boasted record attendance

Legal technology deals and investments stayed on a fast track in 2019 as the sector became increasingly relevant to how Big Law firms and corporate legal divisions operate. Legal tech investments flew past the $1 billion mark by the end of the third quarter, after first hitting that milestone the year before.

Also see:

“E-discovery sits at the intersection of two industries not known for diversity: legal and high-tech. Despite what can feel like major wins, statistics paint a bleak picture—most U.S. lawyers are white, managing partners are primarily male, and only 2% of partners in major firms are black; leadership at e-discovery companies has historically reflected this demographic. The next decade will see a major shift in focus for leadership and talent development at e-discovery providers as we join law firms and corporate legal departments in putting our money where our mouths are and deliver recruitment and retention programs that fight discrimination and actively recruit, retain, and promote women, minority, and underserved talent.” 

Sarah Brown, director of marketing, Inventus

 

7 Artificial Intelligence Trends to Watch in 2020 — from interestingengineering.com by Christopher McFadden

Excerpts:

Per this article, the following trends were listed:

  1. Computer Graphics will greatly benefit from AI
  2. Deepfakes will only get better, er, worse
  3. Predictive text should get better and better
  4. Ethics will become more important as time goes by
  5. Quantum computing will supercharge AI
  6. Facial recognition will appear in more places
  7. AI will help in the optimization of production pipelines

This article also listed several more trends. According to sources like The Next Web, some of the main AI trends for 2020 include:

  • The use of AI to make healthcare more accurate and less costly
  • Greater attention paid to explainability and trust
  • AI becoming less data-hungry
  • Improved accuracy and efficiency of neural networks
  • Automated AI development
  • Expanded use of AI in manufacturing
  • Geopolitical implications for the uses of AI

Artificial Intelligence offers great potential and great risks for humans in the future. While still in its infancy, it is already being employed in some interesting ways.

According to sources like Forbes, some of the next “big things” in technology include, but are not limited to:

  • Blockchain
  • Blockchain As A Service
  • AI-Led Automation
  • Machine Learning
  • Enterprise Content Management
  • AI For The Back Office
  • Quantum Computing AI Applications
  • Mainstreamed IoT

Also see:

Artificial intelligence predictions for 2020: 16 experts have their say — from verdict.co.uk by Ellen Daniel

Excerpts:

  • Organisations will build in processes and policies to prevent and address potential biases in AI
  • Deepfakes will become a serious threat to corporations
  • Candidate (and employee) care in the world of artificial intelligence
  • AI will augment humans, not replace them
  • Greater demand for AI understanding
  • Ramp up in autonomous vehicles
  • To fully take advantage of AI technologies, you’ll need to retrain your entire organisation
  • Voice technologies will infiltrate the office
  • IT will run itself while data acquires its own DNA
  • The ethics of AI
  • Health data and AI
  • AI to become an intrinsic part of robotic process automation (RPA)
  • BERT will open up a whole new world of deep learning use cases

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what’s in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.
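To make the idea of "bidirectional context" concrete, here is a toy sketch. This is not real BERT: the 3-dimensional vectors and the neighbor-averaging rule are invented for illustration. It shows the key property the excerpt describes, namely that a context-sensitive representation can pull the same surface word toward different meanings, which a static word-vector lookup cannot do:

```python
import math

# Toy 3-d "embeddings," invented for illustration; real BERT vectors have
# hundreds of dimensions and are learned from massive text corpora.
STATIC = {
    "river":   [1.0, 0.0, 0.0],
    "fishing": [0.9, 0.1, 0.0],
    "money":   [0.0, 1.0, 0.0],
    "deposit": [0.1, 0.9, 0.0],
    "bank":    [0.5, 0.5, 0.0],  # ambiguous between the two senses
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def contextual(word, sentence):
    """Mix a word's static vector with the average of its sentence
    neighbors -- a crude stand-in for BERT's bidirectional attention."""
    neighbors = [STATIC[w] for w in sentence if w != word]
    avg = [sum(col) / len(neighbors) for col in zip(*neighbors)]
    return [0.5 * a + 0.5 * b for a, b in zip(STATIC[word], avg)]

river_bank = contextual("bank", ["fishing", "river", "bank"])
money_bank = contextual("bank", ["deposit", "money", "bank"])

# The same surface word now leans toward a different sense in each sentence.
print(cosine(river_bank, STATIC["river"]))  # high
print(cosine(money_bank, STATIC["river"]))  # low
```

In real BERT the mixing is done with learned attention weights over the whole sentence in both directions, via pretrained models such as those in the Hugging Face transformers library.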

 

 

Don’t trust AI until we build systems that earn trust — from economist.com
Progress in artificial intelligence belies a lack of transparency that is vital for its adoption, says Gary Marcus, coauthor of “Rebooting AI”

Excerpts:

Mr Marcus argues that it would be foolish of society to put too much stock in today’s AI techniques since they are so prone to failures and lack the transparency that researchers need to understand how algorithms reached their conclusions.

As part of The Economist’s Open Future initiative, we asked Mr Marcus about why AI can’t do more, how to regulate it and what teenagers should study to remain relevant in the workplace of the future.

Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.

The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high. It’s fine if autotagging people in photos turns out to be only 90 percent reliable—if it is just about personal photos that people are posting to Instagram—but it better be much more reliable when the police start using it to find suspects in surveillance photos.
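One way to picture the missing "engineering guarantees" Marcus describes is a release gate that holds a model to a stricter accuracy bar on high-stakes inputs than on routine ones. The sketch below is hypothetical: the thresholds, the toy matching "model," and the test cases are all invented for illustration:

```python
def accuracy(model, cases):
    """Fraction of (input, expected) cases the model gets right."""
    return sum(model(x) == y for x, y in cases) / len(cases)

def release_gate(model, routine_cases, high_stakes_cases,
                 routine_bar=0.90, high_stakes_bar=0.99):
    """Deployment check: 90% may be fine for photo autotagging, but the
    model must clear a much stricter bar on high-stakes cases."""
    return (accuracy(model, routine_cases) >= routine_bar
            and accuracy(model, high_stakes_cases) >= high_stakes_bar)

# Toy stand-in "model": declares a match when two names share a first letter.
model = lambda pair: pair[0][0] == pair[1][0]

routine = [(("alice", "anna"), True), (("bob", "bill"), True),
           (("carl", "dan"), False), (("eve", "erin"), True)]
high_stakes = [(("frank", "fred"), True), (("gina", "hank"), False),
               (("ivan", "igor"), True), (("jane", "kate"), False)]

print(release_gate(model, routine, high_stakes))
```

The point is not the specific numbers but the practice: making reliability requirements explicit and blocking deployment when they are not met, the way crash tests block an unsafe car.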

 

From the American Bar Association: 2019 Solo and Small Firm Technology Usage Report


Excerpt:

Solos and small firms are in a unique position to leverage technology in order to become more sustainable and competitive with larger firms and alternative legal service providers—they have the ability to be more agile than larger firms because of their size. There are numerous technology options available and many are solo and small firm budget-friendly. Consumers are demanding convenience, transparency, and efficiency from the legal profession. Attorneys must be adaptable to this driving force. The legal profession has some of the highest rates of addiction, depression, and suicide. Women are leaving the profession in droves and lawyers are struggling to find work-life balance. Technology use can reduce human error, increase efficiency, reduce overhead, provide flexibility, enhance attorney well-being, and better meet client needs.

 

Hopefully it won’t be long before virtual access to legal assistance becomes mainstream!

From DSC:
Along these lines, we’ll likely see more bots and virtual legal assistants that we will be able to ask questions of.

#A2J #AccessToJustice #legal #lawyers #society #legaltech #bots #videoconferencing #AI #VirtualAssistants

Along these lines, also see:

Innovative and inspired: How this US law school is improving access to justice — from studyinternational.com

Excerpt:

Though court and other government websites in the US provide legal information, knowing what to search for and understanding legal jargon can be difficult for lay people.

Spot, software that is being developed at the LIT Lab, aims to fix this.

“You know you have a housing problem. But very few people think about their housing problems in terms of something like constructive eviction,” explains David Colarusso, who heads the LIT Lab. “The idea is to have the tool be able to spot those issues based upon people’s own language.”

Developed by Colarusso and students, Spot uses a machine-learning algorithm to understand legal queries posed by lay persons. With Spot, entering a question in plain English like “My apartment is so moldy I can’t stay there anymore. Is there anything I can do?” brings up search results that would direct users to the right legal issue. In this case, the query is highly likely to be related to a housing issue or, more specifically, to the legal term “constructive eviction.”
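The article does not describe Spot's internals, but the general shape of an issue-spotter can be sketched as a classifier that scores a plain-language query against per-issue vocabularies. Everything below is hypothetical: the labels and keyword lists are invented, and the real Spot uses a trained machine-learning model rather than hand-written keywords:

```python
# Hypothetical issue labels and keyword sets, invented for illustration.
ISSUE_KEYWORDS = {
    "housing / constructive eviction": {"apartment", "moldy", "landlord",
                                        "rent", "evicted", "stay"},
    "employment": {"fired", "boss", "wages", "overtime", "job"},
    "family": {"divorce", "custody", "child", "support"},
}

def spot_issue(query: str) -> str:
    """Return the issue label whose keyword set best overlaps the query."""
    cleaned = query.lower().replace("?", " ").replace(".", " ").replace(",", " ")
    tokens = set(cleaned.split())
    return max(ISSUE_KEYWORDS, key=lambda label: len(tokens & ISSUE_KEYWORDS[label]))

query = "My apartment is so moldy I can't stay there anymore. Is there anything I can do?"
print(spot_issue(query))  # → housing / constructive eviction
```

A learned model generalizes where keywords cannot ("there's water coming through my ceiling" mentions no keyword above), which is exactly why Colarusso's team trains Spot on people's own language.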

 

Lastly, here’s an excerpt from INSIGHT: What’s ‘Innovative’ in BigLaw? It’s More Than the Latest Tech Tools — from news.bloomberglaw.com by Ryan Steadman and Mark Brennan

Top Innovation Factors for Success

  • The first step is always to observe and listen.
  • True innovation is about rigorously defining a client problem and then addressing it through a combination of workflow process, technology, and people.
  • Leave aside the goal of wholesale transformation and focus instead on specific client use cases.

Before revving the engines of the innovation process, complete the safety check. Successful innovation requires a deliberate, holistic approach that takes into consideration people, process, and technology. Firms and vendors that listen and learn before implementing significant change will stand apart from competitors—and help ensure long-term success.

 

Delta Model Lawyer: Lawyer Competencies for the Computational Age — from law.mit.edu by Caitlin “Cat” Moon
Technology changes the ways that people interact with one another. As a result, the roles and competencies required for many professions are evolving. Law is no exception. Cat Moon offers the Delta Model as a tool for legal professionals to understand how to adapt to these changes.

Excerpt:

The [law] schools must begin training the profession to cope with and understand computers. […] Minimizing the pain and problems which may be caused by computer-created unknowns is a responsibility of the profession.

 

Accessibility at a Crossroads: Balancing Legal Requirements, Frivolous Lawsuits, and Legitimate Needs — from er.educause.edu by Martin LaGrow

Excerpt:

Changes in legal requirements for IT accessibility have prompted some to pursue self-serving legal actions. To increase access to users of all abilities, colleges and universities should articulate their commitment to accessibility and focus on changing institutional culture.

 

The future of law and computational technologies: Two sides of the same coin — from law.mit.edu by Daniel Linna
Law and computation are often thought of as being two distinct fields. Increasingly, that is not the case. Dan Linna explores the ways a computational approach could help address some of the biggest challenges facing the legal industry.

Excerpt:

The rapid advancement of artificial intelligence (“AI”) introduces opportunities to improve legal processes and facilitate social progress. At the same time, AI presents an original set of inherent risks and potential harms. From a Law and Computational Technologies perspective, these circumstances can be broadly separated into two categories. First, we can consider the ethics, regulations, and laws that apply to technology. Second, we can consider the use of technology to improve the delivery of legal services, justice systems, and the law itself. Each category presents an unprecedented opportunity to use significant technological advancements to preserve and expand the rule of law.

For basic legal needs, access to legal services might come in the form of smartphones or other devices that are capable of providing users with an inventory of their legal rights and obligations, as well as providing insights and solutions to common legal problems. Better yet, AI and pattern matching technologies can help catalyze the development of proactive approaches to identify potential legal problems and prevent them from arising, or at least mitigate their risk.

We risk squandering abundant opportunities to improve society with computational technologies if we fail to proactively create frameworks to embed ethics, regulation, and law into our processes by design and default.

To move forward, technologists and lawyers must radically expand current notions of interdisciplinary collaboration. Lawyers must learn about technology, and technologists must learn about the law.

 

 

Considering AI in hiring? As its use grows, so do the legal implications for employers. — from forbes.com by Alonzo Martinez; with thanks to Paul Czarapata for his posting on Twitter on this

Excerpt:

As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.

Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.

But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.

 

Technology is increasingly being used to provide legal services, which demands a new breed of innovative lawyer for the 21st century. Law schools are launching specialist LL.M.s in response, giving students computing skills — from llm-guide.com by Seb Murray

Excerpts:

Junior lawyers at Big Law firms have long been expected to work grueling hours on manual and repetitive tasks like reviewing documents and doing due diligence. Increasingly, such work is being undertaken by machines – which can be faster, cheaper, and more accurate than humans. This is the world of legal technology – the use of technology to provide legal services.

The top law schools recognize the need to train not just excellent lawyers but tech-savvy ones too, who understand the application of technology and its impact on the legal market. They are creating specialist courses for those who want to be more involved with the technology used to deliver legal advice.

“Technology is changing the way we live, work and interact,” says Alejandro Touriño, co-director of the course. “This new reality demands a new breed of lawyers who can adapt to the emerging paradigm. An innovative lawyer in the 21st century needs not only to be excellent in law, but also in the sector where their clients operate and the technologies they deal with.” 

The rapid growth in Legal Tech LL.M. offerings reflects a need in the professional world. Indeed, law firms know they need to become digital businesses in order to attract and retain clients and prospective employees.

 

From DSC:
In case it’s helpful or interesting: a person interested in a legal career first needs to earn a Juris Doctor (J.D.) degree and then pass the bar exam. At that point, if they want to deepen their knowledge in a particular area or areas, they can go on to earn an LL.M. degree.

As in the world of higher ed and also in the corporate training area, I believe the legal field will need to move more toward the use of teams of specialists. Several members of those teams will NOT have law degrees. For example, technologists, programmers, user experience designers, etc. should be teaming up with lawyers more and more these days.

 

Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle

Excerpts (emphasis DSC):

Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

 

From DSC:
Those working or teaching within the legal realm — this one’s for you. But it’s also for the leadership of the C-Suites in our corporate world — as well as for all of those programmers, freelancers, engineers, and/or other employees working on AI within the corporate world.

By the way, and not to get all political here…but who’s to say what happens with our data when it’s being reviewed in Ukraine…?

 

Also see:

  • Opinion: AI for good is often bad — from wired.com by Mark Latonero
    Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
 

AI innovators should be listening to kids — from wired.com by Urs Gasser, executive director of the Berkman Klein Center for Internet & Society at Harvard University, the principal investigator of the Center’s Youth and Media project, and a professor at Harvard Law School.
Input from the next generation is crucial when it comes to navigating the challenges of new technologies.

Excerpts:

With another monumental societal transformation on the horizon—the rise of artificial intelligence—we have an opportunity to engage the power and imagination of youth to shape the world they will inherit. Many of us were caught off guard by the unintended consequences of the first wave of digital technologies, from mass surveillance to election hacking. But the disruptive power of the internet to date only sets the stage for the even more radical changes AI will produce in the coming decades.

Instead of waiting for the youth to respond to the next crisis, we should proactively engage them as partners in shaping our AI-entangled future.

Young people have a right to participate as we make critical choices that will determine what kind of technological world we leave for them and future generations. They also have unique perspectives to contribute as the first generation to grow up surrounded by AI shaping their education, health, social lives, leisure, and career prospects.

Youth have the most at stake, and they also have valuable perspectives and experiences to contribute. If we want to take control of our digital future and respond effectively to the disruptions new technology inevitably brings, we must listen to their voices.

 

Going to court without a lawyer is new normal for U.S. litigants — from msn.com by Gwen Everett

Excerpt:

A fair shot at justice is a bedrock value of the American legal system, yet litigants who represent themselves against attorneys are unlikely to win their cases or settle on beneficial terms, according to Bonnie Hough, an attorney at the Judicial Council of California, the rule-making arm of the state’s court system. This reinforces the reality that America is split into two camps — the haves and the have-no-lawyers.

 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has cut funding for scientific and technical research. The money must therefore come from the private sector, but investors also expect a certain return. That is a problem. You cannot schedule your R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working very hard without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they were working quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 

Explore the transformative power of education through the eyes of a dozen incarcerated men and women trying to earn college degrees – and a chance at new beginnings – from one of the country’s most rigorous prison education programs.

 

Also see:

  • College Behind Bars: The Necessity of Running A College Inside Prisons — from evolllution.com by Michael Budke
    “Our numbers for the College Inside program at Chemeketa Community College are even more striking. Since the program’s inception in 2007—we existed prior to the SCP grant—the recidivism rate for our 293 graduates is only 4.8%.”
 
© 2024 | Daniel Christian