Beijing to judge every resident based on behavior by end of 2020 — from bloomberg.com

  • China capital plans ‘social credit’ system by end of 2020
  • Citizens with poor scores will be ‘unable to move’ a step

Excerpt:

China’s plan to judge each of its 1.3 billion people based on their social behavior is moving a step closer to reality, with Beijing set to adopt a lifelong points program by 2021 that assigns personalized ratings for each resident.

The capital city will pool data from several departments to reward and punish some 22 million citizens based on their actions and reputations by the end of 2020, according to a plan posted on the Beijing municipal government’s website on Monday. Those with better so-called social credit will get “green channel” benefits while those who violate laws will find life more difficult.

The Beijing project will improve blacklist systems so that those deemed untrustworthy will be “unable to move even a single step,” according to the government’s plan.

 

From DSC:
Matthew 18:21-35 comes to mind big time here! I’m glad the LORD isn’t like this…we would all be in trouble.

 

 

Mama Mia It’s Sophia: A Show Robot Or Dangerous Platform To Mislead? — from forbes.com by Noel Sharkey

Excerpts:

A collective eyebrow was raised by the AI and robotics community when the robot Sophia was given Saudi Arabian citizenship in 2017. The AI sharks were already circling as Sophia’s fame spread with worldwide media attention. Were they just jealous buzz-kills or is something deeper going on? Sophia has gripped the public imagination with its interesting and fun appearances on TV and on high-profile conference platforms.

Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike. In an AI-hungry world where decisions about the application of the technologies will impact significantly on our lives, Sophia’s creators may have crossed a line. What might the negative consequences be? To get answers, we need to place Sophia in the context of earlier show robots.

 

 

A dangerous path for our rights and security
For me, the biggest problem with the hype surrounding Sophia is that we have entered a critical moment in the history of AI where informed decisions need to be made. AI is sweeping through the business world and being delegated decisions that impact significantly on people’s lives, from mortgage and loan applications to job interviews, to prison sentences and bail guidance, to transport and delivery services, to medicine and care.

It is vitally important that our governments and policymakers are strongly grounded in the reality of AI at this time and are not misled by hype, speculation, and fantasy. It is not clear how much the Hanson Robotics team are aware of the dangers that they are creating by appearing on international platforms with government ministers and policymakers in the audience.

 

 

Ahead of the Curve: Coffee Shop Clinic — from law.com by Karen Sloan (sorry…an account/sign-in is required with this article)
At William and Mary Law School’s Military Mondays program, veterans can get benefits assistance at the local Starbucks

Excerpt:

Thus far, Military Mondays has helped 369 veterans with benefit claims, with five or six sessions each semester. It has become more comprehensive as well. A representative from the Virginia Department of Veterans Services typically attends now, meaning veterans can often file benefit claims on the spot. The state veterans services workers can also look through a veteran’s file to find necessary information, Stone told me. “It has gone really well,” he said. “It has not only helped a lot of veterans, but it has provided students a great educational experience by allowing them to meet face-to-face with veterans.”

 

 

Caselaw Access Project (CAP) Launches API and Bulk Data Service — from the Library Innovation Lab at the Harvard Law School Library by Kelly Fitzpatrick

Excerpt:

[On 10/29/18] the Library Innovation Lab at the Harvard Law School Library is excited to announce the launch of its Caselaw Access Project (CAP) API and bulk data service, which puts the full corpus of published U.S. case law online for anyone to access for free.

Between 2013 and 2018, the Library digitized over 40 million pages of U.S. court decisions, transforming them into a dataset covering almost 6.5 million individual cases. The CAP API and bulk data service puts this important dataset within easy reach of researchers, members of the legal community and the general public.

To learn more about the project, the data and how to use the API and bulk data service, please visit case.law.
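The announcement doesn’t include sample code, so here is a minimal sketch of how a researcher might query the CAP API from Python. The endpoint path (`/v1/cases/`), query parameters, and response fields (`results`, `name_abbreviation`) are assumptions drawn from the project’s public documentation; double-check them at case.law before relying on this.

```python
from urllib.parse import urlencode

CAP_BASE = "https://api.case.law/v1/cases/"

def build_case_search_url(search_term, page_size=5, full_case=False):
    """Build a CAP /cases/ query URL; full_case=True asks for full
    case text, which may require an API key for some jurisdictions."""
    params = {"search": search_term, "page_size": page_size}
    if full_case:
        params["full_case"] = "true"
    return CAP_BASE + "?" + urlencode(params)

def case_names(response_json):
    """Pull short case names out of a CAP-style JSON response."""
    return [case["name_abbreviation"] for case in response_json.get("results", [])]

# A trimmed-down stand-in for the JSON the API returns:
sample = {
    "count": 2,
    "results": [
        {"id": 1, "name_abbreviation": "Marbury v. Madison"},
        {"id": 2, "name_abbreviation": "Brown v. Board of Education"},
    ],
}

print(build_case_search_url("habeas corpus", page_size=2))
print(case_names(sample))
```

Because metadata queries are open while bulk full-text access may require a key, the sketch only builds the request and parses the documented response shape rather than fetching live data.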

 

 

Also see:

 

 

 

Teaching technology today: One law school’s innovative offerings — from abovethelaw.com by David Lat
Lawyers of the future, regardless of practice area, need to be proficient in legal technology.

Excerpts (emphasis DSC):

Technology is transforming the practice and profession of law — and legal education must evolve accordingly. Across the country, law schools are launching and expanding research centers, clinics, and course offerings focused on legal technology.

Earlier this month, I spoke with Judge Prudenti again, to check in on her efforts. I asked her: Why is legal-tech proficiency so important today?

“I’m a true believer,” Judge Prudenti told me. “I have seen, up close and personal, how technology has changed the legal marketplace and the practice of law — in the private sector, in the public sector, and in the courtroom.”

In private law firms, large or small, lawyers are engaged in e-discovery, e-filing, and e-billing — as a necessity. In the public sector, technology is being used to bridge the justice gap, helping to deliver legal services to the unrepresented. And in the courtroom, Judge Prudenti’s former domain, technology has been revolutionary, with touch-screen monitors, iPads, and video screens being used to present and review evidence.

“Lawyers of the future, regardless of practice area, need to be proficient in legal technology,” Judge Prudenti said — which is why Hofstra Law has been focusing so intensely on its technology offerings.

 

Addendum on 10/31/18:

 

 

AI Now Law and Policy Reading List — from medium.com by the AI Now Institute

Excerpt:

Data-driven technologies are widely used in society to make decisions that affect many critical aspects of our lives, from health, education, employment, and criminal justice to economic, social and political norms. Their varied applications, uses, and consequences raise a number of unique and complex legal and policy concerns. As a result, it can be hard to figure out not only how these systems work but what to do about them.

As a starting point, AI Now offers this Law and Policy Reading List tailored for those interested in learning about key concepts, debates, and leading analysis on law and policy issues related to artificial intelligence and other emerging data-driven technologies.

 

New game lets players train AI to spot legal issues — from abajournal.com by Jason Tashea

Excerpt:

Got a free minute? There’s a new game that will help train an artificial intelligence model to spot legal issues and help close the access-to-justice gap.

Called Learned Hands—yes, it’s a pun—the game takes 75,000 legal questions posted on Reddit dealing with family, consumer, criminal and other legal issues and asks the user to determine what the issue is.

While the game may conjure up nightmares of the first year of law school for many lawyers, David Colarusso says it’s for a good cause.

“It’s an opportunity for attorneys to take their downtime to train machine learning algorithms to help access-to-justice issues,” says Colarusso, director of Suffolk University Law School’s Legal Innovation and Technology (LIT) Lab and partner on this project with the Stanford Legal Design Lab.
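The core idea behind the game, that crowdsourced labels become training data for an issue-spotting model, can be sketched in a few lines. Everything below is hypothetical: the toy questions, labels, and the crude bag-of-words scorer are stand-ins for illustration, not the actual Learned Hands pipeline.

```python
from collections import Counter, defaultdict

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

def train_issue_scorer(labeled_questions):
    """Count how often each token shows up under each issue label."""
    token_counts = defaultdict(Counter)
    for text, labels in labeled_questions:
        for label in labels:
            token_counts[label].update(tokenize(text))
    return token_counts

def spot_issues(token_counts, question, min_hits=3):
    """Flag every issue whose training vocabulary overlaps the
    question in at least min_hits word positions."""
    tokens = tokenize(question)
    return sorted(
        label
        for label, counts in token_counts.items()
        if sum(counts[t] > 0 for t in tokens) >= min_hits
    )

# Hypothetical stand-ins for the crowdsourced question/label pairs:
training = [
    ("My landlord will not return my security deposit", ["housing"]),
    ("Can my landlord evict me without notice?", ["housing"]),
    ("My ex refuses to pay child support", ["family"]),
    ("How do I file for divorce and custody?", ["family"]),
]

model = train_issue_scorer(training)
print(spot_issues(model, "My landlord is trying to evict me"))  # ['housing']
```

A real system would use far richer features and a proper classifier, but the shape is the same: the more labeled questions volunteers contribute, the better the model gets at flagging issues in new stories.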

 

From learnedhands.law.stanford.edu/legalIssues

When you play the game, you’ll be spotting if different legal issues are present in people’s stories. Some of these issues will be high level categories, and others will be more specific issues.

Here are the high level categories:

 

 

California enacts first law regulating Internet of Things devices — from iplawtrends.com by David Rice with thanks to my friend Justin Wagner for posting this on LinkedIn

Excerpt:

California has enacted the nation’s first law regulating Internet of Things (IoT) devices, which was signed by Governor Jerry Brown on September 28, 2018. IoT refers to the rapidly-expanding world of internet-connected objects such as home security systems, video monitors, enterprise devices that track packages and vehicles, health monitors, connected cars, smart city devices that manage traffic congestion, and smart meters for utilities.

IoT devices promise to bring efficiencies to a broad range of industries and improve lives. But these devices also collect vast troves of information, and this raises data security and privacy concerns. In 2016, a distributed denial of service (DDoS) attack on the internet infrastructure company Dyn was powered by millions of hacked IoT devices such as web cameras and connected refrigerators. Hackers have used baby monitors to view inside homes, with a prominent recent example being the widely-deployed Mi-Cam baby monitor. If hackers are able to get into critical IoT systems in first responder networks, then there could be public safety risks.

 

The law deems either of two approaches to be a reasonable security feature: giving each IoT device a unique preprogrammed password, or requiring the user to generate a new means of authentication before access to the device is granted for the first time.
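A toy sketch of the two compliance routes the law describes, a unique factory password per unit plus a forced credential change before first use, might look like this (the class and method names are invented for illustration; real device firmware obviously differs):

```python
import secrets

class IoTDevice:
    """Toy model of California SB-327's 'reasonable security feature'
    options: a unique preprogrammed password per unit, and no access
    until the owner sets their own credential."""

    def __init__(self):
        # Route 1: every unit gets its own factory password --
        # never one shared default across the product line.
        self.factory_password = secrets.token_urlsafe(12)
        self.user_password = None

    def authenticate(self, password):
        # Route 2: until the owner completes setup, the device
        # grants nothing -- even the factory password won't log in.
        if self.user_password is None:
            return False
        return secrets.compare_digest(password, self.user_password)

    def complete_setup(self, factory_password, new_password):
        """First-boot flow: prove possession of this unit's factory
        password, then replace it with a user-chosen credential."""
        if not secrets.compare_digest(factory_password, self.factory_password):
            raise PermissionError("wrong factory password")
        if len(new_password) < 8:
            raise ValueError("choose a longer password")
        self.user_password = new_password

device = IoTDevice()
print(device.authenticate(device.factory_password))  # False: setup not done
device.complete_setup(device.factory_password, "correct-horse-battery")
print(device.authenticate("correct-horse-battery"))  # True
```

The 2016 Dyn attack worked precisely because millions of devices shared one well-known default password; per-unit credentials like this are what the statute is aiming at.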

 

 

 

10 jobs that are safe in an AI world — from linkedin.com by Kai-Fu Lee

Excerpts:

Teaching
AI will be a great tool for teachers and educational institutions, as it will help educators figure out how to personalize curriculum based on each student’s competence, progress, aptitude, and temperament. However, teaching will still need to be oriented around helping students figure out their interests, teaching students to learn independently, and providing one-on-one mentorship. These are tasks that can only be done by a human teacher. As such, there will still be a great need for human educators in the future.

Criminal defense law
Top lawyers will have nothing to worry about when it comes to job displacement. Reasoning across domains, winning the trust of clients, applying years of experience in the courtroom, and having the ability to persuade a jury are all examples of the cognitive complexities, strategies, and modes of human interaction that are beyond the capabilities of AI. However, a lot of paralegal and preparatory work like document review, analysis, creating contracts, handling small cases, packing cases, and coming up with recommendations can be done much better and more efficiently with AI. The costs of law make it worthwhile for AI companies to go after AI paralegals and AI junior lawyers, but not top lawyers.

 

From DSC:
In terms of teaching, I agree that while #AI will help personalize learning, there will still be a great need for human teachers, professors, and trainers. I also agree w/ my boss (and with some of the author’s viewpoints here, but not all) that many kinds of legal work will still need the human touch & thought processes. I diverge from his thinking in terms of scope — the need for human lawyers will go far beyond just those involved in criminal law.

 

Also see:

15 business applications for artificial intelligence and machine learning — from forbes.com

Excerpt:

Fifteen members of Forbes Technology Council discuss some of the latest applications they’ve found for AI/ML at their companies. Here’s what they had to say…

 

 

 

ABA doubles online learning hours — page 6 of 40 from the National Jurist

Excerpt:

Law schools will have the option of offering up to 30 hours of online learning in their J.D. programs, doubling the previous limit of 15, under a new measure approved by the ABA in August. Also, for the first time, first year students will be permitted to take online classes — as many as 10 hours. The ABA has allowed a small number of variances in the past for schools to offer more robust online offerings. Mitchell Hamline School of Law in St. Paul, Minn., launched the nation’s first hybrid J.D. program in 2015. Southwestern Law School in Los Angeles and Syracuse University College of Law in upstate New York have received ABA approval for similar programs.

Such programs have been applauded because they allow for more non-traditional students to get law school educations. Students are only required to be on campus for short periods of time. That means they don’t have to live near the law school.

 

From DSC:
In 1998, Davenport University Online began offering 100% online-based courses. I joined them 3 years later, and I was part of a group of people who took DUO to the place where, by the time I left in March 2007, we were offering ~50% of the total credit hours of the university in a 100% online-based format. Again, that was 15-20 years ago. DUO was joined by many other online-based programs from other community colleges and universities.

So what gives w/ the legal education area? 

Well…the American Bar Association (ABA) comes to mind.

The ABA has been very restrictive on the use of online learning. Mounting pressure surely must be on the ABA to allow all kinds of variances in this area. Given the need for legal education to better deal with the exponential pace of technological innovation, the ABA has a responsibility to society to up its game and become far more responsive to the needs of law students.

 

 

Evaluating the impact of artificial intelligence on human rights — from today.law.harvard.edu by Carolyn Schmitt
Report from Berkman Klein Center for Internet & Society provides new foundational framework for considering risks and benefits of AI on human rights

Excerpt:

From using artificial intelligence (AI) to determine credit scores to using AI to determine whether a defendant or criminal may offend again, AI-based tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these instances impact human rights, such as the right to equality before the law, and the right to an education?

A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on ethical issues of AI, the BKC report is one of the first efforts to analyze the impacts of AI through a human rights lens, and proposes a new framework for thinking about the impact of AI on human rights. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.

“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.

 

 

Also see:

  • Morality in the Machines — from today.law.harvard.edu by Erick Trickey
    Researchers at Harvard’s Berkman Klein Center for Internet & Society are collaborating with MIT scholars to study driverless cars, social media feeds, and criminal justice algorithms, to make sure openness and ethics inform artificial intelligence.

 

 

 

How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.

 

 

How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the company Juniper Networks issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.

 

 

Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.

 

 

Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”

 

 

Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).

 

 

 

Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.

 

 

 

Prudenti: Law schools facing new demands for innovative education — from libn.com by A. Gail Prudenti

Excerpt (emphasis DSC):

Law schools have always taught the law and the practice thereof, but in the 21st century that is not nearly enough to provide students with the tools to succeed.

Clients, particularly business clients, are not only looking for an “attorney” in the customary sense, but a strategic partner equipped to deal with everything from project management to metrics to process enhancement. Those demands present law schools with both an opportunity for and expectation of innovation in legal education.

At Hofstra Law, we are in the process of establishing a new Center for Applied Legal Technology and Innovation where law students will be taught to use current and emerging technology, and to apply those skills and expertise to provide cutting-edge legal services while taking advantage of interdisciplinary opportunities.

Our goal is to teach law students how to use technology to deliver legal services and to yield graduates who combine exceptional legal acumen with the skill and ability to travel comfortably among myriad disciplines. The lawyers of today—and tomorrow—must be more than just conversant with other professionals. Rather, they need to be able to collaborate with experts in other fields to serve the myriad and intertwined interests of the client.

 

 

Also see:

Workforce of the future: The competing forces shaping 2030 — from pwc.com

Excerpt (emphasis DSC):

We are living through a fundamental transformation in the way we work. Automation and ‘thinking machines’ are replacing human tasks and jobs, and changing the skills that organisations are looking for in their people. These momentous changes raise huge organisational, talent and HR challenges – at a time when business leaders are already wrestling with unprecedented risks, disruption and political and societal upheaval.

The pace of change is accelerating.

 


Graphic by DSC

 

Competition for the right talent is fierce. And ‘talent’ no longer means the same as ten years ago; many of the roles, skills and job titles of tomorrow are unknown to us today. How can organisations prepare for a future that few of us can define? How will your talent needs change? How can you attract, keep and motivate the people you need? And what does all this mean for HR?

This isn’t a time to sit back and wait for events to unfold. To be prepared for the future you have to understand it. In this report we look in detail at how the workplace might be shaped over the coming decade.

 

 

 

From DSC:

Peruse the titles of the articles in this document (which features articles from the last 1-2 years) with an eye on the topics and technologies addressed therein!

 

Artificial Intelligence (AI), virtual reality, augmented reality, robotics, drones, automation, bots, machine learning, NLP/voice recognition and personal assistants, the Internet of Things, facial recognition, data mining, and more. How these technologies roll out — and whether some of them should be rolling out at all — needs to be discussed and dealt with sooner rather than later. That’s because the pace of change has itself changed. If you can look at those articles — with an eye on the last 500-1000 years or so for comparison — and say that we aren’t living in times where the trajectory of technological change is exponential, then either you or I don’t know the meaning of that word.

 

 

 

 

The ABA and law schools need to be much more responsive and innovative — or society will end up suffering the consequences.

Daniel Christian

 

 

About Law2020: The Podcast
Last month we launched the Law2020 podcast, an audio companion to Law2020, our four-part series of articles about how artificial intelligence and similar emerging technologies are reshaping the practice and profession of law. The podcast episodes and featured guests are as follows:

  1. Access to Justice: Daniel Linna, Professor of Law in Residence and the Director of LegalRnD – The Center for Legal Services Innovation at Michigan State University College of Law.
  2. Legal Ethics: Megan Zavieh, ethics and state bar defense lawyer.
  3. Legal Research: Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How To Find Out Anything and The Internet Guide for the Legal Researcher.
  4. Legal Analytics: Andy Martens, SVP & Global Head Legal Product and Editorial at Thomson Reuters.

The podcasts are short and lively, and we hope you’ll give them a listen. And if you haven’t done so already, we invite you to read the full feature stories over at the Law2020 website. Enjoy!

Listen to Law2020 Podcast

 

50 Twitter accounts lawyers should follow — from postali.com

Excerpt:

Running a successful law practice is about much more than being an excellent attorney. A law firm is a business, and those who stay informed on trends in legal marketing, business development and technology are primed to run their practice more efficiently.

Law firms are a competitive business. In order to stay successful, you need to stay informed. The industry trends can often move at lightning speed, and you want to be ahead of them.

Twitter is a great place for busy attorneys to stay informed. Many thought leaders in the legal industry are eager and willing to share their knowledge in digestible, 280-character tweets that lawyers on-the-go can follow.

We’ve rounded up some of the best Twitter accounts for lawyers (in no particular order). To save you even more time, we’ve also added all of these accounts to a Twitter List that you can follow with one click. (You can use some of the time you’ll save to follow Postali on Twitter as well.)

Click here to view the Twitter List of Legal Influencers.

 

 

From DSC:
I find Twitter to be an excellent source of learning, and it is one of the key parts of my own learning ecosystem. I’m not the only one. Check out these areas of Jane Hart’s annual top tools for learning.

Twitter is in the top 10 lists of learning tools whether you are looking at education, workplace learning, or personal and professional learning.

 

 

 


Also see/relevant:

  • Prudenti: Law schools facing new demands for innovative education — from libn.com
    Excerpt:
    Law schools have always taught the law and the practice thereof, but in the 21st century that is not nearly enough to provide students with the tools to succeed. Clients, particularly business clients, are not only looking for an “attorney” in the customary sense, but a strategic partner equipped to deal with everything from project management to metrics to process enhancement. Those demands present law schools with both an opportunity for and expectation of innovation in legal education.

 

 

 
© 2024 | Daniel Christian