10 jobs that are safe in an AI world — from linkedin.com by Kai-Fu Lee

Excerpts:

Teaching
AI will be a great tool for teachers and educational institutions, as it will help educators figure out how to personalize curriculum based on each student’s competence, progress, aptitude, and temperament. However, teaching will still need to be oriented around helping students figure out their interests, teaching students to learn independently, and providing one-on-one mentorship. These are tasks that can only be done by a human teacher. As such, there will still be a great need for human educators in the future.

Criminal defense law
Top lawyers will have nothing to worry about when it comes to job displacement. Reasoning across domains, winning the trust of clients, applying years of experience in the courtroom, and having the ability to persuade a jury are all examples of the cognitive complexities, strategies, and modes of human interaction that are beyond the capabilities of AI. However, a lot of paralegal and preparatory work like document review, analysis, creating contracts, handling small cases, packing cases, and coming up with recommendations can be done much better and more efficiently with AI. The costs of law make it worthwhile for AI companies to go after AI paralegals and AI junior lawyers, but not top lawyers.

 

From DSC:
In terms of teaching, I agree that while #AI will help personalize learning, there will still be a great need for human teachers, professors, and trainers. I also agree w/ my boss (and with some of the author’s viewpoints here, but not all) that many kinds of legal work will still need the human touch & thought processes. I diverge from his thinking in terms of scope — the need for human lawyers will go far beyond just those involved in criminal law.

 

Also see:

15 business applications for artificial intelligence and machine learning — from forbes.com

Excerpt:

Fifteen members of Forbes Technology Council discuss some of the latest applications they’ve found for AI/ML at their companies. Here’s what they had to say…


ABA doubles online learning hours — from the National Jurist (p. 6)

Excerpt:

Law schools will have the option of offering up to 30 hours of online learning in their J.D. programs, doubling the previous limit of 15, under a new measure approved by the ABA in August. Also, for the first time, first year students will be permitted to take online classes — as many as 10 hours. The ABA has allowed a small number of variances in the past for schools to offer more robust online offerings. Mitchell Hamline School of Law in St. Paul, Minn., launched the nation’s first hybrid J.D. program in 2015. Southwestern Law School in Los Angeles and Syracuse University College of Law in upstate New York have received ABA approval for similar programs.

Such programs have been applauded because they allow for more non-traditional students to get law school educations. Students are only required to be on campus for short periods of time. That means they don’t have to live near the law school.

 

From DSC:
In 1998, Davenport University Online (DUO) began offering 100% online-based courses. I joined them three years later and was part of the group that grew DUO until, by the time I left in March 2007, we were offering roughly 50% of the university’s total credit hours in a 100% online-based format. Again, that was 15-20 years ago. DUO was joined by many other online-based programs from other community colleges and universities.

So what gives w/ the legal education area? 

Well…the American Bar Association (ABA) comes to mind.

The ABA has been very restrictive about the use of online learning. Pressure must surely be mounting on the ABA to allow all kinds of variances in this area. Given the need for legal education to keep up with the exponential pace of technological innovation, the ABA has a responsibility to society to up its game and become far more responsive to the needs of law students.


Evaluating the impact of artificial intelligence on human rights — from today.law.harvard.edu by Carolyn Schmitt
Report from Berkman Klein Center for Internet & Society provides new foundational framework for considering risks and benefits of AI on human rights

Excerpt:

From using artificial intelligence (AI) to determine credit scores to using AI to determine whether a defendant or criminal may offend again, AI-based tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these instances impact human rights, such as the right to equality before the law, and the right to an education?

A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on ethical issues of AI, the BKC report is one of the first efforts to analyze the impacts of AI through a human rights lens, and proposes a new framework for thinking about the impact of AI on human rights. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.

“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.


Also see:

  • Morality in the Machines — from today.law.harvard.edu by Erick Trickey
    Researchers at Harvard’s Berkman Klein Center for Internet & Society are collaborating with MIT scholars to study driverless cars, social media feeds, and criminal justice algorithms, to make sure openness and ethics inform artificial intelligence.


How AI could help solve some of society’s toughest problems — from technologyreview.com by Charlotte Jee
Machine learning and game theory help Carnegie Mellon assistant professor Fei Fang predict attacks and protect people.

Excerpt:

Fei Fang has saved lives. But she isn’t a lifeguard, medical doctor, or superhero. She’s an assistant professor at Carnegie Mellon University, specializing in artificial intelligence for societal challenges.

At MIT Technology Review’s EmTech conference on Wednesday, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.


How AI can be a force for good — from science.sciencemag.org by Mariarosaria Taddeo & Luciano Floridi

Excerpts:

Invisibility and Influence
AI supports services, platforms, and devices that are ubiquitous and used on a daily basis. In 2017, the International Federation of Robotics suggested that by 2020, more than 1.7 million new AI-powered robots will be installed in factories worldwide. In the same year, the market research firm Juniper Research issued a report estimating that, by 2022, 55% of households worldwide will have a voice assistant, like Amazon Alexa.

As it matures and disseminates, AI blends into our lives, experiences, and environments and becomes an invisible facilitator that mediates our interactions in a convenient, barely noticeable way. While creating new opportunities, this invisible integration of AI into our environments poses further ethical issues. Some are domain-dependent. For example, trust and transparency are crucial when embedding AI solutions in homes, schools, or hospitals, whereas equality, fairness, and the protection of creativity and rights of employees are essential in the integration of AI in the workplace. But the integration of AI also poses another fundamental risk: the erosion of human self-determination due to the invisibility and influencing power of AI.

To deal with the risks posed by AI, it is imperative to identify the right set of fundamental ethical principles to inform the design, regulation, and use of AI and leverage it to benefit as well as respect individuals and societies. It is not an easy task, as ethical principles may vary depending on cultural contexts and the domain of analysis. This is a problem that the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems tackles with the aim of advancing public debate on the values and principles that should underpin ethical uses of AI.


Who’s to blame when a machine botches your surgery? — from qz.com by Robert Hart

Excerpt:

That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.


Google Cloud’s new AI chief is on a task force for AI military uses and believes we could monitor ‘pretty much the whole world’ with drones — from businessinsider.in by Greg Sandoval

Excerpt:

“We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds,” Moore said. “I’m not saying we’d want to do that, but there’s not a technology gap there where I think it’s actually too difficult to do. This is now practical.”

Google’s decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.

“It’s worrisome to note after the widespread internal dissent against Maven that Google would hire Andrew Moore,” said one former Google employee. “Googlers want less alignment with the military-industrial complex, not more. This hire is like a punch in the face to the over 4,000 Googlers who signed the Cancel Maven letter.”


Organizations Are Gearing Up for More Ethical and Responsible Use of Artificial Intelligence, Finds Study — from businesswire.com
Ninety-two percent of AI leaders train their technologists in ethics; 74 percent evaluate AI outcomes weekly, says report from SAS, Accenture Applied Intelligence, Intel, and Forbes Insights

Excerpt:

AI oversight is not optional

Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognize that oversight is not optional for these technologies. Nearly three-quarters (74 percent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 percent). Additionally, 43 percent of AI leaders shared that their organization has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 percent).
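The weekly review-and-override process the survey describes amounts to a human-in-the-loop gate: automated outputs below some confidence level get routed to a reviewer instead of being acted on automatically. As a rough sketch only (the 0.85 threshold and the record fields below are invented for illustration, not taken from the study):

```python
# Sketch of an oversight queue: model outputs with low confidence are
# flagged for human review instead of being accepted automatically.
# The 0.85 threshold and record fields are arbitrary illustrations.

REVIEW_THRESHOLD = 0.85

def triage(predictions):
    """Split predictions into auto-accepted items and a human review queue."""
    accepted, review_queue = [], []
    for item in predictions:
        if item["confidence"] >= REVIEW_THRESHOLD:
            accepted.append(item)
        else:
            review_queue.append(item)
    return accepted, review_queue

preds = [
    {"id": 1, "label": "approve", "confidence": 0.97},
    {"id": 2, "label": "deny", "confidence": 0.55},
    {"id": 3, "label": "approve", "confidence": 0.91},
]

accepted, queue = triage(preds)
print([p["id"] for p in accepted], [p["id"] for p in queue])  # → [1, 3] [2]
```

The organizations in the study that "augment or override questionable results" are, in effect, working the review queue on the right-hand side of this split.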


Do robots have rights? Here’s what 10 people and 1 robot have to say — from createdigital.org.au
When it comes to the future of technology, nothing is straightforward, and that includes the array of ethical issues that engineers encounter through their work with robots and AI.


Prudenti: Law schools facing new demands for innovative education — from libn.com by A. Gail Prudenti

Excerpt (emphasis DSC):

Law schools have always taught the law and the practice thereof, but in the 21st century that is not nearly enough to provide students with the tools to succeed.

Clients, particularly business clients, are not only looking for an “attorney” in the customary sense, but a strategic partner equipped to deal with everything from project management to metrics to process enhancement. Those demands present law schools with both an opportunity for and expectation of innovation in legal education.

At Hofstra Law, we are in the process of establishing a new Center for Applied Legal Technology and Innovation where law students will be taught to use current and emerging technology, and to apply those skills and expertise to provide cutting-edge legal services while taking advantage of interdisciplinary opportunities.

Our goal is to teach law students how to use technology to deliver legal services and to yield graduates who combine exceptional legal acumen with the skill and ability to travel comfortably among myriad disciplines. The lawyers of today—and tomorrow—must be more than just conversant with other professionals. Rather, they need to be able to collaborate with experts in other fields to serve the myriad and intertwined interests of the client.


Also see:

Workforce of the future: The competing forces shaping 2030 — from pwc.com

Excerpt (emphasis DSC):

We are living through a fundamental transformation in the way we work. Automation and ‘thinking machines’ are replacing human tasks and jobs, and changing the skills that organisations are looking for in their people. These momentous changes raise huge organisational, talent and HR challenges – at a time when business leaders are already wrestling with unprecedented risks, disruption and political and societal upheaval.

The pace of change is accelerating.

 


Graphic by DSC

 

Competition for the right talent is fierce. And ‘talent’ no longer means the same as ten years ago; many of the roles, skills and job titles of tomorrow are unknown to us today. How can organisations prepare for a future that few of us can define? How will your talent needs change? How can you attract, keep and motivate the people you need? And what does all this mean for HR?

This isn’t a time to sit back and wait for events to unfold. To be prepared for the future you have to understand it. In this report we look in detail at how the workplace might be shaped over the coming decade.


From DSC:

Peruse the titles of the articles in this document (which features articles from the last 1-2 years) with an eye on the topics and technologies addressed therein!

 

Artificial Intelligence (AI), virtual reality, augmented reality, robotics, drones, automation, bots, machine learning, NLP/voice recognition and personal assistants, the Internet of Things, facial recognition, data mining, and more. How these technologies roll out (and whether some of them should be rolling out at all) needs to be discussed and dealt with sooner rather than later, because the pace of change has itself changed. Look at those articles, compare the present against the last 500-1,000 years of history, and it becomes hard to deny that we are living in times when the trajectory of technological change is exponential.


The ABA and law schools need to be much more responsive and innovative — or society will end up suffering the consequences.

Daniel Christian


About Law2020: The Podcast
Last month we launched the Law2020 podcast, an audio companion to Law2020, our four-part series of articles about how artificial intelligence and similar emerging technologies are reshaping the practice and profession of law. The podcast episodes and featured guests are as follows:

  1. Access to Justice: Daniel Linna, Professor of Law in Residence and the Director of LegalRnD – The Center for Legal Services Innovation at Michigan State University College of Law.
  2. Legal Ethics: Megan Zavieh, ethics and state bar defense lawyer.
  3. Legal Research: Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How to Find Out Anything and The Internet Guide for the Legal Researcher.
  4. Legal Analytics: Andy Martens, SVP & Global Head, Legal Product and Editorial at Thomson Reuters.

The podcasts are short and lively, and we hope you’ll give them a listen. And if you haven’t done so already, we invite you to read the full feature stories over at the Law2020 website. Enjoy!

Listen to Law2020 Podcast

 

50 Twitter accounts lawyers should follow — from postali.com

Excerpt:

Running a successful law practice is about much more than being an excellent attorney. A law firm is a business, and those who stay informed on trends in legal marketing, business development and technology are primed to run their practice more efficiently.

Law firms are a competitive business. In order to stay successful, you need to stay informed. The industry trends can often move at lightning speed, and you want to be ahead of them.

Twitter is a great place for busy attorneys to stay informed. Many thought leaders in the legal industry are eager and willing to share their knowledge in digestible, 280-character tweets that lawyers on-the-go can follow.

We’ve rounded up some of the best Twitter accounts for lawyers (in no particular order). To save you even more time, we’ve also added all of these accounts to a Twitter List that you can follow with one click. (You can use some of the time you’ll save to follow Postali on Twitter as well.)

Click here to view the Twitter List of Legal Influencers.


From DSC:
I find Twitter to be an excellent source of learning, and it is one of the key parts of my own learning ecosystem. I’m not the only one. Check out these areas of Jane Hart’s annual top tools for learning.

Twitter is in the top-10 lists of learning tools whether you look at education, workplace learning, or personal and professional learning.



Also see/relevant:

  • Prudenti: Law schools facing new demands for innovative education — from libn.com
    Excerpt:
    Law schools have always taught the law and the practice thereof, but in the 21st century that is not nearly enough to provide students with the tools to succeed. Clients, particularly business clients, are not only looking for an “attorney” in the customary sense, but a strategic partner equipped to deal with everything from project management to metrics to process enhancement. Those demands present law schools with both an opportunity for and expectation of innovation in legal education.


Report: Accessibility in Digital Learning Increasingly Complex — from campustechnology.com by Dian Schaffhauser

Excerpt:

The Online Learning Consortium (OLC) has introduced a series of original reports to keep people in education up to date on the latest developments in the field of digital learning. The first report covers accessibility and addresses both K-12 and higher education. The series is being produced by OLC’s Research Center for Digital Learning & Leadership.

The initial report addresses four broad areas tied to accessibility:

  • The national laws governing disability and access and how they apply to online courses;
  • What legal cases exist to guide online course design and delivery in various educational settings;
  • The issues that emerge regarding online course access that might be unique to higher ed or to K-12, and which ones might be shared; and
  • What support online course designers need to generate accessible courses for learners across the education life span (from K-12 to higher education).


How artificial intelligence is transforming legal research — from abovethelaw.com by David Lat

Excerpt:

Technology and innovation are transforming the legal profession in manifold ways. According to Professor Richard Susskind, author of The Future of Law, “Looking 30 years ahead, I think it unimaginable that our legal systems will not undergo vast change.” Indeed, this revolution is already underway – and to serve their clients effectively and ethically, law firms must adapt to these changing realities.

One thing that remains unchanged, however, is the importance of legal research. In the words of Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How to Find Out Anything and The Internet Guide for the Legal Researcher:

As lawyers, you need to be on top of the current legal landscape. Legal research will allow you to advise your client on the standards of the law at this moment, whether they come from case law, statutes, or regulations.

The importance of legal research persists, but how it’s conducted is constantly advancing and evolving. Just as attorneys who used hard-copy books for all of their legal research would be amazed by online legal research services like Westlaw, attorneys using current services will be amazed by the research tools of tomorrow, powered by artificial intelligence and analytics.


Khan Academy’s Free LSAT Prep Program Draws Jeers, Cheers — from law.com by Karen Sloan
The highly anticipated program, created in conjunction with the organization that makes the law school entrance exam, is set to go live June 1.

Excerpt:

A major player in free online education is poised to release its Law School Admission Test prep program, but the traditional LSAT prep industry says it isn’t sweating the new competition. At least not yet.

The council announced its collaboration with Khan Academy in March 2017, citing a desire to help aspiring law students for whom private test prep is financially out of reach. Private LSAT prep services cost anywhere from $200 or more for instructional videos to $1,500 and upward for in-person classes.

 

ABA set to approve more online credits for law students — from law.com by Karen Sloan
Supporters say allowing J.D. students to take up to one-third of their credits online, including some during their first year, is validation that distance education can work in law schools.

 

7 things lawyers should know about Artificial Intelligence — from abovethelaw.com by Amy Larson
AI is here to make practicing law easier, so keep these things in mind if you’re thinking of implementing it in your practice. 

Excerpt:

6. Adopting AI means embracing change.
If you intend to implement AI technologies into your legal organization, you must be ready for change. Not only will your processes and workflows need to change to incorporate AI into the business, but you’ll also likely be working with a whole new set of people. Whether they are part of your firm or outside consultants, expect to collaborate with data analysts, process engineers, pricing specialists, and other data-driven professionals.



Addendum on 5/18/18:


 

  • Technology & Innovation: Trends Transforming The Legal Industry — from livelaw.in by Richa Kachhwaha
    Excerpt:
    Globally, the legal industry is experiencing an era of transformation. The changes are unmistakable and diverse. Paperwork and data management- long practiced by lawyers- is being replaced by software solutions; trans-national boundaries are legally shrinking; economic forces are re-defining law practices; innovative in-house law departments are driving significant value creation; consumer trends have begun to dominate the legal landscape; …


The Law Firm Disrupted: A Kirkland & Ellis Law School? Crystal Ball Gazing on the Future of Legal Ed — by Roy Strom
What blue-sky thinking about the future of legal education might tell us about the relationship between Big Law and legal institutions of higher learning.

Excerpt:

The speech, which is worth watching here at the 4 hour and 41 minute mark, was a clear-eyed look at both the current state and the future possibilities of “innovation” in the legal education market. Rodriguez comes to the conclusion that law schools that have focused on tech training or other skills aimed at changing legal services delivery have yet to “move the needle” on demand for their students, rankings for their own schools or their own economic predicament.

That is in large part due to at least 10 “conditions” that currently exist and are limiting legal education innovation. I won’t list them all here, but they include the formal structure of offering only JD and LLM degrees; a university schedule that was created more than 100 years ago; and, of course, accreditation and credentialing requirements.

One solution offered by Rodriguez: “Blue-sky thinking.”


Welcome to Law2020: Artificial Intelligence and the Legal Profession — from abovethelaw.com by David Lat and Brian Dalton
What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world?

Excerpt:

Artificial intelligence has been declared “[t]he most important general-purpose technology of our era.” It should come as no surprise to learn that AI is transforming the legal profession, just as it is changing so many other fields of endeavor.

What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world? Will AI automate the work of attorneys — or will it instead augment, helping lawyers to work more efficiently, effectively, and ethically?


How artificial intelligence is transforming the world — from brookings.edu by Darrell M. West and John R. Allen

Summary

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents

I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.


Seven Artificial Intelligence Advances Expected This Year  — from forbes.com

Excerpt:

Artificial intelligence (AI) has had a variety of targeted uses in the past several years, including self-driving cars. Recently, California changed the law that required driverless cars to have a safety driver. Now that AI is getting better and able to work more independently, what’s next?


Google Cofounder Sergey Brin Warns of AI’s Dark Side — from wired.com by Tom Simonite

Excerpt (emphasis DSC):

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

 

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.


Demystifying Artificial Intelligence (AI) — from legalsolutions.thomsonreuters.com
A legal professional’s 7-step guide through the noise

Excerpt:

AI IS NOT ONE THING
AI is not a single technology. Really, it’s a number of different technologies applied in different functions through various applications.

Some examples include:
Natural language processing (NLP), which is behind many AI applications in the legal industry whose work product is, as we know, text-heavy by nature. NLP is used to translate plain-English search terms into legal searches on research platforms such as Thomson Reuters Westlaw, and also to analyze language in documents to make sense of them for ediscovery or due diligence reviews.
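In very reduced form, the plain-English-to-legal-search translation described above can be pictured as mapping free text onto a boolean "terms and connectors" style query. The stopword list, synonym table, and connector syntax below are invented for illustration; they are not Westlaw's actual implementation:

```python
# Toy illustration: turn a plain-English question into a boolean
# terms-and-connectors style legal search. The phrase table and
# connector syntax here are hypothetical, for illustration only.

STOPWORDS = {"what", "is", "the", "a", "an", "for", "of", "in", "my", "can", "i"}

# Hypothetical mapping from everyday wording to legal terms of art.
LEGAL_SYNONYMS = {
    "fired": '"wrongful termination"',
    "deposit": '"security deposit"',
    "crash": '"motor vehicle accident"',
}

def to_legal_query(question: str) -> str:
    """Keep meaningful words, swap in legal terms of art, join with AND."""
    words = [w.strip("?.,!").lower() for w in question.split()]
    terms = []
    for w in words:
        if not w or w in STOPWORDS:
            continue
        terms.append(LEGAL_SYNONYMS.get(w, w))
    return " AND ".join(terms)

print(to_legal_query("Can I sue my landlord for my deposit?"))
# → sue AND landlord AND "security deposit"
```

Production systems replace the lookup table with trained language models, but the underlying job (normalizing everyday phrasing into searchable legal concepts) is the same.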

Logical AI/inferencing is employed to build decision trees in systems such as TurboTax®. This guides users through questionnaires resulting in legal answers or drafts of legal documents. Human expertise is built into the logical structure of these systems.
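The decision-tree style of logical AI described above can be sketched as a small branching questionnaire where each node either asks a yes/no question or returns a draft answer. The questions, dollar threshold, and answers below are made up for illustration and are not real legal guidance:

```python
# Minimal sketch of a logical-AI questionnaire: each node either asks a
# yes/no question and branches, or is a leaf holding a draft answer.
# All questions and answers here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    text: str                      # question, or final answer if a leaf
    yes: Optional["Node"] = None
    no: Optional["Node"] = None

    @property
    def is_leaf(self) -> bool:
        return self.yes is None and self.no is None

def run(node: Node, answers: dict) -> str:
    """Walk the tree using pre-supplied yes/no answers keyed by question text."""
    while not node.is_leaf:
        node = node.yes if answers[node.text] else node.no
    return node.text

# Hypothetical intake tree for a small-claims question.
tree = Node(
    "Is the amount in dispute under $10,000?",
    yes=Node(
        "Did the dispute arise in this state?",
        yes=Node("Draft: file in small claims court."),
        no=Node("Draft: consult counsel on jurisdiction."),
    ),
    no=Node("Draft: small claims court is unavailable; consult counsel."),
)

print(run(tree, {
    "Is the amount in dispute under $10,000?": True,
    "Did the dispute arise in this state?": True,
}))
# → Draft: file in small claims court.
```

As the excerpt notes, the human expertise lives in the structure of the tree itself; the software only walks it.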

This only scratches the surface of the capabilities of AI. All of the functions and technologies identified below are starting to be used in the legal space, sometimes in combination with one another.

Technologies

  • Logical AI/Inferencing
  • Machine Learning
  • Natural Language Processing (NLP)
  • Robotics
  • Speech
  • Vision

Functions

  • Expertise Automation
  • Image Recognition & Classification
  • Question Answering
  • Robotics
  • Speech (Speech to Text, Text to Speech)
  • Text Analytics (Extraction, Classification)
  • Text Generation
  • Translation


The meaning of artificial intelligence for legal researchers — from legalsolutions.thomsonreuters.com

Excerpt:

Many legal professionals currently use artificial intelligence (AI) in their work, although they may not always realize it. Even among the most tech-savvy attorneys, questions remain as to what AI means for the legal profession today – and in the future.

Three of the most common questions include:

  • What is the definition of AI and how does it differ from other types of technology?
  • How will advances in AI change the way legal professionals work in the future?

And, perhaps most importantly:

  • How do you know when AI technology can be trusted in the legal space?

In this post, Thomson Reuters Westlaw shares answers to these questions based on the perspectives of our experienced attorney-editors and technology experts.



While the following isn’t necessarily related to AI, it is related to legal education and may be helpful to those who will be trying to pass the Bar Exam:


 

You Can Beat The Bar Exam. Here’s How. — from nationaljurist.com by Maggy Mahalick

Excerpt:

Use All Your Resources
You are not the first person to study for this exam. You are not in this alone. You don’t have to reinvent the wheel for everything. Save yourself time and energy by using resources that are already out there for you.

The National Conference of Bar Examiners (“NCBE”) has countless free and paid resources on their website alone. They provide a sample of past Multistate Essay Exam (“MEE”) questions, along with the analyses of the correct answers. They also provide a limited number of sample multiple-choice Multistate Bar Exam (“MBE”) questions with the correct answer choices for free. They provide more questions with answer explanations for a fee.

You can also sign up for a bar prep program that uses past retired questions from previous bar exams. The NCBE licenses these questions out to some companies to use instead of simulated questions. Real questions will give you an idea of what the exam will look and feel like.

If you like using flashcards but don’t have the time or patience to make your own, there are several websites that provide online flashcards for you as well as websites that allow you to make your own deck online. For example, AdaptiBar has a set of online flashcards that you can add your own notes to.


Europe divided over robot ‘personhood’ — from politico.eu by Janosch Delcker

Excerpt:

BERLIN — Think lawsuits involving humans are tricky? Try taking an intelligent robot to court.

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

© 2025 | Daniel Christian