The future of law and computational technologies: Two sides of the same coin — from law.mit.edu by Daniel Linna
Law and computation are often thought of as being two distinct fields. Increasingly, that is not the case. Dan Linna explores the ways a computational approach could help address some of the biggest challenges facing the legal industry.

Excerpt:

The rapid advancement of artificial intelligence (“AI”) introduces opportunities to improve legal processes and facilitate social progress. At the same time, AI presents an original set of inherent risks and potential harms. From a Law and Computational Technologies perspective, these circumstances can be broadly separated into two categories. First, we can consider the ethics, regulations, and laws that apply to technology. Second, we can consider the use of technology to improve the delivery of legal services, justice systems, and the law itself. Each category presents an unprecedented opportunity to use significant technological advancements to preserve and expand the rule of law.

For basic legal needs, access to legal services might come in the form of smartphones or other devices that are capable of providing users with an inventory of their legal rights and obligations, as well as providing insights and solutions to common legal problems. Better yet, AI and pattern matching technologies can help catalyze the development of proactive approaches to identify potential legal problems and prevent them from arising, or at least mitigate their risk.

We risk squandering abundant opportunities to improve society with computational technologies if we fail to proactively create frameworks to embed ethics, regulation, and law into our processes by design and default.

To move forward, technologists and lawyers must radically expand current notions of interdisciplinary collaboration. Lawyers must learn about technology, and technologists must learn about the law.
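
From DSC:
To make the "pattern matching" idea above a bit more concrete, here is a minimal sketch, in Python, of a rule-based issue spotter that flags common legal topics in a plain-language description. The topic categories and keyword patterns are entirely my own illustrative assumptions, not anything from Linna's article or any particular product.

```python
import re

# Hypothetical keyword patterns mapping everyday language to broad legal topics.
# The categories and patterns below are illustrative assumptions only.
ISSUE_PATTERNS = {
    "housing / eviction": [r"\bevict(ed|ion)?\b", r"\blandlord\b", r"\bsecurity deposit\b"],
    "employment": [r"\bfired\b", r"\bovertime\b", r"\bwrongful termination\b"],
    "consumer debt": [r"\bdebt collector\b", r"\bgarnish(ed|ment)\b", r"\bcollection agency\b"],
}

def spot_issues(description: str) -> list[str]:
    """Return the legal topics whose patterns appear in a plain-language description."""
    text = description.lower()
    return [topic
            for topic, patterns in ISSUE_PATTERNS.items()
            if any(re.search(pattern, text) for pattern in patterns)]

if __name__ == "__main__":
    story = "My landlord kept my security deposit and is threatening to evict me."
    print(spot_issues(story))  # -> ['housing / eviction']
```

Real systems would go well beyond keyword matching, but even this simple shape hints at how a phone app could point someone toward their rights and obligations before a problem escalates.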

 

 

Considering AI in hiring? As its use grows, so do the legal implications for employers. — from forbes.com by Alonzo Martinez; with thanks to Paul Czarapata for posting this on Twitter

Excerpt:

As employers grapple with a widespread labor shortage, more are turning to artificial intelligence tools in their search for qualified candidates.

Hiring managers are using increasingly sophisticated AI solutions to streamline large parts of the hiring process. The tools scrape online job boards and evaluate applications to identify the best fits. They can even stage entire online interviews and scan everything from word choice to facial expressions before recommending the most qualified prospects.

But as the use of AI in hiring grows, so do the legal issues surrounding it. Critics are raising alarms that these platforms could lead to discriminatory hiring practices. State and federal lawmakers are passing or debating new laws to regulate them. And that means organizations that implement these AI solutions must not only stay abreast of new laws, but also look at their hiring practices to ensure they don’t run into legal trouble when they deploy them.
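
From DSC:
For readers who haven't seen these tools up close, here is a minimal sketch, in Python, of the basic mechanics the article describes: scoring applications against a job posting's keywords and ranking the candidates. The keywords, candidates, and scoring rule are made-up assumptions for illustration; real products use far more signals, and it is precisely this kind of automated scoring that lawmakers are now scrutinizing for bias.

```python
# A deliberately simple, hypothetical resume scorer: it counts how many of a job
# posting's keywords appear in each application and ranks candidates accordingly.
# This only illustrates the mechanics; it is not how any specific vendor works.

JOB_KEYWORDS = {"python", "sql", "etl", "airflow", "data warehouse"}

def score(application_text: str) -> int:
    """Count how many of the job's keywords appear in the application text."""
    text = application_text.lower()
    return sum(1 for keyword in JOB_KEYWORDS if keyword in text)

def rank(applications: dict[str, str]) -> list[tuple[str, int]]:
    """Return (candidate, score) pairs sorted from highest to lowest score."""
    return sorted(((name, score(text)) for name, text in applications.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = {
        "Candidate A": "Built ETL pipelines in Python and Airflow for a data warehouse.",
        "Candidate B": "Managed a retail team, scheduling, and inventory.",
    }
    print(rank(candidates))  # -> [('Candidate A', 4), ('Candidate B', 0)]
```

Even a toy like this makes the legal questions easy to see: whatever proxies end up in the scoring step can quietly disadvantage protected groups, which is exactly what the new laws aim to police.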

 

Amazon’s Ring planned neighborhood “watch lists” built on facial recognition — from theintercept.com by Sam Biddle

Excerpts (emphasis DSC):

Ring, Amazon’s crime-fighting surveillance camera division, has crafted plans to use facial recognition software and its ever-expanding network of home security cameras to create AI-enabled neighborhood “watch lists,” according to internal documents reviewed by The Intercept.

Previous reporting by The Intercept and The Information revealed that Ring has at times struggled to make facial recognition work, instead relying on remote workers from Ring’s Ukraine office to manually “tag” people and objects found in customer video feeds.

Legal scholars have long criticized the use of governmental watch lists in the United States for their potential to ensnare innocent people without due process. “When corporations create them,” said Tajsar, “the dangers are even more stark.” As difficult as it can be to obtain answers on the how and why behind a federal blacklist, American tech firms can work with even greater opacity: “Corporations often operate in an environment free from even the most basic regulation, without any transparency, with little oversight into how their products are built and used, and with no regulated mechanism to correct errors,” Tajsar said.

 

From DSC:
Those working or teaching within the legal realm — this one’s for you. But it’s also for leadership in the corporate C-suite, as well as for the programmers, freelancers, engineers, and other employees working on AI within those companies.

By the way, and not to get all political here…but who’s to say what happens with our data when it’s being reviewed in Ukraine…?

 

Also see:

  • Opinion: AI for good is often bad — from wired.com by Mark Latonero
    Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.
 

Going to court without a lawyer is new normal for U.S. litigants — from msn.com by Gwen Everett

Excerpt:

A fair shot at justice is a bedrock value of the American legal system, yet litigants who represent themselves against attorneys are unlikely to win their cases or settle on beneficial terms, according to Bonnie Hough, an attorney at the Judicial Council of California, the rule-making arm of the state’s court system. This reinforces the reality that America is split into two camps — the haves and the have-no-lawyers.

 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has pulled back its funding of scientific and technical research. The money must therefore come from the private sector, and investors expect a return. That is a problem: you cannot schedule R&D breakthroughs when you are working on fundamental technology and research. It would be great if the big tech companies had the luxury of doing that hard work without having to stage an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they were working quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and keeping investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free-market economy looks like has not kept pace. We tend to fall back on very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film focuses on five distinct themes:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 


Top ten podcasts every teacher needs to hear — from wiley.com; with thanks to Emily Liebtag for posting this resource on Twitter

Excerpt:

Listening to podcasts is an easy way to dive into a topic that interests you and learn something new from others who share your passion for education.

We’re highlighting the following ten podcast episodes featuring Jossey-Bass authors that you can listen to whenever, wherever to help you master your craft or reignite your love of teaching.

So, take some time for yourself, grab your earbuds, and press play on these…

 

TECHREPORT 2019: Practice Management — from lawtechnologytoday.org by Alexander Paykin

Excerpt (emphasis DSC):

The American Bar Association’s Legal Technology Resource Center surveyed a sample of 53,252 attorneys about the technology and software available to and utilized in their firms. This TECHREPORT analyzes the attorneys’ responses on a variety of technological developments and changes occurring in the legal industry, as well as the lingering trepidation about adopting certain technologies.

The world continues to shift towards a more technological focus, while the legal industry has not followed suit in many respects. The use of practice management systems has not seen any real growth over the last four years despite high satisfaction ratings. There remains a need for an all-inclusive practice management system that would not require firms to purchase a variety of different programs for specific tasks, and the switching costs of practice management systems remain a concern for many firms—particularly solo and large firms.

Overall, technology continues to be developed for the legal industry in abundance; however, in many sectors of the industry, firms of various sizes are hesitant to adopt these advancements, leading to steady or declining growth rates for many of these technologies. The size of a firm also has a large influence on the technologies it may adopt, which makes it hard to predict which technologies will appeal to which firms.

Most legal technology is still fairly new, and it has quite a way to go before it is developed enough to displace the traditional ways of accomplishing tasks that many firms value now. There is still a desire for more and newer technologies that will make this switch easier, and until switching to these software programs becomes more efficient and cost-effective, the legal industry will continue to lag behind the evolving technological world around it.

 

Report: College leaders not confident they can beat new competition — from educationdive.com by Hallie Busta, with thanks to Ray Schroeder for posting this out on LinkedIn

Excerpt:

While college leaders say their institutions are prepared to meet students’ changing needs, they are less confident in their ability to address new forms of competition or to change how the public views higher ed.

The report comes as higher ed stares down potentially fewer traditional students, more competition online, less state funding, and concerns over public perceptions of higher ed. These pressures have made smaller public and private institutions vulnerable to consolidation and even closure.

 


 

FTI 2020 Trend Report for Entertainment, Media, & Technology — from futuretodayinstitute.com

Our 3rd annual industry report on emerging entertainment, media and technology trends is now available.

  • 157 trends
  • 28 optimistic, pragmatic and catastrophic scenarios
  • 10 non-technical primers and glossaries
  • Overview of what events to anticipate in 2020
  • Actionable insights to use within your organization

KEY TAKEAWAYS

  • Synthetic media offers new opportunities and challenges.
  • Authenticating content is becoming more difficult.
  • Regulation is coming.
  • We’ve entered the post-fixed screen era.
  • Voice Search Optimization (VSO) is the new Search Engine Optimization (SEO).
  • Digital subscription models aren’t working.
  • Advancements in AI will mean greater efficiencies.

 

 

Emerging Tech Trend: Patient-Generated Health Data — from futuretodayinstitute.com — Newsletter Issue 124

Excerpt:

Near-Futures Scenarios (2023 – 2028):

Pragmatic: Big tech continues to develop apps that are either indispensably convenient, irresistibly addictive, or both, and we pay for them not with cash but with the data we (sometimes unwittingly) let the apps capture. For health care and medical insurance apps, though, the stakes could literally be life-and-death. Consumers receive discounted premiums, co-pays, diagnostics, and prescription fulfillment, but the data they give up in exchange leaves them more vulnerable to manipulation and invasions of privacy.

Catastrophic: Profit-driven drug makers exploit private health profiles and begin working with the Big Nine. They use data-based targeting to overprescribe to patients, netting themselves billions of dollars. Big Pharma targets and preys on people’s addictions, mental health predispositions, and more, which, while undetectable at the individual level, takes a widespread societal toll.

Optimistic: Health data enables prescient preventative care. A.I. discerns patterns within gargantuan data sets that are otherwise virtually undetectable to humans. Accurate predictive algorithms identify complex combinations of risk factors for cancer or Parkinson’s, offer early screening and testing to high-risk patients, and encourage lifestyle shifts or treatments to eliminate or delay the onset of serious diseases. A.I. and health data create a utopia of public health. We happily relinquish our privacy for a greater societal good.

Watchlist: Amazon; Manulife Financial; GE Healthcare; Meditech; Allscripts; eClinicalWorks; Cerner; Validic; HumanAPI; Vivify; Apple; IBM; Microsoft; Qualcomm; Google; Medicare; Medicaid; national health systems; insurance companies.
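
From DSC:
To ground the "predictive algorithms" language in the Optimistic scenario above, here is a minimal sketch in Python (using scikit-learn on entirely synthetic data) of the basic pattern: fit a model on historical risk factors, then flag high-risk patients for earlier screening. Every feature, label, and threshold below is a made-up assumption; nothing here is clinically meaningful.

```python
# Toy risk model on synthetic data. The features, outcomes, and the 0.3 threshold
# are invented for illustration only; this is not a clinically valid model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "patients": the three columns might stand in for age, a lab value,
# and a lifestyle score.
X = rng.normal(size=(200, 3))
# Synthetic outcomes loosely tied to the first two features, plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

new_patients = rng.normal(size=(5, 3))
risk = model.predict_proba(new_patients)[:, 1]

# Flag anyone above an arbitrary threshold for (hypothetical) early screening.
for i, probability in enumerate(risk):
    if probability > 0.3:
        print(f"Patient {i}: predicted risk {probability:.2f} -> offer early screening")
```

The catastrophic scenario uses the same machinery pointed at a different objective, which is why who holds the data, and under what rules, matters so much.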

 

Drones from CVS and Walgreens are finally here—and they’re bringing Band-Aids — from fastcompany.com by Ruth Reader
With UPS and Google sister company Wing as partners, the big pharmacies are starting to deliver pills, Cheez-Its, and first-aid supplies by drone.

From DSC:
Add those drones to the following amassing armies:

 

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019, hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city?” The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in a city, a share expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system. Blockchain provides the technology to unite them into a single system that can track all aspects combined.
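
From DSC:
For those wondering what "a single system that can track all aspects" might look like mechanically, below is a minimal sketch in Python of a hash-chained ledger that several city services could append events to, so that later tampering with an earlier record becomes detectable. The service names and events are invented, and a production blockchain adds distribution, consensus, and permissions on top of this; the sketch only illustrates the tamper-evidence idea.

```python
# A toy hash-chained ledger: each record stores the hash of the previous one, so
# altering any earlier entry breaks the chain. Illustration only; this is not a
# distributed blockchain.
import hashlib
import json

def record(chain: list[dict], service: str, event: str) -> None:
    """Append an event from a city service, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"service": service, "event": event, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and link; False means some record was altered."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {key: value for key, value in block.items() if key != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

if __name__ == "__main__":
    ledger: list[dict] = []
    record(ledger, "transit", "bus 12 completed its route")
    record(ledger, "water", "meter 0042 read: 310 gallons")
    print(verify(ledger))           # -> True
    ledger[0]["event"] = "edited"   # tamper with an earlier record
    print(verify(ledger))           # -> False
```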

 

From DSC:
The article above includes many examples of smart city efforts and goals from around the globe. Also see the article below.

 

There are major issues with AI, and the article below shows how far behind the legal realm is in wrestling with emerging technologies.

What happens when employers can read your facial expressions? — from nytimes.com by Evan Selinger and Woodrow Hartzog
The benefits do not come close to outweighing the risks.

Excerpts:

The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.

“Notice and choice” has been an abysmal failure. Social media companies, airlines and retailers overhype the short-term benefits of facial recognition while using unreadable privacy policies and vague disclaimers that make it hard to understand how the technology endangers users’ privacy and freedom.

 

From DSC:
This article illustrates how far behind the legal realm in the United States is when it comes to wrestling with emerging technologies. This relatively new *exponential* pace of change is very difficult for many of our institutions to handle (higher education and the legal realm come to mind here).

 

 