The Future of Lawyers: Legal Tech, AI, Big Data And Online Courts — from forbes.com by Bernard Marr

Excerpts:

In his brand new book Online Courts and the Future of Justice, Richard Susskind argues that technology is going to bring about a fascinating decade of change in the legal sector and transform our court system. Although automating our old ways of working plays a part in this, even more critical is that artificial intelligence and technology will help give more individuals access to justice.

The first generation is the idea that people who use the court system submit evidence and arguments to the judge online or through some form of electronic communication.

The second generation of using technology to transform the legal system would be what Richard calls “outcome thinking”: using technology to help resolve disputes without requiring lawyers or the traditional court system.

Some of the biggest obstacles to an online court system are the political will to bring about such a transformation, the support of judges and lawyers, funding, and the method we’d apply. For example, decisions will need to be made about whether the online system would be used only for certain kinds of cases or situations.

Ultimately, we have a grave access-to-justice problem. Technology can help improve our outcomes and give people a way to resolve public disputes in ways that previously weren’t possible. While this transformation might not solve all the struggles with the legal system or the access-to-justice issue, it can offer a dramatic improvement.

 

FuturePod gathers voices from the international field of Futures and Foresight. Through a series of podcast interviews, the founders of the field and emerging leaders share their stories, tools and experiences. The Futures and Foresight community comprises a remarkable and diverse group of individuals who span academic, commercial and social interests.

 

The Research is in: 2019 Education Research Highlights — edutopia.org by Youki Terada
Does doodling boost learning? Do attendance awards work? Do boys and girls process math the same way? Here’s a look at the big questions that researchers tackled this year.

Excerpt:

Every year brings new insights—and cautionary tales—about what works in education. 2019 is no different, as we learned that doodling may do more harm than good when it comes to remembering information. Attendance awards don’t work and can actually increase absences. And while we’ve known that school discipline tends to disproportionately harm students of color, a new study reveals a key reason why: Compared with their peers, black students tend to receive fewer warnings for misbehavior before being punished.

CUT THE ARTS AT YOUR OWN RISK, RESEARCHERS WARN
As arts programs continue to face the budget ax, a handful of new studies suggest that’s a grave mistake. The arts provide cognitive, academic, behavioral, and social benefits that go far beyond simply learning how to play music or perform scenes in a play.

In a major new study from Rice University involving 10,000 students in third through eighth grades, researchers determined that expanding a school’s arts programs improved writing scores, increased the students’ compassion for others, and reduced disciplinary infractions. The benefits of such programs may be especially pronounced for students who come from low-income families, according to a 10-year study of 30,000 students released in 2019.

Unexpectedly, another recent study found that artistic commitment—think of a budding violinist or passionate young thespian—can boost executive function skills like focus and working memory, linking the arts to a set of overlooked skills that are highly correlated with success in both academics and life.

Failing to identify and support students with learning disabilities early can have dire, long-term consequences. In a comprehensive 2019 analysis, researchers highlighted the need to provide interventions that align with critical phases of early brain development. In one startling example, reading interventions for children with learning disabilities were found to be twice as effective if delivered by the second grade instead of third grade.

 

Greta Thunberg is the youngest TIME Person of the Year ever. Here’s how she made history — from time.com

Excerpt:

The politics of climate action are as entrenched and complex as the phenomenon itself, and Thunberg has no magic solution. But she has succeeded in creating a global attitudinal shift, transforming millions of vague, middle-of-the-night anxieties into a worldwide movement calling for urgent change. She has offered a moral clarion call to those who are willing to act, and hurled shame on those who are not. She has persuaded leaders, from mayors to Presidents, to make commitments where they had previously fumbled: after she spoke to Parliament and demonstrated with the British environmental group Extinction Rebellion, the U.K. passed a law requiring that the country eliminate its carbon footprint. She has focused the world’s attention on environmental injustices that young indigenous activists have been protesting for years. Because of her, hundreds of thousands of teenage “Gretas,” from Lebanon to Liberia, have skipped school to lead their peers in climate strikes around the world.

 

Young people! You CAN and will make a big impact/difference!

 

Accessibility at a Crossroads: Balancing Legal Requirements, Frivolous Lawsuits, and Legitimate Needs — from er.educause.edu by Martin LaGrow

Excerpt:

Changes in legal requirements for IT accessibility have prompted some to pursue self-serving legal actions. To increase access for users of all abilities, colleges and universities should articulate their commitment to accessibility and focus on changing institutional culture.

 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has stripped away funding for scientific and technical research. The money must therefore come from the private sector, but investors expect a certain return, and that is a problem. You cannot schedule R&D breakthroughs when you are working on fundamental technology and research. It would be great if the big tech companies had the luxury of working hard without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they worked quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 


 

FTI 2020 Trend Report for Entertainment, Media, & Technology — from futuretodayinstitute.com

Our 3rd annual industry report on emerging entertainment, media and technology trends is now available.

  • 157 trends
  • 28 optimistic, pragmatic and catastrophic scenarios
  • 10 non-technical primers and glossaries
  • Overview of what events to anticipate in 2020
  • Actionable insights to use within your organization

KEY TAKEAWAYS

  • Synthetic media offers new opportunities and challenges.
  • Authenticating content is becoming more difficult.
  • Regulation is coming.
  • We’ve entered the post-fixed screen era.
  • Voice Search Optimization (VSO) is the new Search Engine Optimization (SEO).
  • Digital subscription models aren’t working.
  • Advancements in AI will mean greater efficiencies.

 

 

Deepfakes: When a picture is worth nothing at all — from law.com by Katherine Forrest

Excerpt:

“Deepfakes” is the name for highly realistic, falsified imagery and sound recordings; they are digitized and personalized impersonations. Deepfakes are made by using AI-based facial and audio recognition and reconstruction technology; AI algorithms are used to predict facial movements as well as vocal sounds. In her Artificial Intelligence column, Katherine B. Forrest explores the legal issues likely to arise as deepfakes become more prevalent.

 

Drones from CVS and Walgreens are finally here—and they’re bringing Band-Aids — from fastcompany.com by Ruth Reader
With UPS and Google sister company Wing as partners, the big pharmacies are starting to deliver pills, Cheez-Its, and first-aid supplies by drone.

From DSC:
Add those drones to the amassing armies of delivery devices mentioned further below.

 

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019 hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city”? The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in a city, and that share is expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than just simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be brought into a single system. Blockchain provides the technology to unite them into one system that can track all of these aspects together.
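To make that integration point a bit more concrete, below is a minimal, illustrative Python sketch (not any specific blockchain platform) of an append-only, hash-linked ledger that several city services could write events to. The class name, the service names, and the event payloads are invented for this example; a real deployment would add digital signatures, consensus among nodes, and access controls.

```python
import hashlib
import json
import time

class CityLedger:
    """A toy append-only, hash-linked ledger. Each block records one event
    from one city service (transit, energy, water, ...). Illustrative only:
    a real deployment would add signatures, consensus, and distribution."""

    def __init__(self):
        self.chain = [self._block(0, "0" * 64, {"event": "genesis"})]

    def _block(self, index, prev_hash, payload):
        block = {
            "index": index,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
            "payload": payload,
        }
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        return block

    def record(self, service, event):
        """Append an event from a named city service to the shared chain."""
        prev = self.chain[-1]
        self.chain.append(
            self._block(prev["index"] + 1, prev["hash"],
                        {"service": service, "event": event})
        )

    def verify(self):
        """Check that every block still points at the hash of its predecessor."""
        return all(curr["prev_hash"] == prev["hash"]
                   for prev, curr in zip(self.chain, self.chain[1:]))

ledger = CityLedger()
ledger.record("transit", {"bus": 12, "route": "A", "riders": 48})
ledger.record("energy", {"district": 3, "solar_kwh": 212.5})
print(ledger.verify())  # True while the recorded history is untampered
```

The useful property for a smart city is the last step: tampering with any recorded event breaks the hash links, so the combined record across transit, energy, water, and other services can be verified as a whole.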

 

From DSC:
There are many examples of the efforts/goals of creating smart cities (throughout the globe) in the above article. Also see the article below.

 

There are major issues with AI. This article shows how far behind the legal realm is in wrestling with emerging technologies.

What happens when employers can read your facial expressions? — from nytimes.com by Evan Selinger and Woodrow Hartzog
The benefits do not come close to outweighing the risks.

Excerpts:

The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.

 “Notice and choice” has been an abysmal failure. Social media companies, airlines and retailers overhype the short-term benefits of facial recognition while using unreadable privacy policies and vague disclaimers that make it hard to understand how the technology endangers users’ privacy and freedom.
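For readers wondering what a “faceprint” is in practice, here is a rough, illustrative Python sketch of the general pattern: a face image is reduced to a fixed-length numeric vector (an embedding), and a probe image is matched against a database of stored vectors by similarity. The embedding function below is a made-up stand-in that just returns random numbers, and the names and threshold are assumptions; real systems use trained neural networks, but the matching logic looks broadly like this.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(face_image):
    """Hypothetical stand-in for a real face-embedding model. Real systems
    map a face image to a fixed-length vector; this just fabricates one."""
    return rng.normal(size=128)

# An illustrative "faceprint" database keyed by identity.
database = {name: embed(None) for name in ["person_a", "person_b", "person_c"]}

def identify(probe_vector, database, threshold=0.6):
    """Return the closest identity by cosine similarity, or None if nothing
    clears the (assumed) threshold."""
    best_name, best_score = None, -1.0
    for name, vec in database.items():
        score = float(np.dot(probe_vector, vec) /
                      (np.linalg.norm(probe_vector) * np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(identify(database["person_b"], database))  # prints "person_b"
```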

 

From DSC:
This article illustrates how far behind the legal realm in the United States is when it comes to wrestling with emerging technologies. This relatively new *exponential* pace of change is very difficult for many of our institutions to deal with (higher education and the legal realm come to mind here).

 

 

YouTube’s algorithm hacked a human vulnerability, setting a dangerous precedent — from which-50.com by Andrew Birmingham

Excerpt (emphasis DSC):

Even as YouTube’s recommendation algorithm was rolled out with great fanfare, the fuse was already burning. A project of Google Brain designed to optimise engagement, it did something unforeseen — and potentially dangerous.

Today, we are all living with the consequences.

As Zeynep Tufekci, an associate professor at the University of North Carolina, explained to attendees of Hitachi Vantara’s Next 2019 conference in Las Vegas this week, “What the developers did not understand at the time is that YouTube’s algorithm had discovered a human vulnerability. And it was using this [vulnerability] at scale to increase YouTube’s engagement time — without a single engineer thinking, ‘is this what we should be doing?’”

 

The consequence of the vulnerability — a natural human tendency to engage with edgier ideas — was that YouTube’s users were exposed to increasingly extreme content, irrespective of their preferred areas of interest.

“What they had done was use machine learning to increase watch time. But what the machine learning system had done was to discover a human vulnerability. And that human vulnerability is that things that are slightly edgier are more attractive and more interesting.”
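As a rough illustration of the dynamic Tufekci describes (and emphatically not YouTube’s actual system), here is a toy Python simulation in which a recommender ranks videos purely by predicted watch time, and predicted watch time happens to rise with how “edgy” a video is. The functional form and every constant below are assumptions invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Candidate videos, each with an "edginess" score in [0, 1] (invented data).
edginess = rng.uniform(0.0, 1.0, size=500)

def predicted_watch_time(edge, baseline=2.0, pull=3.0):
    """Toy engagement model: expected minutes watched grows with edginess.
    The functional form and constants are assumptions for illustration."""
    return baseline + pull * edge + rng.normal(0.0, 0.2, size=np.shape(edge))

def recommend(candidates, k=10):
    """Rank purely by predicted watch time and return the top-k candidates,
    mirroring the single-objective optimisation the excerpt describes."""
    scores = predicted_watch_time(candidates)
    top = np.argsort(scores)[::-1][:k]
    return candidates[top]

picks = recommend(edginess)
print(f"mean edginess of all candidates:  {edginess.mean():.2f}")
print(f"mean edginess of recommended set: {picks.mean():.2f}")
```

Nothing in the objective mentions content at all, yet the recommended slate skews far edgier than the candidate pool, which is the unintended effect the excerpt describes.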

 

From DSC:
Just because we can…

 

 

Three threats posed by deepfakes that technology won’t solve — from technologyreview.com by Angela Chen
As deepfakes get better, companies are rushing to develop technology to detect them. But little of their potential harm will be fixed without social and legal solutions.

Excerpt:

3) Problem: Deepfake detection is too late to help victims
With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

 

The blinding of justice: Technology, journalism and the law — from thehill.com by Kristian Hammond and Daniel Rodriguez

Excerpts:

The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies and it is a perilous place for the profession to be.

If the needs of the law guide the ways in which the new technologies are put into use, they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies but a legal system that is significantly less just.

We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write and they are entering the world of legal reasoning and decision making.

Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.

We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.
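A small, hedged illustration of that bias point: the Python sketch below invents a synthetic history in which two groups have identical underlying risk, but past decisions detained one group twice as often. A tool that simply learns the historical base rates (roughly where a purely data-driven model lands on data like this) reproduces the disparity as its “advice.” All of the numbers are fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "historical decisions"; every number here is invented.
# Two groups have the same underlying risk, but the historical record
# detained Group B far more often than Group A.
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 = Group A, 1 = Group B
historical_detained = (
    rng.uniform(size=n) < np.where(group == 1, 0.6, 0.3)
).astype(int)

# A naive "predictive" tool fit to this history converges on the per-group
# base rates, so its recommendations simply reproduce the old disparity.
rate_a = historical_detained[group == 0].mean()
rate_b = historical_detained[group == 1].mean()
recommended = np.where(group == 1, rate_b, rate_a)  # advised detention probability

print(f"historical detention rate, Group A: {rate_a:.2f}")
print(f"historical detention rate, Group B: {rate_b:.2f}")
print(f"mean recommended detention probability, Group B vs Group A: "
      f"{recommended[group == 1].mean():.2f} vs {recommended[group == 0].mean():.2f}")
```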

 

From DSC:
This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.

 

UPS just beat out Amazon and Google to become America’s first nationwide drone airline — from businessinsider.com by Rachel Premack

Key points:

  • The US Department of Transportation said Tuesday it granted its first full Part 135 certification for a drone airline to UPS.
  • UPS currently conducts drone deliveries at a large hospital in Raleigh, North Carolina.
  • It will now be able to operate drones anywhere in the country — an industry first.
  • Another drone operator — Wing, owned by Google’s parent company Alphabet — also has Part 135 certification. But the scope of its operation is limited to Christiansburg, Virginia, about 210 miles southwest of the state capital, Richmond.

From DSC:
Add to that the growing numbers of delivery bots, drones, pods, and more.

 

From DSC:
I wonder…will we be able to take a quiet walk in the future? That may not be the case if the building of these armies of drones continues — and becomes a full-fledged trend.

 