The blinding of justice: Technology, journalism and the law — from thehill.com by Kristian Hammond and Daniel Rodriguez

Excerpts:

The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies, and it is a perilous place for the profession to be.

If the needs of the law guide the ways in which the new technologies are put into use, they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies, but a legal system that is significantly less just.

We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write, and they are entering the world of legal reasoning and decision making.

Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.

We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.

 

From DSC:
This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.

 

Google’s war on deepfakes: As election looms, it shares ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.
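Datasets like Google's are used to train and evaluate real-vs-fake classifiers: split the labeled videos, fit a model on one part, and measure detection accuracy on the held-out part. The sketch below shows that workflow in miniature. It is purely illustrative, not Google's actual pipeline: the synthetic 2-D "features" stand in for the frame-level features a real detector would extract with a neural network, and the nearest-centroid classifier is just the simplest possible stand-in model.

```python
# Toy sketch of how a labeled deepfake corpus is typically used:
# split into train/test sets, fit a classifier on per-video features,
# then measure detection accuracy on held-out videos.
# Feature extraction is assumed to have happened already; synthetic
# 2-D vectors stand in for real frame-level features.

import random

random.seed(0)

def make_corpus(n):
    """Synthetic stand-in: 'real' features cluster near (0,0), 'fake' near (1,1)."""
    data = []
    for _ in range(n):
        data.append(((random.gauss(0, 0.2), random.gauss(0, 0.2)), 0))  # real
        data.append(((random.gauss(1, 0.2), random.gauss(1, 0.2)), 1))  # fake
    random.shuffle(data)
    return data

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(corpus):
    """Nearest-centroid classifier: one centroid per class."""
    real = [f for f, y in corpus if y == 0]
    fake = [f for f, y in corpus if y == 1]
    return centroid(real), centroid(fake)

def predict(model, feat):
    (rx, ry), (fx, fy) = model
    d_real = (feat[0] - rx) ** 2 + (feat[1] - ry) ** 2
    d_fake = (feat[0] - fx) ** 2 + (feat[1] - fy) ** 2
    return 0 if d_real < d_fake else 1

train_set, test_set = make_corpus(200), make_corpus(50)
model = train(train_set)
accuracy = sum(predict(model, f) == y for f, y in test_set) / len(test_set)
print(f"held-out detection accuracy: {accuracy:.2f}")
```

The point of releasing thousands of labeled fakes is exactly this held-out evaluation step: detectors can only be trusted to the extent they generalize to fakes they were not trained on.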

 

Walgreens to test drone delivery service with Alphabet’s Wing — from cnbc.com by Jasmine Wu

Key Points:

  • Walgreens is working with Alphabet’s drone delivery service Wing to test a new service.
  • The pilot program will deliver food and beverage, over-the-counter medications and other items, but not prescriptions.
  • Amazon said in June its new delivery drone should be ready “within months” to deliver packages to customers.

 

Add that to these other robots, drones, driverless pods, etc.:

 

From DSC:
Is a wild, wild west developing? It appears so. What does the average citizen do in these cases if they don’t want such drones constantly flying over their heads, neighborhoods, schools, etc.?

I wonder what the average age is of people working on these projects…?

Just because we can…

 

Someone is always listening — from Future Today Institute

Excerpt:

Very Near-Futures Scenarios (2020 – 2022):

  • Optimistic: Big tech and consumer device industries agree to a single set of standards to inform people when they are being listened to. Devices now emit an audible ping and/or a visible light anytime they are actively recording sound. While they need to store data in order to improve natural language understanding and other important AI systems, consumers now have access to a portal and can see, listen to, and erase their data at any time. In addition, consumers can choose to opt out of storing their data to help improve AI systems.
  • Pragmatic: Big tech and consumer device industries preserve the status quo, which leads to more cases of machine eavesdropping and erodes public trust. Federal agencies open investigations into eavesdropping practices, which leads to a drop in share prices and a concern that more advanced biometric technologies could face debilitating regulation.
  • Catastrophic: Big tech and consumer device industries collect and store our conversations surreptitiously while developing new ways to monetize that data. They anonymize and sell it to developers wanting to create their own voice apps or to research institutions wanting to do studies using real-world conversation. Some platforms develop lucrative fee structures allowing others access to our voice data: business intelligence firms, market research agencies, polling agencies, political parties and individual law enforcement organizations. Consumers have little to no ability to see and understand how their voice data are being used and by whom. Opting out of collection systems is intentionally opaque. Trust erodes. Civil unrest grows.


 

Watchlist:

  • Google; Apple; Amazon; Microsoft; Salesforce; BioCatch; CrossMatch; ThreatMetrix; Electronic Frontier Foundation; World Privacy Forum; American Civil Liberties Union; IBM; Baidu; Tencent; Alibaba; Facebook; European Union; government agencies worldwide.

 

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions — from er.educause.edu by Elana Zeide
What are the benefits and challenges of using artificial intelligence to promote student success, improve retention, streamline enrollment, and better manage resources in higher education?

Excerpt:

The promise of AI applications lies partly in their efficiency and partly in their efficacy. AI systems can capture a much wider array of data, at more granularity, than can humans. And these systems can do so in real time. They can also analyze many, many students—whether those students are in a classroom or in a student body or in a pool of applicants. In addition, AI systems offer excellent observations and inferences very quickly and at minimal cost. These efficiencies will lead, we hope, to increased efficacy—to more effective teaching, learning, institutional decisions, and guidance. So this is one promise of AI: that it will show us things we can’t assess or even envision given the limitations of human cognition and the difficulty of dealing with many different variables and a wide array of students.

A second peril in the use of artificial intelligence in higher education consists of the various legal considerations, mostly involving different bodies of privacy and data-protection law. Federal student-privacy legislation is focused on ensuring that institutions (1) get consent to disclose personally identifiable information and (2) give students the ability to access their information and challenge what they think is incorrect. The first is not much of an issue if institutions are not sharing the information with outside parties or if they are sharing through the Family Educational Rights and Privacy Act (FERPA), which means an institution does not have to get explicit consent from students. The second requirement—providing students with access to the information that is being used about them—is going to be an increasingly interesting issue. I believe that as the decisions being made by artificial intelligence become much more significant and as students become more aware of what is happening, colleges and universities will be pressured to show students this information. People are starting to want to know how algorithmic and AI decisions are impacting their lives.

My short advice about legal considerations? Talk to your lawyers. The circumstances vary considerably from institution to institution.

 


 


Technology as Part of the Culture for Legal Professionals A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be up to speed as much as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now, there aren’t many options for people to pursue. If the lawyers, and the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 

An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft — from washingtonpost.com by Drew Harwell

Excerpt:

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

 

From DSC:
Needless to say, this is very scary stuff! Now what…? Who in our society should get involved to thwart this kind of thing?

  • Programmers?
  • Digital audio specialists?
  • Legislators?
  • Lawyers?
  • The FBI?
  • Police?
  • Other?


Addendum on 9/12/19:

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?
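To make that concern concrete: at bottom, any such score is just a weighted sum of observed behaviors, and whoever chooses the behaviors, the weights, and the eligibility threshold decides what counts for or against a person. The sketch below is purely hypothetical — every behavior name, weight, and threshold here is invented for illustration, not drawn from any real system.

```python
# Hypothetical illustration of a social-credit-style score: an opaque
# weighted sum of observed behaviors. Whoever sets WEIGHTS and
# THRESHOLD effectively decides who gets access to services.

WEIGHTS = {
    "paid_bill_on_time":   +5,
    "volunteered":         +10,
    "late_payment":        -15,
    "flagged_social_post": -20,  # and who decides what gets flagged?
}

THRESHOLD = 0  # below this, access to some service might be denied

def score(behaviors):
    """Aggregate a list of observed behaviors into one number."""
    return sum(WEIGHTS.get(b, 0) for b in behaviors)

def eligible(behaviors):
    return score(behaviors) >= THRESHOLD

print(score(["paid_bill_on_time", "volunteered"]))        # a "good" citizen
print(eligible(["late_payment", "flagged_social_post"]))  # a "discredited" one
```

Ten lines of code are enough to implement the mechanism; all of the power — and all of the danger — lives in the values someone quietly puts into that weight table.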

We are giving a huge amount of power to a handful of corporations…so trust comes into play, at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

 

Autonomous robot deliveries are coming to 100 university campuses in the U.S. — from digitaltrends.com by Luke Dormehl

Excerpt:

Pioneering autonomous delivery robot company Starship Technologies is coming to a whole lot more university campuses around the U.S. The robotics startup announced that it will expand its delivery services to 100 university campuses in the next 24 months, building on its successful fleets at George Mason University and Northern Arizona University.

 

Postmates Gets Go-Ahead to Test Delivery Robot in San Francisco — from interestingengineering.com by Donna Fuscaldo
Postmates was granted permission to test a delivery robot in San Francisco.

 

And add those to the ones formerly posted on Learning Ecosystems:

 

From DSC:
I’m grateful for John Muir and for the presidents of the United States who had the vision to set aside land for the national park system. Such parks are precious and provide much needed respite from the hectic pace of everyday life.

Closer to home, I’m grateful for what my parents’ vision was for a place to help bring the families together through the years. A place that’s peaceful, quiet, surrounded by nature and community.

So I wonder: what kind of legacy are the current generations beginning to create? That is…do we really want to be known as the generations who created the unchecked, chaotic armies of delivery drones, delivery robots, driverless pods, etc. that fill the skies, streets, sidewalks, and more?

I don’t. That’s not a gift to our kids or grandkids…not at all.

 

 

AI is in danger of becoming too male — new research — from singularityhub.com by Juan Mateos-Garcia and Joysy John

Excerpts (emphasis DSC):

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

One way to minimize AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20 percent of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.

 


From DSC:
My niece just left a very lucrative programming job and managerial role at Microsoft after working there for several years. As a single woman, she got tired of fighting the culture there. 

It was again a reminder to me that there are significant ramifications to the cultures of the big tech companies…especially given the power of these emerging technologies and the growing influence they are having on our culture.


Addendum on 8/20/19:

  • Google’s Hate Speech Detection A.I. Has a Racial Bias Problem — from fortune.com by Jonathan Vanian
    Excerpt:
    A Google-created tool that uses artificial intelligence to police hate speech in online comments on sites like the New York Times has become racially biased, according to a new study. The tool, developed by Google and a subsidiary of its parent company, often classified comments written in the African-American vernacular as toxic, researchers from the University of Washington, Carnegie Mellon, and the Allen Institute for Artificial Intelligence said in a paper presented in early August at the Association for Computational Linguistics conference in Florence, Italy.
  • On the positive side of things:
    Number of Female Students, Students of Color Tackling Computer Science AP on the Rise — from thejournal.com
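The kind of racial-bias finding described in the first item above typically comes from a disparity audit: take comments labeled by humans as toxic or non-toxic, run the classifier over them, and compare the false-positive rate (non-toxic comments wrongly flagged) across dialect groups. The toy version below uses invented data and model outputs — it is not the researchers' actual dataset or method, only the shape of the comparison.

```python
# Toy disparity audit: compare a toxicity classifier's false-positive
# rate across dialect groups. Rows are (dialect_group, truly_toxic,
# model_flagged) — all values here are invented for illustration.

comments = [
    ("AAE", False, True),   # non-toxic, but wrongly flagged
    ("AAE", False, True),   # non-toxic, but wrongly flagged
    ("AAE", False, False),
    ("SAE", False, False),
    ("SAE", False, True),   # non-toxic, but wrongly flagged
    ("SAE", False, False),
]

def false_positive_rate(rows, group):
    """Fraction of this group's non-toxic comments the model flagged."""
    nontoxic = [flagged for g, toxic, flagged in rows if g == group and not toxic]
    return sum(nontoxic) / len(nontoxic)

for g in ("AAE", "SAE"):
    print(g, round(false_positive_rate(comments, g), 2))
```

When the flagged-while-innocent rate is markedly higher for one group's dialect than another's, the classifier is imposing unequal costs on those speakers — which is exactly the disparity the study reported.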
 

A handful of US cities have banned government use of facial recognition technology due to concerns over its accuracy and privacy. WIRED’s Tom Simonite talks with computer vision scientist and lawyer Gretchen Greene about the controversy surrounding the use of this technology.

 

 

Report: Smart-city IoT isn’t smart enough yet — from networkworld.com by Jon Gold
A report from Forrester Research details vulnerabilities affecting smart-city internet of things (IoT) infrastructure and offers some methods of mitigation.

 

Governments take first, tentative steps at regulating AI — from heraldnet.com by James McCusker
Can we control artificial intelligence’s potential for disrupting markets? Time will tell.

Excerpt:

State legislatures in New York and New Jersey have proposed legislation that represents the first, tentative steps at regulation. While the two proposed laws are different, they both have elements of information gathering about the risks to such things as privacy, security and economic fairness.

 

 
© 2025 | Daniel Christian