Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has stripped funding from scientific and technical research. The money must therefore come from the private sector. But investors also expect a certain return. That is a problem. You cannot schedule R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working on very hard problems without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they were working quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not advanced at the same pace. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film delivers five distinct messages, about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 

Everyday Media Literacy — from routledge.com by Sue Ellen Christian
An Analog Guide for Your Digital Life, 1st Edition

Description:

In this graphic guide to media literacy, award-winning educator Sue Ellen Christian offers students an accessible, informed and lively look at how they can consume and create media intentionally and critically.

The straight-talking textbook offers timely examples and relevant activities to equip students with the skills and knowledge they need to assess all media, including news and information. Through discussion prompts, writing exercises, key terms, online links and even origami, readers are provided with a framework from which to critically consume and create media in their everyday lives. Chapters examine news literacy, online activism, digital inequality, privacy, social media and identity, global media corporations and beyond, giving readers a nuanced understanding of the key concepts and concerns at the core of media literacy.

Concise, creative and curated, this book highlights the cultural, political and economic dynamics of media in our contemporary society, and how consumers can mindfully navigate their daily media use. Everyday Media Literacy is perfect for students (and educators) of media literacy, journalism, education and media effects looking to build their understanding in an engaging way.

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019, hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city?” The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in cities, a share expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than just simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system. Blockchain provides the technology to unite them into one system that can track all of these aspects together.
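To make that “single system” idea concrete, here is a minimal sketch of a hash-chained ledger to which different city services append events. The service names and event payloads are assumptions purely for illustration, not details from the article; a production system would add signatures, consensus, and access control.

```python
# Minimal sketch of a hash-chained ledger shared by city services.
# Each block commits to the previous block's hash, so past entries
# can't be altered without breaking the chain. Service names and
# event payloads below are hypothetical.
import hashlib
import json
import time

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "service": "genesis", "event": {}, "ts": time.time()}]

    def _hash(self, block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, service: str, event: dict) -> dict:
        block = {"index": len(self.chain),
                 "prev_hash": self._hash(self.chain[-1]),
                 "service": service, "event": event, "ts": time.time()}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Every block must reference the hash of its predecessor.
        return all(b["prev_hash"] == self._hash(self.chain[i])
                   for i, b in enumerate(self.chain[1:]))

city = Ledger()
city.append("transit", {"bus": 42, "fare_paid": True})
city.append("water", {"meter": "A-17", "liters": 130})
print("chain intact:", city.verify())
```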

 

From DSC:
The above article includes many examples of efforts to create smart cities throughout the globe. Also see the article below.

 

There are major issues with AI. This article shows how far behind the legal realm is in wrestling with emerging technologies.

What happens when employers can read your facial expressions? — from nytimes.com by Evan Selinger and Woodrow Hartzog
The benefits do not come close to outweighing the risks.

Excerpts:

The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.

“Notice and choice” has been an abysmal failure. Social media companies, airlines and retailers overhype the short-term benefits of facial recognition while using unreadable privacy policies and vague disclaimers that make it hard to understand how the technology endangers users’ privacy and freedom.

 

From DSC:
This article illustrates how far behind the legal realm in the United States is when it comes to wrestling with emerging technologies. This relatively new *exponential* pace of change is very difficult for many of our institutions to deal with (higher education and the legal realm come to mind here).

 

 

Three threats posed by deepfakes that technology won’t solve — from technologyreview.com by Angela Chen
As deepfakes get better, companies are rushing to develop technology to detect them. But little of their potential harm will be fixed without social and legal solutions.

Excerpt:

3) Problem: Deepfake detection is too late to help victims
With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

 

The blinding of justice: Technology, journalism and the law — from thehill.com by Kristian Hammond and Daniel Rodriguez

Excerpts:

The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies and it is a perilous place for the profession to be.

If the needs of the law guide the ways in which the new technologies are put into use, they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies but a legal system that is significantly less just.

We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write and they are entering the world of legal reasoning and decision making.

Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.

We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.

 

From DSC:
This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.

 

Google’s war on deepfakes: As election looms, it shares a ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.
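For context on how a database like Google’s gets used: detection systems of this kind are typically trained as binary classifiers on frames drawn from paired real and manipulated clips. Below is a minimal, hypothetical sketch in Python/PyTorch; the directory layout, model choice, and hyperparameters are assumptions for illustration, not details from the article.

```python
# Minimal sketch: fine-tune an image classifier to label video frames
# as "real" or "fake". Assumes frames have already been extracted from
# a deepfake dataset into frames/real and frames/fake (hypothetical layout).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder infers the two labels from the subdirectory names.
train_data = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")
```

As the article’s sources note, detection of this kind is an arms race: classifiers trained on today’s fakes tend to degrade as generation methods improve, which is why the datasets keep growing.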

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 


 


Technology as Part of the Culture for Legal Professionals A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be up to speed as much as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse-check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now there aren’t many options for people to pursue. If the lawyers, the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

From DSC:
A couple of somewhat scary excerpts from Meet Hemingway: The Artificial Intelligence Robot That Can Copy Your Handwriting (from forbes.com by Bernard Marr):

The Handwriting Company now has a robot that can create beautifully handwritten communication that mimics the style of an individual’s handwriting while a robot from Brown University can replicate handwriting from a variety of languages even though it was just trained on Japanese characters.

Hemingway is The Handwriting Company’s robot that can mimic anyone’s style of handwriting. All that Hemingway’s algorithm needs to mimic an individual’s handwriting is a sample of handwriting from that person.

 

From DSC:
So now there are folks out there who can generate realistic “fakes” using videos, handwriting, audio and more. Super. Without technologies to detect such fakes, things could get ugly…especially as we approach a presidential election next year. I’m trying not to be negative, but it’s hard when the existence of fakes is a serious topic and problem these days.

 

Addendum on 7/5/19:
AI poised to ruin Internet using “massive tsunami” of fake news — from futurism.com

“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning… we feel it is an incredibly important topic with far too little discussion currently,” Tynski told The Verge.

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted. (Inserted comment from DSC: see the brief sketch after this list for what “factors and weights” can look like in practice.)
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
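To make items 1 and 2 concrete, here is a minimal sketch of the kind of transparency and explanation being demanded: a scoring model whose factors, weights, and per-factor contributions can be reported in plain language to the person affected. The factor names and weights are hypothetical, chosen only for illustration.

```python
# Hypothetical transparent scoring model: declared factors, declared
# weights, and a per-factor explanation of how a score was reached.
WEIGHTS = {
    "payment_history": 0.5,
    "income_stability": 0.3,
    "account_age_years": 0.2,
}

def score(applicant: dict) -> float:
    """Weighted sum over the declared factors (each normalized to 0..1)."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> str:
    """Plain-language, per-factor breakdown of the decision."""
    lines = [
        f"- {f}: value {applicant[f]:.2f} x weight {w:.2f} "
        f"= {applicant[f] * w:+.2f}"
        for f, w in WEIGHTS.items()
    ]
    return "\n".join(lines + [f"total score: {score(applicant):.2f}"])

print(explain({"payment_history": 0.9,
               "income_stability": 0.6,
               "account_age_years": 0.4}))
```

Of course, most deployed systems are far less inspectable than this toy example, which is precisely the gap the proposed bill of rights is trying to close.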

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 
 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics into their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

135 Million Reasons To Believe In A Blockchain Miracle — from forbes.com by Mike Maddock

Excerpts:

Which brings us to the latest headlines about a cryptocurrency entrepreneur’s passing—taking with him the passcode to unlock C$180 million (about $135 million U.S.) in investor currency—which is now reportedly gone forever. Why? Because apparently, the promise of blockchain is true: It cannot be hacked. It is absolutely trustworthy.

Gerald Cotten, the CEO of a crypto company, reportedly passed away recently while building an orphanage in India. Unfortunately, he was the only person who knew the passcode to access the millions his investors had entrusted to him.

This is how we get the transition to Web 3.0.

Some questions to consider:

  • Who will build an easy-to-use “wallet” of the future?
  • Are we responsible enough to handle that much power?

Perhaps the most important question of all is: What role do our “trusted” experts play in this future?

 


From DSC:
I’d like to add another question to Mike’s article:

  • How should law schools, law firms, legislative bodies, government, etc. deal with the new, exponential pace of change and with the power of emerging technologies?

 


 

 