Gartner: 10 ways technology will change what it means to be human — from campustechnology.com by Rhea Kelly

Excerpts:

Gartner’s top 10 strategic predictions for technology are:

  1. “By 2023, the number of people with disabilities employed will triple due to AI and emerging technologies, reducing barriers to access.”
  2. “By 2024, AI identification of emotions will influence more than half of the online advertisements you see.”
  3. “Through 2023, 30 percent of IT organizations will extend BYOD policies with ‘bring your own enhancement’ (BYOE) to address augmented humans in the workforce.”
  4. “By 2025, 50 percent of people with a smartphone but without a bank account will use a mobile-accessible cryptocurrency account.”
  5. “By 2023, a self-regulating association for oversight of AI and machine learning designers will be established in at least four of the G7 countries.”
  6. “By 2023, 40 percent of professional workers will orchestrate their business application experiences and capabilities like they do their music streaming experience.”
  7. “By 2023, up to 30 percent of world news and video content will be authenticated as real by blockchain countering deep fake technology.”
  8. “Through 2021, digital transformation initiatives will take large traditional enterprises on average twice as long and cost twice as much as anticipated.”
  9. “By 2023, individual activities will be tracked digitally by an ‘Internet of Behavior’ to influence benefit and service eligibility for 40 percent of people worldwide.”
  10. “By 2024, the World Health Organization will identify online shopping as an addictive disorder, as millions abuse digital commerce and encounter financial stress.”

Facial recognition, location tracking and big data will allow organizations to monitor individual behavior and link that behavior to other digital actions, Gartner said, noting that “The Internet of Things (IoT) – where physical things are directed to do a certain thing based on a set of observed operating parameters relative to a desired set of operating parameters — is now being extended to people, known as the Internet of Behavior (IoB).”
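The compare-and-correct loop Gartner describes for the IoT — observed operating parameters measured against a desired set, with the device acting to close the gap — can be sketched in a few lines of Python. The thermostat below is purely illustrative (the names, setpoint, and tolerance are my own, not from the article); the unsettling part of the IoB is that this same loop gets pointed at people's behavior instead of a furnace.

```python
# A minimal sketch of an IoT-style control loop: compare an observed
# operating parameter against a desired one, and act to close the gap.
# All values and names here are illustrative assumptions.

DESIRED_TEMP_C = 21.0   # desired operating parameter (the setpoint)
TOLERANCE_C = 0.5       # acceptable deviation before acting

def control_step(observed_temp_c: float) -> str:
    """Return the corrective action for one observed reading."""
    error = observed_temp_c - DESIRED_TEMP_C
    if error > TOLERANCE_C:
        return "cool"    # too warm relative to the desired parameter
    if error < -TOLERANCE_C:
        return "heat"    # too cool relative to the desired parameter
    return "idle"        # within tolerance: no correction needed

if __name__ == "__main__":
    for reading in (18.2, 20.8, 23.5):
        print(reading, "->", control_step(reading))
```

Swap the temperature sensor for behavioral signals (location, purchases, facial expressions) and the "corrective action" for changes to someone's benefits or eligibility, and you have the Internet of Behavior in miniature.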

 

From DSC:
That last quote about the “Internet of Behavior (IoB)” should disturb us. I don’t want that kind of world for the next generation. 

 

From DSC:
I’ll say it again: just because we can doesn’t mean we should.

From the article below, we can see another unintended consequence developing across society’s landscape. I really wish the 20- and 30-somethings being hired by the big tech companies — especially at Amazon, Facebook, Google, Apple, and Microsoft — who are developing these things would ask themselves:

  • “Just because we can develop this system/software/application/etc., SHOULD we be developing it?”
  • What might the negative consequences be? 
  • Do the positive contributions outweigh the negative impacts…or not?

To college professors and teachers:
Please pass these thoughts on to your students now, so that these internal questions and conversations begin to take place in K-16.


Report: Colleges Must Teach ‘Algorithm Literacy’ to Help Students Navigate Internet — from edsurge.com by Rebecca Koenig

Excerpt (emphasis DSC):

If the Ancient Mariner were sailing on the internet’s open seas, he might conclude there’s information everywhere, but nary a drop to drink.

That’s how many college students feel, anyway. A new report published this week about undergraduates’ impressions of internet algorithms reveals students are skeptical of and unnerved by tools that track their digital travels and serve them personalized content like advertisements and social media posts.

And some students feel like they’ve largely been left to navigate the internet’s murky waters alone, without adequate guidance from teachers and professors.

Researchers set out to learn “how aware students are about their information being manipulated, gathered and interacted with,” said Alison Head, founder and director of Project Information Literacy, in an interview with EdSurge. “Where does that awareness drop off?”

They found that many students not only have personal concerns about how algorithms compromise their own data privacy but also recognize the broader, possibly negative implications of tools that segment and customize search results and news feeds.

 

The Future of Lawyers: Legal Tech, AI, Big Data And Online Courts — from forbes.com by Bernard Marr

Excerpts:

In his brand new book Online Courts and the Future of Justice, Richard argues that technology is going to bring about a fascinating decade of change in the legal sector and transform our court system. Although automating our old ways of working plays a part in this, even more critical is that artificial intelligence and technology will help give more individuals access to justice.

The first generation is the idea that people who use the court system submit evidence and arguments to the judge online or through some form of electronic communication.

The second generation of using technology to transform the legal system would be what Richard calls “outcome thinking”: using technology to help resolve disputes without requiring lawyers or the traditional court system.

Some of the biggest obstacles to an online court system are the political will to bring about such a transformation, the support of judges and lawyers, funding, and the method we’d apply. For example, decisions will need to be made about whether the online system would be used only for certain cases or situations.

Ultimately, we have a grave access-to-justice problem. Technology can help improve our outcomes and give people a way to resolve public disputes in ways that previously weren’t possible. While this transformation might not solve all the struggles with the legal system or the access-to-justice issue, it can offer a dramatic improvement.

 

FuturePod gathers voices from the international field of Futures and Foresight. Through a series of podcast interviews, the founders of the field and emerging leaders share their stories, tools and experiences. The Futures and Foresight community comprises a remarkable and diverse group of individuals who span academic, commercial and social interests.

 

19 striking findings from 2019 — from pewresearch.org by John Gramlich

Excerpt:

Every year, Pew Research Center publishes hundreds of reports, blog posts, digital essays and other studies on a wide range of topics, from the demographic and political changes that are reshaping the United States to the attitudes and experiences of people in dozens of other countries. At the end of each year, we compile a list of some of our most noteworthy findings. Here are 19 striking findings from our research this year…


A majority of Americans do not think it is possible to go about daily life without corporate and government entities collecting information about them. Americans widely believe at least some of their online and offline activities are being tracked and monitored by companies and the government. It is such a common condition of modern life, in fact, that roughly six-in-ten U.S. adults say they don’t think it is possible to go through daily life without having data collected about them by companies (62% say this) or the government (63%).


 

Greta Thunberg is the youngest TIME Person of the Year ever. Here’s how she made history — from time.com

Excerpt:

The politics of climate action are as entrenched and complex as the phenomenon itself, and Thunberg has no magic solution. But she has succeeded in creating a global attitudinal shift, transforming millions of vague, middle-of-the-night anxieties into a worldwide movement calling for urgent change. She has offered a moral clarion call to those who are willing to act, and hurled shame on those who are not. She has persuaded leaders, from mayors to Presidents, to make commitments where they had previously fumbled: after she spoke to Parliament and demonstrated with the British environmental group Extinction Rebellion, the U.K. passed a law requiring that the country eliminate its carbon footprint. She has focused the world’s attention on environmental injustices that young indigenous activists have been protesting for years. Because of her, hundreds of thousands of teenage “Gretas,” from Lebanon to Liberia, have skipped school to lead their peers in climate strikes around the world.

 

Young people! You CAN and will make a big impact/difference!

 

Why AI is a threat to democracy – and what we can do to stop it — from asumetech.com by Lawrence Cole

Excerpts:

In the US, however, we also have a tragic lack of foresight. Instead of creating a grand strategy for AI or for our long-term futures, the federal government has cut funding for scientific and technical research. The money must therefore come from the private sector. But investors also expect a certain return. That is a problem. You cannot schedule R&D breakthroughs when working on fundamental technology and research. It would be great if the big tech companies had the luxury of working on hard problems without having to organize an annual conference to show off their newest and best whiz-bang thing. Instead, we now have countless examples of bad decisions made by someone in the G-MAFIA, probably because they worked quickly. We are beginning to see the negative effects of the tension between doing research that is in the interest of humanity and making investors happy.

The problem is that our technology has become increasingly sophisticated, but our thinking about what free speech is and what a free market economy looks like has not become that advanced. We tend to resort to very basic interpretations: free speech means that all speech is free, unless it conflicts with defamation laws, and that’s the end of the story. That is not the end of the story. We need to start a more sophisticated and intelligent conversation about our current laws, our emerging technology, and how we can make the two meet halfway.

 

So I absolutely believe that there is a way forward. But we have to come together and bridge the gap between Silicon Valley and DC, so that we can all steer the boat in the same direction.

— Amy Webb, futurist, NYU professor, founder of the Future Today Institute

 

Also see:

“FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.”

The film has five distinct messages about:

1. China’s AI Plan
2. The Promise of AI
3. The Future of Work
4. Surveillance Capitalism
5. The Surveillance State

 

Everyday Media Literacy — from routledge.com by Sue Ellen Christian
An Analog Guide for Your Digital Life, 1st Edition

Description:

In this graphic guide to media literacy, award-winning educator Sue Ellen Christian offers students an accessible, informed and lively look at how they can consume and create media intentionally and critically.

The straight-talking textbook offers timely examples and relevant activities to equip students with the skills and knowledge they need to assess all media, including news and information. Through discussion prompts, writing exercises, key terms, online links and even origami, readers are provided with a framework from which to critically consume and create media in their everyday lives. Chapters examine news literacy, online activism, digital inequality, privacy, social media and identity, global media corporations and beyond, giving readers a nuanced understanding of the key concepts and concerns at the core of media literacy.

Concise, creative and curated, this book highlights the cultural, political and economic dynamics of media in our contemporary society, and how consumers can mindfully navigate their daily media use. Everyday Media Literacy is perfect for students (and educators) of media literacy, journalism, education and media effects looking to build their understanding in an engaging way.

 

Are smart cities the pathway to blockchain and cryptocurrency adoption? — from forbes.com by Chrissa McFarlane

Excerpts:

At the recent Blockchain LIVE 2019 hosted annually in London, I had the pleasure of giving a talk on Next Generation Infrastructure: Building a Future for Smart Cities. What exactly is a “smart city?” The term refers to an overall blueprint for city designs of the future. Already half the world’s population lives in a city, which is expected to grow to sixty-five percent in the next five years. Tackling that growth takes more than just simple urban planning. The goal of smart cities is to incorporate technology as an infrastructure to alleviate many of these complexities. Green energy, forms of transportation, water and pollution management, universal identification (ID), wireless Internet systems, and promotion of local commerce are examples of current smart city initiatives.

What’s most important to a smart city, however, is integration. None of the services mentioned above exist in a vacuum; they need to be put into a single system. Blockchain provides the technology to unite them into a single system that can track all aspects combined.

 

From DSC:
The above article includes many examples of efforts to create smart cities throughout the globe. Also see the article below.

 

There are major issues with AI. This article shows how far behind the legal realm is in wrestling with emerging technologies.

What happens when employers can read your facial expressions? — from nytimes.com by Evan Selinger and Woodrow Hartzog
The benefits do not come close to outweighing the risks.

Excerpts:

The essential and unavoidable risks of deploying these tools are becoming apparent. A majority of Americans have functionally been put in a perpetual police lineup simply for getting a driver’s license: Their D.M.V. images are turned into faceprints for government tracking with few limits. Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing. Detroit aspires to use facial recognition for round-the-clock monitoring. Americans are losing due-process protections, and even law-abiding citizens cannot confidently engage in free association, free movement and free speech without fear of being tracked.

“Notice and choice” has been an abysmal failure. Social media companies, airlines and retailers overhype the short-term benefits of facial recognition while using unreadable privacy policies and vague disclaimers that make it hard to understand how the technology endangers users’ privacy and freedom.

 

From DSC:
This article illustrates how far behind the legal realm in the United States is when it comes to wrestling with emerging technologies. This relatively new *exponential* pace of change is very difficult for many of our institutions to handle (higher education and the legal realm come to mind here).

 

 

Three threats posed by deepfakes that technology won’t solve — from technologyreview.com by Angela Chen
As deepfakes get better, companies are rushing to develop technology to detect them. But little of their potential harm will be fixed without social and legal solutions.

Excerpt:

3) Problem: Deepfake detection is too late to help victims
With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

 

The blinding of justice: Technology, journalism and the law — from thehill.com by Kristian Hammond and Daniel Rodriguez

Excerpts:

The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies and it is a perilous place for the profession to be.

If the needs of the law guide the ways in which the new technologies are put into use they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies but a legal system that is significantly less just.

We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write and they are entering the world of legal reasoning and decision making.

Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.

We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.

 

From DSC:
This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.

 

Google’s war on deepfakes: As election looms, it shares ton of AI-faked videos — from zdnet.com by Liam Tung
Google has created 3,000 videos using actors and manipulation software to help improve detection.

Excerpt:

Google has released a huge database of deepfake videos that it’s created using paid actors. It hopes the database will bolster systems designed to detect AI-generated fake videos.

With the 2020 US Presidential elections looming, the race is on to build better systems to detect deepfake videos that could be used to manipulate and divide public opinion.

Earlier this month, Facebook and Microsoft announced a $10m project to create deepfake videos to help build systems for detecting them.

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Technology as Part of the Culture for Legal Professionals A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be up to speed as much as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse-check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now, there aren’t many options for people to pursue. If the lawyers, and the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 


© 2019 | Daniel Christian