Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Artificial Intelligence in Higher Education: Applications, Promise and Perils, and Ethical Questions — from er.educause.edu by Elana Zeide
What are the benefits and challenges of using artificial intelligence to promote student success, improve retention, streamline enrollment, and better manage resources in higher education?

Excerpt:

The promise of AI applications lies partly in their efficiency and partly in their efficacy. AI systems can capture a much wider array of data, at more granularity, than can humans. And these systems can do so in real time. They can also analyze many, many students—whether those students are in a classroom or in a student body or in a pool of applicants. In addition, AI systems offer excellent observations and inferences very quickly and at minimal cost. These efficiencies will lead, we hope, to increased efficacy—to more effective teaching, learning, institutional decisions, and guidance. So this is one promise of AI: that it will show us things we can’t assess or even envision given the limitations of human cognition and the difficulty of dealing with many different variables and a wide array of students.

A second peril in the use of artificial intelligence in higher education consists of the various legal considerations, mostly involving different bodies of privacy and data-protection law. Federal student-privacy legislation is focused on ensuring that institutions (1) get consent to disclose personally identifiable information and (2) give students the ability to access their information and challenge what they think is incorrect.7 The first is not much of an issue if institutions are not sharing the information with outside parties or if they are sharing through the Family Educational Rights and Privacy Act (FERPA), which means an institution does not have to get explicit consent from students. The second requirement—providing students with access to the information that is being used about them—is going to be an increasingly interesting issue.8 I believe that as the decisions being made by artificial intelligence become much more significant and as students become more aware of what is happening, colleges and universities will be pressured to show students this information. People are starting to want to know how algorithmic and AI decisions are impacting their lives.

My short advice about legal considerations? Talk to your lawyers. The circumstances vary considerably from institution to institution.

 

Is virtual reality the future of online learning? — from builtin.com by Stephen Gossett; with thanks to Dane Lancaster for his tweet on this (see below)
Education is driving the future of VR more than any other industry outside of gaming. Here’s why virtual reality gets such high marks for tutoring, STEM development, field trips and distance education.

 

 

 


 


Technology as Part of the Culture for Legal Professionals: A Q&A with Daniel Christian — from campustechnology.com by Mary Grush and Daniel Christian

Excerpt (emphasis DSC):

Mary Grush: Why should new technologies be part of a legal education?

Daniel Christian: I think it’s a critical point because our society, at least in the United States — and many other countries as well — is being faced with a dramatic influx of emerging technologies. Whether we are talking about artificial intelligence, blockchain, Bitcoin, chatbots, facial recognition, natural language processing, big data, the Internet of Things, advanced robotics — any of dozens of new technologies — this is the environment that we are increasingly living in, and being impacted by, day to day.

It is so important for our nation that legal professionals — lawyers, judges, attorneys general, state representatives, and legislators among them — be up to speed as much as possible on the technologies that surround us: What are the issues their clients and constituents face? It’s important that legal professionals regularly pulse-check the relevant landscapes to be sure that they are aware of the technologies that are coming down the pike. To help facilitate this habit, technology should be part of the culture for those who choose a career in law. (And what better time to help people start to build that habit than within the law schools of our nation?)

 

There is a real need for the legal realm to catch up with some of these emerging technologies, because right now, there aren’t many options for people to pursue. If the lawyers, and the legislators, and the judges don’t get up to speed, the “wild wests” out there will continue until they do.

 


 

An artificial-intelligence first: Voice-mimicking software reportedly used in a major theft — from washingtonpost.com by Drew Harwell

Excerpt:

Thieves used voice-mimicking software to imitate a company executive’s speech and dupe his subordinate into sending hundreds of thousands of dollars to a secret account, the company’s insurer said, in a remarkable case that some researchers are calling one of the world’s first publicly reported artificial-intelligence heists.

The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company.

 

From DSC:
Needless to say, this is very scary stuff! Now what…? Who in our society should get involved to thwart this kind of thing?

  • Programmers?
  • Digital audio specialists?
  • Legislators?
  • Lawyers?
  • The FBI?
  • Police?
  • Other?


Addendum on 9/12/19:

 

40+ Emerging IoT Technologies you should have on your radar — from iot-analytics.com by Knud Lasse Lueth

Excerpt:

As part of the “State of the IoT – Summer 2019 Update”, the analyst team at IoT Analytics handpicked 43 of the most promising technologies that are relevant to IoT projects around the globe. The team ranked the IoT technologies according to their perceived maturity (based on expert interviews, vendor briefings, secondary research, and conference attendances).

 

 

The Age of AI: How Will In-house Law Departments Run in 10 Years? — from accdocket.com by Elizabeth Colombo

Excerpt:

2029 may feel far away right now, but all of this makes me wonder what in-house law might look like in 10 years. What will in-house law be like in an age of artificial intelligence (AI)? This article will look at how in-house law may be different in 10 years, focusing largely on anticipated changes to contract review and negotiation, and the workplace.

 

Also see:
A Primer on Using Artificial Intelligence in the Legal Profession — from jolt.law.harvard.edu by Lauri Donahue (2018)

Excerpt (emphasis DSC):

How Are Lawyers Using AI?
Lawyers are already using AI to do things like reviewing documents during litigation and due diligence, analyzing contracts to determine whether they meet pre-determined criteria, performing legal research, and predicting case outcomes.


Document Review

Analyzing Contracts

Legal Research

Predicting Results
Lawyers are often called upon to predict the future: If I bring this case, how likely is it that I’ll win — and how much will it cost me? Should I settle this case (or take a plea), or take my chances at trial? More experienced lawyers are often better at making accurate predictions, because they have more years of data to work with.

However, no lawyer has complete knowledge of all the relevant data.

Because AI can access more of the relevant data, it can be better than lawyers at predicting the outcomes of legal disputes and proceedings, and thus helping clients make decisions. For example, a London law firm used data on the outcomes of 600 cases over 12 months to create a model for the viability of personal injury cases. Indeed, trained on 200 years of Supreme Court records, an AI is already better than many human experts at predicting SCOTUS decisions.
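The mechanics behind that kind of outcome prediction can be sketched, in a very simplified form, as a classification model fit on features of past cases. The sketch below uses a tiny hand-rolled logistic regression; the feature names, historical data, and weights are all invented for illustration — real systems like the ones described train on far larger datasets with far richer features.

```python
import math
from dataclasses import dataclass

@dataclass
class Case:
    injury_severity: float  # hypothetical feature, scaled 0..1
    clear_liability: float  # hypothetical feature, scaled 0..1
    won: bool               # historical outcome

def fit_weights(cases, lr=0.5, epochs=2000):
    """Fit a tiny logistic-regression model with plain gradient descent."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for c in cases:
            z = w0 * c.injury_severity + w1 * c.clear_liability + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted win probability
            err = p - (1.0 if c.won else 0.0)   # gradient of the log-loss
            w0 -= lr * err * c.injury_severity
            w1 -= lr * err * c.clear_liability
            b -= lr * err
    return w0, w1, b

def win_probability(w0, w1, b, severity, liability):
    """Score a new case against the fitted weights."""
    z = w0 * severity + w1 * liability + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented historical outcomes, purely for illustration
history = [
    Case(0.9, 0.8, True), Case(0.8, 0.9, True), Case(0.7, 0.7, True),
    Case(0.2, 0.1, False), Case(0.3, 0.2, False), Case(0.1, 0.3, False),
]
w0, w1, b = fit_weights(history)
```

Once fitted, `win_probability(w0, w1, b, 0.85, 0.85)` scores a strong new case above 0.5 and a weak one below it — the same shape of question ("if I bring this case, how likely is it that I'll win?") that the article describes, just at toy scale.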

 

 

 


 

 


From DSC:
This type of thing makes me wonder about the future of the legal profession as well. For example, here’s a relevant quote from The Uberization of Legal Technology by Felix Shipkevich:

In an age when there’s an app for everything, whether it’s to book air travel, rent a car, sell products or start a business, there wasn’t an app that could simply and easily connect you with legal counsel. Giving consumers a tool to book free consultations is the future of law, and the heart of attorney business development. 

Consumers have historically had little access to attorneys for a variety of reasons. First, unlike for doctors and mechanics, there is no annual legal checkup (though perhaps there should be). Consumers may be intimidated by not knowing costs upfront or even knowing if they have a case worth discussing. Assuming that every American will have at least three legal questions annually, there’s an untapped market of over a billion potential legal inquiries every year.
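As a quick sanity check on that last figure (the population number below is an approximation on my part, not from the article):

```python
# Rough arithmetic behind the "untapped market" claim:
# every American asking at least three legal questions annually.
us_population = 330_000_000        # approximate U.S. population (assumed)
questions_per_person = 3           # the article's "at least three" floor
annual_inquiries = us_population * questions_per_person
# 990 million at the floor — on the order of a billion inquiries,
# and "over a billion" once anyone asks a fourth question.
```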


 

And by the way, given that legal-related matters aren’t taught much in K-16 education, that’s an interesting idea:

First, unlike for doctors and mechanics, there is no annual legal checkup (though perhaps there should be).

 


 

 

What to expect at IFA 2019, Europe’s colossal tech show — from digitaltrends.com by Josh Levenson

Excerpt:

This week, the world’s leading manufacturers will take to the stage at IFA 2019 in Berlin, Germany, to showcase their latest innovations. Here’s what you need to know about this year’s show, including when it’s set to start, how long it will run for, where it’s held, the schedule, and all the devices we’re expecting to see unveiled by the likes of LG, Sony, Samsung, and more.

 

Samsung’s take on the world of 2069 — from newatlas.com by David Szondy

Excerpt:

Samsung is looking forward to what life might be like in the year 2069. The new report, called Samsung KX50: The Future in Focus, draws on the opinions of six of Britain’s leading academics and futurists to look at a range of new technologies that will affect people’s everyday lives.

 

 

 
 

Why GCs Aren’t Buying What Legal Tech Is Selling and Why It Matters for Firms — from law.com by Zach Warren and Gina Passarella Cipriani
Legal technology companies have to get out of their own way in vying for law department adoption, and buyers need to know what they want.

Excerpt:

The legal technology industry has some significant hurdles to overcome in its increased push to sell into legal departments, general counsel say. And GCs admit that they are part of the problem.

On the one hand, technology companies aren’t doing themselves any favors by flooding the market with, at times, dozens of the same offerings, few of which solve specific problems the in-house community has, GCs say. But at the same time, general counsel admit to being distracted, budget-constrained and often unfamiliar with the capabilities of the products they are being pitched.

“It’s overwhelming,” says HUB International chief legal officer John Albright. “There are hundreds of these vendors, and most of them you’ve never heard of.”

As Albright sees it, the legal technology industry is “heavily fragmented,” with vendors selling solutions to a discrete issue that doesn’t necessarily solve the full problem he has or fit into the larger organization’s information systems.

 

Also see:

  • Artificial Intelligence Further Exacerbates Inequality In Discrimination Lawsuits — from forbes.com by Patricia Barnes
    Excerpt:
    The legal system just keeps getting more and more unequal for American workers who are victims of employment discrimination, wage and hour theft, etc. The newest development is that America’s top employers and the law firms that represent them are using artificial intelligence (AI) tools to automate their responses to workers’ legal claims, thereby increasing efficiency while cutting costs.
 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

Amazon, Microsoft, ‘putting world at risk of killer AI’: study — from news.yahoo.com by Issam Ahmed

Excerpt:

Washington (AFP) – Amazon, Microsoft and Intel are among leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players from the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and if they had committed to abstaining from contributing in the future.

“Why are companies like Microsoft and Amazon not denying that they’re currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?” said Frank Slijper, lead author of the report published this week.

Addendum on 8/23/19:

 

Autonomous robot deliveries are coming to 100 university campuses in the U.S. — from digitaltrends.com by Luke Dormehl

Excerpt:

Pioneering autonomous delivery robot company Starship Technologies is coming to a whole lot more university campuses around the U.S. The robotics startup announced that it will expand its delivery services to 100 university campuses in the next 24 months, building on its successful fleets at George Mason University and Northern Arizona University.

 

Postmates Gets Go-Ahead to Test Delivery Robot in San Francisco — from interestingengineering.com by Donna Fuscaldo
Postmates was granted permission to test a delivery robot in San Francisco.

 

And add those to the ones previously posted on Learning Ecosystems:

 

From DSC:
I’m grateful for John Muir and for the presidents of the United States who had the vision to set aside land for the national park system. Such parks are precious and provide much needed respite from the hectic pace of everyday life.

Closer to home, I’m grateful for what my parents’ vision was for a place to help bring the families together through the years. A place that’s peaceful, quiet, surrounded by nature and community.

So I wonder what kind of legacy the current generations are beginning to create. That is…do we really want to be known as the generations who created the unchecked, chaotic armies of delivery drones, delivery robots, driverless pods, etc. that fill the skies, streets, sidewalks, and more?

I don’t. That’s not a gift to our kids or grandkids…not at all.

 

 
© 2024 | Daniel Christian