Why emerging technology needs to retain a human element — from forbes.com by Samantha Radocchia
Technology opens up new, unforeseen issues. And humans are necessary for solving the problems automated services can’t.

Excerpt (emphasis DSC):

With technological advancements comes change. Rather than avoiding new technology for as long as possible, and then accepting the inevitable, people need to be actively thinking about how it will change us as individuals and as a society.

Take your phone, for instance. The social media, gaming and news apps are built to keep you addicted so companies can collect data on you. They’re designed to be used constantly, so you come back for more the instant you feel the slightest twinge of boredom.

And yet, other apps—sometimes the same ones I just mentioned—allow you to instantly communicate with people around the world. Loved ones, colleagues, old friends—they’re all within reach now.

Make any technology decisions carefully, because their impact down the road may be tremendous.

This is part of the reason why there’s been a push lately for ethics to be a required part of any computer science or vocational training program. And it makes sense. If people want to create ethical systems, there’s a need to remember that actual humans are behind them. People make bad choices sometimes. They make mistakes. They aren’t perfect.

 

To ignore the human element in tech is to miss the larger point: Technology should be about empowering people to live their best lives, not making them fearful of the future.

 

 

 

 

About Law2020: The Podcast
Last month we launched the Law2020 podcast, an audio companion to Law2020, our four-part series of articles about how artificial intelligence and similar emerging technologies are reshaping the practice and profession of law. The podcast episodes and featured guests are as follows:

  1. Access to Justice — Daniel Linna, Professor of Law in Residence and the Director of LegalRnD – The Center for Legal Services Innovation at Michigan State University College of Law.
  2. Legal Ethics — Megan Zavieh, ethics and state bar defense lawyer.
  3. Legal Research — Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How To Find Out Anything and The Internet Guide for the Legal Researcher.
  4. Legal Analytics — Andy Martens, SVP & Global Head Legal Product and Editorial at Thomson Reuters.

The podcasts are short and lively, and we hope you’ll give them a listen. And if you haven’t done so already, we invite you to read the full feature stories over at the Law2020 website. Enjoy!

Listen to Law2020 Podcast

 

Activists urge killer robot ban ‘before it is too late’ — from techxplore.com by Nina Larson

Excerpt:

Countries should quickly agree a treaty banning the use of so-called killer robots “before it is too late”, activists said Monday as talks on the issue resumed at the UN.

They say time is running out before weapons are deployed that use lethal force without a human making the final kill-order and have criticised the UN body hosting the talks—the Convention on Certain Conventional Weapons (CCW)—for moving too slowly.

“Killer robots are no longer the stuff of science fiction,” Rasha Abdul Rahim, Amnesty International’s advisor on artificial intelligence and human rights, said in a statement.

“From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law,” she said.

 


 

From DSC:
I’ve often considered how far out front many technologies are in our world today. It takes the rest of society some time to catch up with emerging technologies and ask whether we should be implementing technology A, B, or C. Just because we can doesn’t mean we should. A worn-out statement, perhaps, but given the exponential pace of technological change, one that is highly relevant to our world today.

 

 



Addendum on 9/8/18:



 

 

Smart Machines & Human Expertise: Challenges for Higher Education — from er.educause.edu by Diana Oblinger

Excerpts:

What does this mean for higher education? One answer is that AI, robotics, and analytics become disciplines in themselves. They are emerging as majors, minors, areas of emphasis, certificate programs, and courses in many colleges and universities. But smart machines will catalyze even bigger changes in higher education. Consider the implications in three areas: data; the new division of labor; and ethics.

 

Colleges and universities are challenged to move beyond the use of technology to deliver education. Higher education leaders must consider how AI, big data, analytics, robotics, and wide-scale collaboration might change the substance of education.

 

Higher education leaders should ask questions such as the following about data:

  • What place does data have in our courses?
  • Do students have the appropriate mix of mathematics, statistics, and coding to understand how data is manipulated and how algorithms work?
  • Should students be required to become “data literate” (i.e., able to effectively use and critically evaluate data and its sources)?

Higher education leaders should ask questions such as the following about the new division of labor:

  • How might problem-solving and discovery change with AI?
  • How do we optimize the division of labor and best allocate tasks between humans and machines?
  • What role do collaborative platforms and collective intelligence have in how we develop and deploy expertise?


Higher education leaders should ask questions such as the following about ethics:

  • Even though something is possible, does that mean it is morally responsible?
  • How do we achieve a balance between technological possibilities and policies that enable—or stifle—their use?
  • An algorithm may represent a “trade secret,” but it might also reinforce dangerous assumptions or result in unconscious bias. What kind of transparency should we strive for in the use of algorithms?

 

 

 

It’s time to address artificial intelligence’s ethical problems — from wired.co.uk by Abigail Beall
AI is already helping us diagnose cancer and understand climate change, but regulation and oversight are needed to stop the new technology being abused

Excerpt:

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016, without anyone knowing how it made its driving decisions.

 

 

‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.

 

 

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article will set out to determine the answers to the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?

 

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”
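The 5 percent figure is simple arithmetic over the size of Congress — the ACLU ran photos of all 535 sitting members through the system:

```python
# ACLU test of Amazon Rekognition against a mug-shot database.
members_of_congress = 535   # all sitting senators and representatives
false_matches = 28          # members wrongly matched to arrest photos

error_rate = false_matches / members_of_congress
print(f"False-match rate: {error_rate:.1%}")  # 5.2%, i.e. roughly 5 percent
```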

 

Back from January:

 

 

 

Responsibility & AI: ‘We all have a role when it comes to shaping the future’ — from re-work.co by Fiona McEvoy

Excerpt:

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?

All the time we are creating new dependencies, and placing increasing amounts of faith in the engineers, programmers and designers responsible for these systems and platforms.

For reasons that are somewhat understandable, at present much of this tech ethics talk happens behind closed doors, and typically only engages a handful of industry and academic voices. Currently, these elite figures are the only participants in a dialogue that will determine all of our futures. At least in part, I started YouTheData.com because I wanted to bring “ivory tower” discussions down to the level of the engaged consumer, and be part of efforts to democratize this particular consultation process. As a former campaigner, I place a lot of value in public awareness and scrutiny.

To be clear, the message I wish to convey is not a criticism of the worthy academic and advisory work being done in this field (indeed, I have some small hand in this myself). It’s about acknowledging that engineers, technologists – and now ethicists, philosophers and others – still ultimately need public assent and a level of consumer “buy in” that is only really possible when complex ideas are made more accessible.

 

 

Digital Surgery’s AI platform guides surgical teams through complex procedures — from venturebeat.com by Kyle Wiggers

Excerpt:

Digital Surgery, a health tech startup based in London, today launched what it’s calling the world’s first dynamic artificial intelligence (AI) system designed for the operating room. The reference tool helps support surgical teams through complex medical procedures — cofounder and former plastic surgeon Jean Nehme described it as a “Google Maps” for surgery.

“What we’ve done is applied artificial intelligence … to procedures … created with surgeons globally,” he told VentureBeat in a phone interview. “We’re leveraging data with machine learning to build a [predictive] system.”

 

 

Why business leaders need to embrace artificial intelligence — from thriveglobal.com by Howard Yu
How companies should work with AI—not against it.

 

 

 

 


Schools can now get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which Glaser’s kids attend, is testing the tool and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.

 

From DSC:
Personally, I’m very uncomfortable with where facial recognition is headed in some societies. What starts off being sold as helpful for this or that application can quickly be abused and used by governments to control their citizens. For example, look at what’s already happening in China these days!

The above article talks about these technologies being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counterargument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone unbalanced enough to carry out a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, having their face plastered all over the national news might even add fuel to the fire.

 

 

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”

 

 

 

Per this week’s Next e-newsletter from edsurge.com

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. cards to swipe in, according to EdScoop, but the face of every person who enters a dorm is scanned and run through a database, which alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.
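The underlying mechanics of such a gate-scanner are worth understanding when weighing those pros and cons. A typical pipeline reduces each face to a numeric embedding and compares it against a gallery of enrolled faces. Everything below — the cosine-similarity test, the 0.6 threshold, the toy three-dimensional embeddings — is an illustrative sketch, not any vendor’s actual implementation:

```python
import numpy as np

def is_known_face(probe, gallery, threshold=0.6):
    """Return True if the probe embedding matches any enrolled embedding.

    probe: 1-D embedding of the face at the door.
    gallery: 2-D array, one enrolled embedding per row.
    threshold: minimum cosine similarity to count as a match (illustrative).
    """
    # Normalize rows so dot products become cosine similarities.
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = gallery @ probe
    return bool(similarities.max() >= threshold)

# Toy example: two enrolled students; the probe face is close to the first.
gallery = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
probe = np.array([0.9, 0.1, 0.0])
print(is_known_face(probe, gallery))  # True: close to the first row
```

The threshold is the crux: set it low and strangers get through; set it high and legitimate students are rejected at the door — and, as the misidentification findings above suggest, those errors do not fall evenly across demographic groups.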

 

 

Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

 

 

 

 

Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com by Paul Mozur

Excerpts:

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

 

In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.

Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

 

 

A very slippery slope has now been set up in China with facial recognition infrastructures

 

From DSC:
A veeeeery slippery slope here. The use of this technology starts out with looking for criminals — but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution of people seen coming out of a certain place of worship?

Very troubling stuff here….

 

 

 

State of AI — from stateof.ai

Excerpt:

In this report, we set out to capture a snapshot of the exponential progress in AI with a focus on developments in the past 12 months. Consider this report as a compilation of the most interesting things we’ve seen that seeks to trigger informed conversation about the state of AI and its implication for the future.

We consider the following key dimensions in our report:

  • Research: Technology breakthroughs and their capabilities.
  • Talent: Supply, demand and concentration of talent working in the field.
  • Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
  • Politics: Public opinion of AI, economic implications and the emerging geopolitics of AI.

 

definitions of terms involved in AI


 

hard to say how AI is impacting jobs yet -- but here are 2 perspectives

 

 

There’s nothing artificial about how AI is changing the workplace — from forbes.com by Eric Yuan

Excerpt:

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.

 

 

 

AI can now ‘listen’ to machines to tell if they’re breaking down — by Rebecca Campbell

Excerpt:

Sound is everywhere, even when you can’t hear it.

It is this noiseless sound, though, that says a lot about how machines function.

Helsinki-based Noiseless Acoustics and Amsterdam-based OneWatt are relying on artificial intelligence (AI) to better understand the sound patterns of troubled machines. Through AI they are enabling faster and easier problem detection.

 

Making sound visible even when it can’t be heard. With the aid of non-invasive sensors, machine learning algorithms, and predictive maintenance solutions, failing components can be recognized at an early stage before they become a major issue.
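Neither company has published its methods, but the core idea — learn what a healthy machine sounds like, then flag recordings whose frequency content drifts too far from that baseline — can be sketched in a few lines. Everything here (the FFT fingerprint, the 3-sigma rule, the toy signals) is an illustrative assumption, not either firm’s actual algorithm:

```python
import numpy as np

def spectral_features(signal):
    """Magnitude spectrum of a vibration/audio sample (a crude fingerprint)."""
    return np.abs(np.fft.rfft(signal))

def fit_baseline(healthy_signals):
    """Learn the mean and spread of spectra from known-healthy recordings."""
    spectra = np.array([spectral_features(s) for s in healthy_signals])
    return spectra.mean(axis=0), spectra.std(axis=0) + 1e-9

def is_anomalous(signal, mean, std, sigma=3.0):
    """Flag a recording whose spectrum deviates > sigma from the baseline."""
    z = np.abs(spectral_features(signal) - mean) / std
    return bool(z.max() > sigma)

# Toy example: healthy machines hum at one frequency; a failing bearing
# adds a second tone. Small noise gives the baseline a nonzero spread.
t = np.linspace(0, 1, 256, endpoint=False)
rng = np.random.default_rng(0)
healthy = [np.sin(2 * np.pi * 10 * t) + 0.01 * rng.standard_normal(t.size)
           for _ in range(20)]
mean, std = fit_baseline(healthy)

failing = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
print(is_anomalous(failing, mean, std))  # True: the extra 40 Hz tone stands out
```

Real systems layer far more on top — robust features, per-machine models, trend analysis over time — but the detect-deviation-from-a-healthy-baseline core is the same.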

 

 

 

Chinese university uses facial recognition for campus entry — from cr80news.com by Andrew Hudson

Excerpt:

A number of higher education institutions in China have deployed biometric solutions for access and payments in recent months, and adding to the list is Peking University. The university has now installed facial recognition readers at perimeter access gates to control access to its Beijing campus.

As reported by the South China Morning Post, anyone attempting to enter through the southwestern gate of the university will no longer have to provide a student ID card. Starting this month, students will present their faces to a camera as part of a trial run of the system ahead of full-scale deployment.

From DSC:
I’m not sure I like this one at all — and the direction that this is going in. 

 

 

 

Will We Use Big Data to Solve Big Problems? Why Emerging Technology is at a Crossroads — from blog.hubspot.com by Justin Lee

Excerpt:

How can we get smarter about machine learning?
As I said earlier, we’ve reached an important crossroads. Will we use new technologies to improve life for everyone, or to fuel the agendas of powerful people and organizations?

I certainly hope it’s the former. Few of us will run for president or lead a social media empire, but we can all help to move the needle.

Consume information with a critical eye.
Most people won’t stop using Facebook, Google, or social media platforms, so proceed with a healthy dose of skepticism. Remember that the internet can never be objective. Ask questions and come to your own conclusions.

Get your headlines from professional journalists.
Seek credible outlets for news about local, national and world events. I rely on the New York Times and the Wall Street Journal. You can pick your own sources, but don’t trust that the “article” your Aunt Marge just posted on Facebook is legit.

 

 

 

 

Welcome to Law2020: Artificial Intelligence and the Legal Profession — from abovethelaw.com by David Lat and Brian Dalton
What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world?

Excerpt:

Artificial intelligence has been declared “[t]he most important general-purpose technology of our era.” It should come as no surprise to learn that AI is transforming the legal profession, just as it is changing so many other fields of endeavor.

What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world? Will AI automate the work of attorneys — or will it instead augment, helping lawyers to work more efficiently, effectively, and ethically?

 

 

 

 

How artificial intelligence is transforming the world — from brookings.edu by Darrell M. West and John R. Allen

Summary

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents

I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

 

 

Seven Artificial Intelligence Advances Expected This Year  — from forbes.com

Excerpt:

Artificial intelligence (AI) has had a variety of targeted uses in the past several years, including self-driving cars. Recently, California changed the law that required driverless cars to have a safety driver. Now that AI is getting better and able to work more independently, what’s next?

 

 

Google Cofounder Sergey Brin Warns of AI’s Dark Side — from wired.com by Tom Simonite

Excerpt (emphasis DSC):

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

 

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.

 

 

 

 

Europe divided over robot ‘personhood’ — from politico.eu by Janosch Delcker

Excerpt:

BERLIN — Think lawsuits involving humans are tricky? Try taking an intelligent robot to court.

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

 

 

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.

 

 

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly these days, our trust in Google (and other large tech companies) is at stake.

 

 

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

SXSW 2018: Key trends — from jwtintelligence.com by Marie Stafford w/ contributions by Sarah Holbrook

Excerpt:

Ethics & the Big Tech Backlash
What a difference a week makes. As the Cambridge Analytica scandal broke last weekend, the curtain was already coming down on SXSW. Even without this latest bombshell, the discussion around ethics in technology was animated, with more than 10 panels devoted to the theme. From misinformation to surveillance, from algorithmic bias to the perils of artificial intelligence (hi Elon!), speakers grappled with the weighty issue of how to ensure technology works for the good of humanity.

The Human Connection
When technology provokes this much concern, it’s perhaps natural that people should seek respite in human qualities like empathy, understanding and emotional connection.

In a standout keynote, couples therapist Esther Perel gently berated the SXSW audience for neglecting to focus on human relationships. “The quality of your relationships,” she said, “is what determines the quality of your life.”

China’s New Frontiers in Dystopian Tech — from theatlantic.com by Rene Chun
Facial-recognition technologies are proliferating, from airports to bathrooms.

Excerpt:

China is rife with face-scanning technology worthy of Black Mirror. Don’t even think about jaywalking in Jinan, the capital of Shandong province. Last year, traffic-management authorities there started using facial recognition to crack down. When a camera mounted above one of 50 of the city’s busiest intersections detects a jaywalker, it snaps several photos and records a video of the violation. The photos appear on an overhead screen so the offender can see that he or she has been busted, then are cross-checked with the images in a regional police database. Within 20 minutes, snippets of the perp’s ID number and home address are displayed on the crosswalk screen. The offender can choose among three options: a 20-yuan fine (about $3), a half-hour course in traffic rules, or 20 minutes spent assisting police in controlling traffic. Police have also been known to post names and photos of jaywalkers on social media.

The technology’s veneer of convenience conceals a dark truth: Quietly and very rapidly, facial recognition has enabled China to become the world’s most advanced surveillance state. A hugely ambitious new government program called the “social credit system” aims to compile unprecedented data sets, including everything from bank-account numbers to court records to internet-search histories, for all Chinese citizens. Based on this information, each person could be assigned a numerical score, to which points might be added for good behavior like winning a community award, and deducted for bad actions like failure to pay a traffic fine. The goal of the program, as stated in government documents, is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”

© 2024 | Daniel Christian