Luke 10:25-37 New International Version (NIV) — from biblegateway.com
The Parable of the Good Samaritan

25 On one occasion an expert in the law stood up to test Jesus. “Teacher,” he asked, “what must I do to inherit eternal life?”

26 “What is written in the Law?” he replied. “How do you read it?”

27 He answered, “‘Love the Lord your God with all your heart and with all your soul and with all your strength and with all your mind’; and, ‘Love your neighbor as yourself.’”

28 “You have answered correctly,” Jesus replied. “Do this and you will live.”

29 But he wanted to justify himself, so he asked Jesus, “And who is my neighbor?”

30 In reply Jesus said: “A man was going down from Jerusalem to Jericho, when he was attacked by robbers. They stripped him of his clothes, beat him and went away, leaving him half dead. 31 A priest happened to be going down the same road, and when he saw the man, he passed by on the other side. 32 So too, a Levite, when he came to the place and saw him, passed by on the other side. 33 But a Samaritan, as he traveled, came where the man was; and when he saw him, he took pity on him. 34 He went to him and bandaged his wounds, pouring on oil and wine. Then he put the man on his own donkey, brought him to an inn and took care of him. 35 The next day he took out two denarii and gave them to the innkeeper. ‘Look after him,’ he said, ‘and when I return, I will reimburse you for any extra expense you may have.’

36 “Which of these three do you think was a neighbor to the man who fell into the hands of robbers?”

37 The expert in the law replied, “The one who had mercy on him.”

Jesus told him, “Go and do likewise.”

 

From DSC:
The Samaritan had to sacrifice something here — time and money come to mind, but also, as our pastor said the other day, the Samaritan took an enormous risk caring for this wounded man. The Samaritan himself could have been beaten up (or worse) back in that time.

 

 

 

Why emerging technology needs to retain a human element — from forbes.com by Samantha Radocchia
Technology opens up new, unforeseen issues. And humans are necessary for solving the problems automated services can’t.

Excerpt (emphasis DSC):

With technological advancements comes change. Rather than avoiding new technology for as long as possible, and then accepting the inevitable, people need to be actively thinking about how it will change us as individuals and as a society.

Take your phone for instance. The social media, gaming and news apps are built to keep you addicted so companies can collect data on you. They’re designed to be used constantly so you come back for more the instant you feel the slightest twinge of boredom.

And yet, other apps—sometimes the same ones I just mentioned—allow you to instantly communicate with people around the world. Loved ones, colleagues, old friends—they’re all within reach now.

Make any technology decisions carefully, because their impact down the road may be tremendous.

This is part of the reason why there’s been a push lately for ethics to be a required part of any computer science or vocational training program. And it makes sense. If people want to create ethical systems, there’s a need to remember that actual humans are behind them. People make bad choices sometimes. They make mistakes. They aren’t perfect.

 

To ignore the human element in tech is to miss the larger point: Technology should be about empowering people to live their best lives, not making them fearful of the future.

 

 

 

 

About Law2020: The Podcast
Last month we launched the Law2020 podcast, an audio companion to Law2020, our four-part series of articles about how artificial intelligence and similar emerging technologies are reshaping the practice and profession of law. The podcast episodes and featured guests are as follows:

  1. Access to Justice: Daniel Linna, Professor of Law in Residence and the Director of LegalRnD – The Center for Legal Services Innovation at Michigan State University College of Law.
  2. Legal Ethics: Megan Zavieh, ethics and state bar defense lawyer.
  3. Legal Research: Don MacLeod, Manager of Knowledge Management at Debevoise & Plimpton and author of How To Find Out Anything and The Internet Guide for the Legal Researcher.
  4. Legal Analytics: Andy Martens, SVP & Global Head Legal Product and Editorial at Thomson Reuters.

The podcasts are short and lively, and we hope you’ll give them a listen. And if you haven’t done so already, we invite you to read the full feature stories over at the Law2020 website. Enjoy!


 

Activists urge killer robot ban ‘before it is too late’ — from techxplore.com by Nina Larson

Excerpt:

Countries should quickly agree a treaty banning the use of so-called killer robots “before it is too late”, activists said Monday as talks on the issue resumed at the UN.

They say time is running out before weapons are deployed that use lethal force without a human making the final kill-order and have criticised the UN body hosting the talks—the Convention on Certain Conventional Weapons (CCW)—for moving too slowly.

“Killer robots are no longer the stuff of science fiction,” Rasha Abdul Rahim, Amnesty International’s advisor on artificial intelligence and human rights, said in a statement.

“From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law,” she said.

 


 

From DSC:
I’ve often considered how far out in front many technologies are in our world today. It takes the rest of society some time to catch up with emerging technologies and ask whether we should be implementing technology A, B, or C. Just because we can doesn’t mean we should. A worn-out statement perhaps, but given the exponential pace of technological change, one that is highly relevant to our world today.

 

 



Addendum on 9/8/18:



 

 

Smart Machines & Human Expertise: Challenges for Higher Education — from er.educause.edu by Diana Oblinger

Excerpts:

What does this mean for higher education? One answer is that AI, robotics, and analytics become disciplines in themselves. They are emerging as majors, minors, areas of emphasis, certificate programs, and courses in many colleges and universities. But smart machines will catalyze even bigger changes in higher education. Consider the implications in three areas: data; the new division of labor; and ethics.

 

Colleges and universities are challenged to move beyond the use of technology to deliver education. Higher education leaders must consider how AI, big data, analytics, robotics, and wide-scale collaboration might change the substance of education.

 

Higher education leaders should ask questions such as the following:

  • What place does data have in our courses?
  • Do students have the appropriate mix of mathematics, statistics, and coding to understand how data is manipulated and how algorithms work?
  • Should students be required to become “data literate” (i.e., able to effectively use and critically evaluate data and its sources)?

Higher education leaders should ask questions such as the following:

  • How might problem-solving and discovery change with AI?
  • How do we optimize the division of labor and best allocate tasks between humans and machines?
  • What role do collaborative platforms and collective intelligence have in how we develop and deploy expertise?


Higher education leaders should ask questions such as the following:

  • Even though something is possible, does that mean it is morally responsible?
  • How do we achieve a balance between technological possibilities and policies that enable—or stifle—their use?
  • An algorithm may represent a “trade secret,” but it might also reinforce dangerous assumptions or result in unconscious bias. What kind of transparency should we strive for in the use of algorithms?

 

 

 

It’s time to address artificial intelligence’s ethical problems — from wired.co.uk by Abigail Beall
AI is already helping us diagnose cancer and understand climate change, but regulation and oversight are needed to stop the new technology being abused

Excerpt:

The potential for AI to do good is immense, says Taddeo. Technology using artificial intelligence will have the capability to tackle issues “from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality, and appalling living standards,” she says.

Yet AI is not without its problems. In order to ensure it can do good, we first have to understand the risks.

The potential problems that come with artificial intelligence include a lack of transparency about what goes into the algorithms. For example, an autonomous vehicle developed by researchers at the chip maker Nvidia went on the roads in 2016, without anyone knowing how it made its driving decisions.

 

 

Responsibility & AI: ‘We all have a role when it comes to shaping the future’ — from re-work.co by Fiona McEvoy

Excerpt:

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?

All the time we are creating new dependencies, and placing increasing amounts of faith in the engineers, programmers and designers responsible for these systems and platforms.

For reasons that are somewhat understandable, at present much of this tech ethics talk happens behind closed doors, and typically only engages a handful of industry and academic voices. Currently, these elite figures are the only participants in a dialogue that will determine all of our futures. At least in part, I started YouTheData.com because I wanted to bring “ivory tower” discussions down to the level of the engaged consumer, and be part of efforts to democratize this particular consultation process. As a former campaigner, I place a lot of value in public awareness and scrutiny.

To be clear, the message I wish to convey is not a criticism of the worthy academic and advisory work being done in this field (indeed, I have some small hand in this myself). It’s about acknowledging that engineers, technologists – and now ethicists, philosophers and others – still ultimately need public assent and a level of consumer “buy in” that is only really possible when complex ideas are made more accessible.

 

 

Digital Surgery’s AI platform guides surgical teams through complex procedures — from venturebeat.com by Kyle Wiggers

Excerpt:

Digital Surgery, a health tech startup based in London, today launched what it’s calling the world’s first dynamic artificial intelligence (AI) system designed for the operating room. The reference tool helps support surgical teams through complex medical procedures — cofounder and former plastic surgeon Jean Nehme described it as a “Google Maps” for surgery.

“What we’ve done is applied artificial intelligence … to procedures … created with surgeons globally,” he told VentureBeat in a phone interview. “We’re leveraging data with machine learning to build a [predictive] system.”

 

 

Why business leaders need to embrace artificial intelligence — from thriveglobal.com by Howard Yu
How companies should work with AI—not against it.

 

 

 

 


Schools can now get facial recognition tech for free. Should they? — from wired.com by Issie Lapowsky

Excerpt:

Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems. So far, one school in Seattle, which the children of RealNetworks CEO Rob Glaser attend, is testing the tool, and the state of Wyoming is designing a pilot program that could launch later this year. “We feel like we’re hitting something there can be a social consensus around: that using facial recognition technology to make schools safer is a good thing,” Glaser says.

 

From DSC:
Personally, I’m very uncomfortable with where facial recognition is going in some societies. What starts off being sold as helpful for this or that application can quickly be abused and used to control citizens. For example, look at what’s happening in China already these days!

The above article talks about these techs being used in schools. Based upon history, I seriously question whether humankind can wisely handle the power of these types of technologies.

Here in the United States, I already sense a ton of cameras watching each of us all the time when we’re out in public spaces (such as when we are in grocery stores, or gas stations, or in restaurants or malls, etc.).  What’s the unspoken message behind those cameras?  What’s being stated by their very presence around us?

No. I don’t like the idea of facial recognition being in schools. I’m not comfortable with this direction. I can see the counter argument — that this tech could help reduce school shootings. But I think that’s a weak argument, as someone mentally unbalanced enough to be involved with a school shooting likely won’t be swayed/deterred by being on camera. In fact, one could argue that in some cases, being on the national news — with their face being plastered all over the nation — might even pour gas on the fire.

 

 

Glaser, for one, welcomes federal oversight of this space. He says it’s precisely because of his views on privacy that he wants to be part of what is bound to be a long conversation about the ethical deployment of facial recognition. “This isn’t just sci-fi. This is becoming something we, as a society, have to talk about,” he says. “That means the people who care about these issues need to get involved, not just as hand-wringers but as people trying to provide solutions. If the only people who are providing facial recognition are people who don’t give a &*&% about privacy, that’s bad.”

 

 

 

Per this week’s Next e-newsletter from edsurge.com

Take the University of San Francisco, which deploys facial recognition software in its dormitories. Students still use their I.D. card to swipe in, according to EdScoop, but the face of every person who enters a dorm is scanned and run through a database, which alerts the dorm attendant when an unknown person is detected. Online students are not immune: the technology is also used in many proctoring tools for virtual classes.

The tech raises plenty of tough issues. Facial-recognition systems have been shown to misidentify young people, people of color and women more often than white men. And then there are the privacy risks: “All collected data is at risk of breach or misuse by external and internal actors, and there are many examples of misuse of law enforcement data in other contexts,” a white paper by the Electronic Frontier Foundation reads.

It’s unclear whether such facial-scanners will become common at the gates of campus. But now that cost is no longer much of an issue for what used to be an idea found only in science fiction, it’s time to weigh the pros and cons of what such a system really means in practice.
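
From DSC:
For those curious about the mechanics, the “scan a face, run it through a database, alert on unknowns” loop that the excerpt describes can be sketched in a few lines with the open-source face_recognition Python library. This is a toy illustration only; the image file names are placeholders I invented, and it is not the actual system any school or vendor uses.

    import face_recognition

    # Hypothetical database of enrolled residents (image files are placeholders).
    known_encodings = [
        face_recognition.face_encodings(
            face_recognition.load_image_file("resident_a.jpg"))[0],
        face_recognition.face_encodings(
            face_recognition.load_image_file("resident_b.jpg"))[0],
    ]

    # One frame captured by the door camera (also a placeholder file).
    frame = face_recognition.load_image_file("door_camera_frame.jpg")

    # Compare every face found in the frame against the enrolled residents.
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if not any(matches):
            print("ALERT: unknown person at the entrance")  # notify the attendant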

 

 

Also see:

  • As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation — from techcrunch.com by Jonathan Shieber
    Excerpt:
    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own. And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created. That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

 

 

 

 

Inside China’s Dystopian Dreams: A.I., Shame and Lots of Cameras — from nytimes.com by Paul Mozur

Excerpts:

ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.

In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.

In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.

With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.

 

In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.

Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.

 

 

A very slippery slope has now been set up in China with facial recognition infrastructures

 

From DSC:
A veeeeery slippery slope here. The use of this technology starts out with looking for criminals, but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution for people seen coming out of a certain place of worship?

Very troubling stuff here….

 

 

 

Welcome to Law2020: Artificial Intelligence and the Legal Profession — from abovethelaw.com by David Lat and Brian Dalton
What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world?

Excerpt:

Artificial intelligence has been declared “[t]he most important general-purpose technology of our era.” It should come as no surprise to learn that AI is transforming the legal profession, just as it is changing so many other fields of endeavor.

What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world? Will AI automate the work of attorneys — or will it instead augment, helping lawyers to work more efficiently, effectively, and ethically?

 

 

 

 

How artificial intelligence is transforming the world — from brookings.edu by Darrell M. West and John R. Allen

Summary

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents

I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

 

 

Seven Artificial Intelligence Advances Expected This Year  — from forbes.com

Excerpt:

Artificial intelligence (AI) has had a variety of targeted uses in the past several years, including self-driving cars. Recently, California changed the law that required driverless cars to have a safety driver. Now that AI is getting better and able to work more independently, what’s next?

 

 

Google Cofounder Sergey Brin Warns of AI’s Dark Side — from wired.com by Tom Simonite

Excerpt (emphasis DSC):

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

 

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.

 

 

 

 

Europe divided over robot ‘personhood’ — from politico.eu by Janosch Delcker

Excerpt:

BERLIN — Think lawsuits involving humans are tricky? Try taking an intelligent robot to court.

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

 

 

AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.
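
From DSC:
To make the above a bit more concrete, here is a minimal sketch of calling both services with boto3, the AWS SDK for Python. The job name, bucket, and file names are placeholders I made up for illustration; they are not from the article.

    import boto3

    # Kick off a transcription job on an audio file sitting in S3.
    # (The job name and S3 URI below are hypothetical.)
    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(
        TranscriptionJobName="example-job",
        Media={"MediaFileUri": "s3://my-bucket/interview.mp3"},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )

    # Translate a sentence from English to Spanish.
    translate = boto3.client("translate")
    result = translate.translate_text(
        Text="Machine learning is reshaping every industry.",
        SourceLanguageCode="en",
        TargetLanguageCode="es",
    )
    print(result["TranslatedText"])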

 

 

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto’s waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google’s parent company Alphabet, last October, and the project has received international attention for being one of the first smart-cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly so these days, our trust in them (and other large tech companies) is at stake.

 

 

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item is:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

 

 

 

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

 

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

 

 

 

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

 

 

 

 

 

SXSW 2018: Key trends — from jwtintelligence.com by Marie Stafford w/ contributions by Sarah Holbrook

Excerpt:

Ethics & the Big Tech Backlash
What a difference a week makes. As the Cambridge Analytica scandal broke last weekend, the curtain was already coming down on SXSW. Even without this latest bombshell, the discussion around ethics in technology was animated, with more than 10 panels devoted to the theme. From misinformation to surveillance, from algorithmic bias to the perils of artificial intelligence (hi Elon!), speakers grappled with the weighty issue of how to ensure technology works for the good of humanity.

The Human Connection
When technology provokes this much concern, it’s perhaps natural that people should seek respite in human qualities like empathy, understanding and emotional connection.

In a standout keynote, couples therapist Esther Perel gently berated the SXSW audience for neglecting to focus on human relationships. “The quality of your relationships,” she said, “is what determines the quality of your life.”

 

 

 

 

With great tech success, comes even greater responsibility — from techcrunch.com by Ron Miller

Excerpts:

As we watch major tech platforms evolve over time, it’s clear that companies like Facebook, Apple, Google and Amazon (among others) have created businesses that are having a huge impact on humanity — sometimes positive and other times not so much.

That suggests that these platforms have to understand how people are using them and when they are trying to manipulate them or use them for nefarious purposes — or the companies themselves are. We can apply that same responsibility filter to individual technologies like artificial intelligence and indeed any advanced technologies and the impact they could possibly have on society over time.

We can be sure that Twitter’s creators never imagined a world where bots would be launched to influence an election when they created the company more than a decade ago. Over time though, it becomes crystal clear that Twitter, and indeed all large platforms, can be used for a variety of motivations, and the platforms have to react when they think there are certain parties who are using their networks to manipulate parts of the populace.

 

 

But it’s up to the companies who are developing the tech to recognize the responsibility that comes with great economic success or simply the impact of whatever they are creating could have on society.

 

 

 

 

Why the Public Overlooks and Undervalues Tech’s Power — from morningconsult.com by Joanna Piacenza
Some experts say the tech industry is rapidly nearing a day of reckoning

Excerpts:

  • 5% picked tech when asked which industry had the most power and influence, well behind the U.S. government, Wall Street and Hollywood.
  • Respondents were much more likely to say sexual harassment was a major issue in Hollywood (49%) and government (35%) than in Silicon Valley (17%).

It is difficult for Americans to escape the technology industry’s influence in everyday life. Facebook Inc. reports that more than 184 million people in the United States log on to the social network daily, or roughly 56 percent of the population. According to the Pew Research Center, nearly three-quarters (73 percent) of all Americans and 94 percent of Americans ages 18-24 use YouTube. Amazon.com Inc.’s market value is now nearly three times that of Walmart Inc.

But when asked which geographic center holds the most power and influence in America, respondents in a recent Morning Consult survey ranked the tech industry in Silicon Valley far behind politics and government in Washington, finance on Wall Street and the entertainment industry in Hollywood.

 

 

 

 

Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

 

From DSC:
If a business has a choice between hiring a human being or having the job done by a piece of software and/or by a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whoever will be less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocation/callings — to have/find purpose in their lives.

 

Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

 

 

To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy

Excerpt:

The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”
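
From DSC:
For a sense of what “machine learning” means in a setting like this, here is a minimal sketch of a supervised risk model built with scikit-learn. The feature names and numbers are invented for illustration; the real CDI algorithm is, of course, far more sophisticated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy patient records: [age, days_in_hospital, recent_antibiotics (0 or 1)]
    X = np.array([[71, 12, 1],
                  [34,  2, 0],
                  [58,  9, 1],
                  [45,  3, 0]])
    y = np.array([1, 0, 1, 0])  # 1 = patient developed the infection

    # Fit the model on historical cases, then score a new patient.
    model = LogisticRegression().fit(X, y)
    new_patient = np.array([[66, 8, 1]])  # hypothetical new admission
    print(model.predict_proba(new_patient)[0, 1])  # estimated infection risk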

 

 

Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick

Excerpt:

Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major businesses and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.

 

“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”

 

So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.
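
From DSC:
The document discovery idea described above can be illustrated, in a very simplified form, with classic text-similarity ranking. The sketch below uses TF-IDF and cosine similarity from scikit-learn; the documents and query are made up, and commercial e-discovery tools use far more sophisticated models than this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Stand-ins for a corpus of case files and briefs.
    corpus = [
        "employment contract dispute over severance terms",
        "patent infringement damages in a software case",
        "wrongful termination and severance negotiation",
    ]
    query = ["severance dispute after termination"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(corpus)
    query_vector = vectorizer.transform(query)

    # Rank the corpus by similarity to the query, most relevant first.
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    for doc, score in sorted(zip(corpus, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {doc}")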

 

 

Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?

 

 

Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite

Excerpt:

What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.

 

 

How to Prepare College Graduates for an AI World — from wsj.com
Northeastern University President Joseph Aoun says schools need to change their focus, quickly

Excerpt:

WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.

 

 

Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt:

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

 

 

 

 
© 2024 | Daniel Christian