Robots and AI are going to make social inequality even worse, says new report — from theverge.com
Rich people are going to find it easier to adapt to automation

Excerpt:

Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what’s less often considered is how these changes could also impact social mobility. A new report from UK charity Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.

There are a number of reasons for this, say the report’s authors, including the ability of richer individuals to re-train for new jobs; the rising importance of “soft skills” like communication and confidence; and the reduction in the number of jobs used as “stepping stones” into professional industries.

For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.


Re-training for new jobs will also become a crucial skill, and it’s individuals from wealthier backgrounds who are better able to do so, says the report. This can already be seen in the disparity in post-graduate education, with individuals in the UK from working-class or poorer backgrounds far less likely to re-train after university.

From DSC:
I can’t emphasize this enough. There are dangerous, tumultuous times ahead if we can’t figure out ways to help ALL people within the workforce reinvent themselves quickly, cost-effectively, and conveniently. Re-skilling/up-skilling ourselves is becoming increasingly important. And I’m not just talking about highly-educated people. I’m talking about people whose jobs are going to be disappearing in the near future — especially people whose stepping stones into brighter futures are disappearing as well. They are going to wake up to a very different world. A very harsh world.

That’s why I’m so passionate about helping to develop a next generation learning platform. Higher education, as an industry, has some time left to figure out its part/contribution in this new world. But the window of time could be closing, as another window of opportunity / era could be opening up for “the next Amazon.com of higher education.”

It’s up to current, traditional institutions of higher education as to how much they want to be a part of the solution. Some of the questions each institution ought to be asking are:

  1. Given our institution’s mission/vision, what landscapes should we be pulse-checking?
  2. Do we have faculty/staff/members of administration looking at those landscapes that are highly applicable to our students and to their futures? How, specifically, are the insights from those employees fed into the strategic plans of our institution?
  3. What are some possible scenarios as a result of these changing landscapes? What would our response(s) be for each scenario?
  4. Are there obstacles that keep us from innovating and responding to the shifting landscapes, especially within the workforce?
  5. How do we remove those obstacles?
  6. On a scale of 0 (we don’t innovate at all) to 10 (highly innovative), where is our culture today? Where do we hope to be 5 years from now? How do we get there?

…and there are many other questions no doubt. But I don’t think we’re looking into the future nearly enough to see the massive needs — and real issues — ahead of us.

The report, which was carried out by the Boston Consulting Group and published this Wednesday [7/12/17], looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.

AI is making it extremely easy for students to cheat — from wired.com by Pippa Biddle

Excerpt (emphasis DSC):

For years, students have turned to CliffsNotes for speedy reads of books, SparkNotes to whip up talking points for class discussions, and Wikipedia to pad their papers with historical tidbits. But today’s students have smarter tools at their disposal—namely, Wolfram|Alpha, a program that uses artificial intelligence to perfectly and untraceably solve equations. Wolfram|Alpha uses natural language processing technology, part of the AI family, to provide students with an academic shortcut that is faster than a tutor, more reliable than copying off of friends, and much easier than figuring out a solution yourself.

Use of Wolfram|Alpha is difficult to trace, and in the hands of ambitious students, its perfect solutions are having unexpected consequences.
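The article doesn’t show what using Wolfram|Alpha from code actually looks like, but the service does expose public HTTP APIs. Here is a minimal sketch using its Short Answers endpoint; note that `YOUR_APP_ID` is a placeholder (a real AppID requires registering with Wolfram), and the exact wording of the returned answer varies:

```python
# A minimal sketch of querying Wolfram|Alpha programmatically via its
# public Short Answers API. YOUR_APP_ID is a placeholder, not a real key.
import requests

APP_ID = "YOUR_APP_ID"  # placeholder; obtain one from developer.wolframalpha.com

def solve(query: str) -> str:
    """Send a natural-language query and return the plain-text answer."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": APP_ID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(solve("solve 3x + 4 = 10"))  # e.g. "x = 2"; exact wording varies
```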

The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.

AI will make forging anything entirely too easy — from wired.com by Greg Allen

Excerpt:

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

Web host agrees to pay $1m after it’s hit by Linux-targeting ransomware — from arstechnica.com by Dan Goodin
Windfall payment by poorly secured host is likely to inspire new ransomware attacks.

Excerpt (emphasis above and below by DSC):

A Web-hosting service recently agreed to pay $1 million to a ransomware operation that encrypted data stored on 153 Linux servers and 3,400 customer websites, the company said recently.

The South Korean Web host, Nayana, said in a blog post published last week that initial ransom demands were for five billion won worth of Bitcoin, which is roughly $4.4 million. Company negotiators later managed to get the fee lowered to 1.8 billion won and ultimately landed a further reduction to 1.2 billion won, or just over $1 million. An update posted Saturday said Nayana engineers were in the process of recovering the data. The post cautioned that the recovery was difficult and would take time.
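For anyone who wants to check the figures, the won-to-dollar rate can be backed out of the article’s own numbers (it isn’t quoted directly, so the rate below is derived, not sourced):

```python
# Back-of-the-envelope check of the ransom figures in the excerpt.
# The exchange rate is an assumption derived from the article's own
# numbers (5 billion won ≈ $4.4 million).
initial_krw = 5_000_000_000              # initial demand: 5 billion won
initial_usd = 4_400_000                  # ≈ $4.4 million, per the article
krw_per_usd = initial_krw / initial_usd  # ≈ 1,136 won per dollar

final_krw = 1_200_000_000                # negotiated down to 1.2 billion won
final_usd = final_krw / krw_per_usd
print(f"Final payment ≈ ${final_usd:,.0f}")  # ≈ $1,056,000: "just over $1 million"
```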

From DSC:
This type of technology could be good, or it could be bad…or, like many technologies, it could be both — depends upon how it’s used. The resources below mention some positive applications, but also some troubling applications.

Lyrebird claims it can recreate any voice using just one minute of sample audio — from theverge.com by James Vincent
The results aren’t 100 percent convincing, but it’s a sign of things to come

Excerpt:

Artificial intelligence is making human speech as malleable and replicable as pixels. Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

Also see:

Imitating people’s speech patterns precisely could bring trouble — from economist.com
You took the words right out of my mouth

Excerpt:

UTTER 160 or so French or English phrases into a phone app developed by CandyVoice, a new Parisian company, and the app’s software will reassemble tiny slices of those sounds to enunciate, in a plausible simulacrum of your own dulcet tones, whatever typed words it is subsequently fed. In effect, the app has cloned your voice. The result still sounds a little synthetic but CandyVoice’s boss, Jean-Luc Crébouw, reckons advances in the firm’s algorithms will render it increasingly natural. Similar software for English and four widely spoken Indian languages, developed under the name of Festvox, by Carnegie Mellon University’s Language Technologies Institute, is also available. And Baidu, a Chinese internet giant, says it has software that needs only 50 sentences to simulate a person’s voice.

Until recently, voice cloning—or voice banking, as it was then known—was a bespoke industry which served those at risk of losing the power of speech to cancer or surgery.

More troubling, any voice—including that of a stranger—can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama, Birmingham, led by Nitesh Saxena, were able to use Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer.
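To make that biometrics result concrete: many voice-authentication systems reduce a speech sample to a fixed-length “voiceprint” embedding and accept a caller when its similarity to the enrolled embedding clears a threshold. The sketch below is purely conceptual — random vectors stand in for real embeddings and the threshold is assumed; it is not the Alabama team’s actual pipeline — but it shows why a close-enough clone gets accepted:

```python
# Conceptual sketch of voiceprint matching: a clone whose embedding lands
# close enough to the enrolled voiceprint clears the threshold and is
# accepted, while an unrelated speaker is rejected.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                     # stand-in for a real voiceprint
clone = enrolled + rng.normal(scale=0.3, size=256)  # a close synthetic imitation
stranger = rng.normal(size=256)                     # an unrelated speaker

THRESHOLD = 0.7  # assumed acceptance threshold; real systems vary
for name, probe in [("clone", clone), ("stranger", stranger)]:
    score = cosine_similarity(enrolled, probe)
    verdict = "ACCEPT" if score >= THRESHOLD else "REJECT"
    print(f"{name}: similarity = {score:.2f} -> {verdict}")
```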

Per Candyvoice.com:

Expert in digital voice processing, CandyVoice offers software to facilitate and improve vocal communication between people and communicating objects. With applications in:

Health
Customize augmentative and alternative communication devices by integrating your users’ personal vocal models into them

Robots & Communicating objects
Improve communication with robots through voice conversion, customized TTS, and noise filtering

Video games
Enhance the gaming experience by integrating real-time vocal conversion of characters’ voices and customized TTS
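CandyVoice’s own programming interface isn’t described here, so as a generic stand-in, the sketch below uses the open-source pyttsx3 library to show what programmatic, “customized” TTS looks like in practice: pick among installed voices, set the speaking rate, and synthesize speech offline.

```python
# Generic offline TTS sketch using pyttsx3 (a stand-in, not CandyVoice's API).
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)        # speaking rate, in words per minute
voices = engine.getProperty("voices")  # voices installed on this system
if voices:
    engine.setProperty("voice", voices[0].id)  # "customize" by choosing a voice
engine.say("Hello, this is a synthesized voice.")
engine.runAndWait()
```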

From DSC:
Given this type of technology, what’s to keep someone from cloning a voice, assembling whatever they want that person to say, and then making it appear that Alexa recorded that person saying it?

Making sure the machines don’t take over — from raconteur.net by Mark Frary
Preparing economic players for the impact of artificial intelligence is a work in progress which requires careful handling

From DSC:
This short article presents a balanced approach, as it relays both the advantages and disadvantages of AI in our world.

Perhaps one of higher education’s new tasks will be to determine which jobs are likely to survive the next 5-10+ years and to help people get up to speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple areas. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.

Also see:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.
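One concrete, model-agnostic way to chip away at the “black box” problem the article describes is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Large drops flag the features the model actually relies on. Here is a minimal sketch on synthetic data (an illustration of the technique, not the method of any system named in the article):

```python
# Permutation importance: a simple, model-agnostic interpretability probe.
# Shuffling a feature destroys its information; the resulting accuracy drop
# estimates how much the model depends on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = X_perm[rng.permutation(len(X_perm)), j]  # shuffle feature j only
    print(f"feature {j}: importance ≈ {baseline - model.score(X_perm, y):.3f}")
```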

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What might the potential ramifications be for higher education regarding these changes that are just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, and deep learning, etc.), automation, & robotics?
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How should we, as a nation, prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment?

As found in the April 9th, 2017 edition of our local newspaper:

When even our local newspaper is picking up on this trend, you know it is real and significant.

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece: NPR is having a journalist travel across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should be planned for, and initiated,
in the way(s) that we accredit new programs?

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant/up-to-date streams of content and learning experiences. This could happen via the addition/use of smaller learning hubs, some even makeshift learning hubs that are taking place at locations that these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

Addendum from the corporate world (emphasis DSC):

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?