From DSC:
This type of technology could be good, or it could be bad…or, like many technologies, it could be both — depends upon how it’s used. The resources below mention some positive applications, but also some troubling applications.

Lyrebird claims it can recreate any voice using just one minute of sample audio — from theverge.com by James Vincent
The results aren’t 100 percent convincing, but it’s a sign of things to come

Excerpt:

Artificial intelligence is making human speech as malleable and replicable as pixels. Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

Also see:

Imitating people’s speech patterns precisely could bring trouble — from economist.com
You took the words right out of my mouth

Excerpt:

UTTER 160 or so French or English phrases into a phone app developed by CandyVoice, a new Parisian company, and the app’s software will reassemble tiny slices of those sounds to enunciate, in a plausible simulacrum of your own dulcet tones, whatever typed words it is subsequently fed. In effect, the app has cloned your voice. The result still sounds a little synthetic but CandyVoice’s boss, Jean-Luc Crébouw, reckons advances in the firm’s algorithms will render it increasingly natural. Similar software for English and four widely spoken Indian languages, developed under the name of Festvox, by Carnegie Mellon University’s Language Technologies Institute, is also available. And Baidu, a Chinese internet giant, says it has software that needs only 50 sentences to simulate a person’s voice.

Until recently, voice cloning—or voice banking, as it was then known—was a bespoke industry which served those at risk of losing the power of speech to cancer or surgery.

More troubling, any voice—including that of a stranger—can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama at Birmingham, led by Nitesh Saxena, were able to use Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer.
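To make concrete the kind of check the Alabama researchers were defeating, here is a minimal, purely illustrative sketch of embedding-based speaker verification, the general approach behind modern voice biometrics. The vectors, threshold, and function names are all invented for illustration; real systems derive high-dimensional "voiceprint" embeddings from audio rather than using tiny hand-written lists.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, candidate, threshold=0.75):
    """Accept the candidate if its voiceprint lands close enough to the
    enrolled speaker's voiceprint. The threshold is an illustrative value."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy 3-dimensional vectors standing in for real voice embeddings:
enrolled = [0.90, 0.10, 0.40]   # the legitimate account holder
genuine  = [0.85, 0.15, 0.42]   # same speaker, a different utterance
cloned   = [0.88, 0.12, 0.41]   # a synthetic clone trained on found audio

print(verify_speaker(enrolled, genuine))  # True
print(verify_speaker(enrolled, cloned))   # True: the clone also passes
```

The weakness the study exposed is visible even in this toy: a cloned voice does not have to be a perfect copy, it only has to land inside the acceptance threshold.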

Per Candyvoice.com:

An expert in digital voice processing, CandyVoice offers software to facilitate and improve vocal communication between people and communicating objects, with applications in:

Health
Customize augmentative and alternative communication devices by integrating each user’s personal voice model into them

Robots & Communicating objects
Improve communication with robots through voice conversion, customized TTS, and noise filtering

Video games
Enhance the gaming experience by integrating real-time voice conversion for characters’ voices and customized TTS

Also related:

From DSC:
Given this type of technology, what’s to keep someone from cloning a voice, assembling whatever they want that person to say, and then making it appear that Alexa recorded that person saying it?

Making sure the machines don’t take over — from raconteur.net by Mark Frary
Preparing economic players for the impact of artificial intelligence is a work in progress which requires careful handling

From DSC:
This short article presents a balanced approach, as it relays both the advantages and disadvantages of AI in our world.

Perhaps it will be one of higher education’s new tasks — to identify which jobs are likely to survive the next 5-10+ years and to help students get up to speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple areas. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.

Also see:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
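One family of techniques researchers use to make such systems more "understandable to their creators" is model-agnostic probing: treat the model as a sealed function and measure which inputs its decisions actually depend on. The sketch below shows the general idea of permutation importance; the model, data, and figures are all invented for illustration and are not from the article.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, metric, trials=10):
    """Shuffle one input feature and measure how much the metric drops.
    A large drop means the black-box model leans heavily on that feature."""
    baseline = metric(model, X, y)
    drops = []
    for _ in range(trials):
        X_shuffled = [row[:] for row in X]            # copy every row
        column = [row[feature_idx] for row in X_shuffled]
        random.shuffle(column)                         # permute just this feature
        for row, value in zip(X_shuffled, column):
            row[feature_idx] = value
        drops.append(baseline - metric(model, X_shuffled, y))
    return sum(drops) / len(drops)

# Toy "black box": predicts 1 whenever feature 0 is positive, ignores feature 1.
model = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [-1, 5], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, 0, accuracy))  # sizable drop
print(permutation_importance(model, X, y, 1, accuracy))  # 0.0: feature unused
```

Probes like this do not open the box, but they give creators and users at least a coarse account of what the model is relying on.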

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What might the potential ramifications for higher education be of these changes that are just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, deep learning, etc.), automation, and robotics?
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How should we, as a nation, prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment?

As found in the April 9th, 2017 edition of our local newspaper here:

When even our local newspaper is picking up on this trend, you know it is real and has some significance to it.

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR: a journalist is traveling across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should be planned for and begin to be initiated
in the way(s) that we accredit new programs?

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant, up-to-date streams of content and learning experiences. This could happen via smaller learning hubs — some of them makeshift hubs at locations these institutions don’t even own… like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

Addendum from the corporate world (emphasis DSC):

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?

Infected Vending Machines And Light Bulbs DDoS A University — from forbes.com by Lee Mathews; with a shout out to eduwire for this resource

Excerpt:

IoT devices have become a favorite weapon of cybercriminals. Their generally substandard security — and the sheer numbers of connected devices — make them an enticing target. We’ve seen what a massive IoT botnet is capable of doing, but even a relatively small one can cause a significant amount of trouble.

A few thousand infected IoT devices can cut a university off from the Internet, according to an incident that the Verizon RISK (Research, Investigations, Solutions and Knowledge) team was asked to assist with. All the attacker had to do was re-program the devices so they would periodically try to connect to seafood-related websites.

How can that simple act grind Internet access to a halt across an entire university network? By training around 5,000 devices to send DNS queries simultaneously…
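The arithmetic behind that "simple act" is worth spelling out. The device count below comes from the article; the per-device query rate and resolver capacity are invented, illustrative figures, not reported numbers.

```python
# Rough arithmetic: why a few thousand chatty IoT devices can swamp a campus
# DNS resolver. Only the device count is from the incident; the rates are
# illustrative assumptions.
devices = 5_000
queries_per_device_per_sec = 20    # assumed lookup rate per infected device
resolver_capacity_qps = 30_000     # assumed resolver throughput (queries/sec)

attack_qps = devices * queries_per_device_per_sec
overload = attack_qps / resolver_capacity_qps

print(f"{attack_qps:,} queries/sec against {resolver_capacity_qps:,} capacity")
print(f"-> {overload:.1f}x overload: legitimate lookups queue up and time out")
```

Because nearly everything on a network starts with a DNS lookup, saturating the resolver effectively cuts the whole campus off, even though the raw bandwidth involved is tiny.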

Hackers Use New Tactic at Austrian Hotel: Locking the Doors — from nytimes.com by Dan Bilefsky

Excerpt:

The ransom demand arrived one recent morning by email, after about a dozen guests were locked out of their rooms at the lakeside Alpine hotel in Austria.

The electronic key system at the picturesque Romantik Seehotel Jaegerwirt had been infiltrated, and the hotel was locked out of its own computer system, leaving guests stranded in the lobby, causing confusion and panic.

“Good morning?” the email began, according to the hotel’s managing director, Christoph Brandstaetter. It went on to demand a ransom of two Bitcoins, or about $1,800, and warned that the cost would double if the hotel did not comply with the demand by the end of the day, Jan. 22.

Mr. Brandstaetter said the email included details of a “Bitcoin wallet” — the account in which to deposit the money — and ended with the words, “Have a nice day!”

“Ransomware is becoming a pandemic,” said Tony Neate, a former British police officer who investigated cybercrime for 15 years. “With the internet, anything can be switched on and off, from computers to cameras to baby monitors.”

To guard against future attacks, however, he said the Romantik Seehotel Jaegerwirt was considering replacing its electronic keys with old-fashioned door locks and real keys of the type used when his great-grandfather founded the hotel. “The securest way not to get hacked,” he said, “is to be offline and to use keys.”

Regulation of the Internet of Things — from schneier.com by Bruce Schneier

Excerpt (emphasis DSC):

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam —­ or thermostat, or refrigerator ­— with nice features at a good price. Even after they were recruited into this botnet, they still work fine ­— you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

From DSC:
We have to do something about these security-related issues — now! If not, you can kiss the Internet of Things goodbye — or at least I sure hope so. Don’t get me wrong — I’d like to see the Internet of Things come to fruition in many areas. However, if governments and law enforcement agencies aren’t going to get involved to fix the problems, I don’t want to see the Internet of Things take off. The consequences of not getting this right are too huge — with costly ramifications. As Bruce mentions in his article, it will likely take government regulation before this type of issue goes away.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Bruce Schneier

Addendum on 2/15/17:

I was glad to learn of the following news today:

  • NXP Unveils Secure Platform Solution for the IoT — from finance.yahoo.com
    Excerpt:
    SAN FRANCISCO, Feb. 13, 2017 (GLOBE NEWSWIRE) — RSA Conference 2017 – Electronic security and trust are key concerns in the digital era, which are magnified as everything becomes connected in the Internet of Things (IoT). NXP Semiconductors N.V. (NXPI) today disclosed details of a secure platform for building trusted connected products. The QorIQ Layerscape Secure Platform, built on the NXP trust architecture technology, enables developers of IoT equipment to easily build secure and trusted systems. The platform provides a complete set of hardware, software and process capabilities to embed security and trust into every aspect of a product’s life cycle.

    Recent security breaches show that even mundane devices like web-cameras or set-top boxes can be used to both attack the Internet infrastructure and/or spy on their owners. IoT solutions cannot be secured against such misuse unless they are built on technology that addresses all aspects of a secure and trusted product lifecycle. In offering the Layerscape Secure Platform, NXP leverages decades of experience supplying secure embedded systems for military, aerospace, and industrial markets.

Code-Dependent: Pros and Cons of the Algorithm Age — from pewinternet.org by Lee Rainie and Janna Anderson
Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms. Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.
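The excerpt's point, that an algorithm is simply an explicit set of instructions for completing a task, can be made concrete with a textbook example: binary search, sketched here in Python.

```python
def binary_search(sorted_items, target):
    """A classic algorithm: explicit instructions that find a value in a
    sorted list by repeatedly halving the range still worth searching."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid              # found it: report the position
        elif sorted_items[mid] < target:
            lo = mid + 1            # target must be in the upper half
        else:
            hi = mid - 1            # target must be in the lower half
    return -1                       # not present

print(binary_search([2, 5, 8, 12, 16, 23], 12))  # 3
```

Every system the excerpt lists, from GPS routing to spreadsheet sorting, is ultimately built from recipes of this kind, just vastly larger and composed together.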

Algorithms are often elegant and incredibly useful tools used to accomplish tasks. They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences. Recent news items tie to these concerns…

The use of algorithms is spreading as massive amounts of data are being created, captured and analyzed by businesses and governments. Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace.

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

From DSC:
Though I’m generally pro-technology, there are several items in here that support the need for all members of society to be informed and to have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

Below are some example screenshots:

Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

From DSC:
First, some items regarding the enormous emphasis being put towards the use of robotics and automation:

  • $18.867 billion paid to acquire 50 robotics companies in 2016 — from robohub.org by Frank Tobe
    Excerpt:
    2016 was a banner year for acquisitions of companies involved in robotics and automation: 50 sold; 11 for amounts over $500 million; five were over a billion. 30 of the 50 companies disclosed transaction amounts which totaled up to a colossal $18.867 billion!
  • 2017: The year people are forced to learn new skills… or join the Lost Generation — from enterpriseirregulars.com by Phil Fersht
    Excerpt (emphasis DSC):
    Let’s cut to the chase – there have never been times as uncertain as these in the world of business. There is no written rule-book to follow when it comes to career survival. The “Future of Work” is about making ourselves employable in a workforce where the priority of business leaders is to invest in automation and digital technology, more than training and developing their own workforces. As our soon-to-be-released State of Operations and Outsourcing 2017 study, conducted in conjunction with KPMG across 454 major enterprise buyers globally, shows a dramatic shift in priorities from senior managers (SVPs and above), where 43% are earmarking significant investment in robotic automation of processes, compared with only 28% placing a similar emphasis on training and change management. In fact, the same number of senior managers are as focused on cognitive computing as their own people… yes, folks, this is the singularity of enterprise operations, where cognitive computing now equals employees’ brains when it comes to investment!

    My deep-seated fear for today’s workforce is that we’re in danger of becoming this “Lost Generation” of workers if we persist in relying on what we already know, versus avoiding learning new skills that business leaders now need. We have to become students again, put our egos aside, and broaden our capabilities to avoid the quicksand of legacy executives no longer worth employing.

Below are some other resources along these lines:

From DSC:
If these trends continue (i.e., outsourcing work to software and to robots), what will the ramifications be for:

  • Society at large? Will enough people have enough income to purchase the products/services made by the robots and the software?
  • Will there be major civil unrest / instability? Will crime rates shoot through the roof as people’s desperation and frustration escalate?
  • How should we change our curricula within K-12?
  • How should we change our curricula within higher education?
  • How should corporate training & development departments/groups respond to these trends?
  • Is there some new criteria that we need to use (or increase the usage of) in selecting C-level executives?

People don’t want to hear about it. But if the only thing that the C-level suites out there care about is maximizing profits and minimizing costs — REGARDLESS of what happens to humankind — then we are likely going to be creating a very dangerous future. Capitalism will have gone awry. (By the way, the C-level suite is probably making their decisions based upon how their performance is judged by Wall Street and by shareholders. So I can’t really put all the blame on them. Perhaps the enemy is ourselves…?) 

Bottom line: We need to be careful which technologies we implement — and how they are implemented. We need to create a dream in our futures, not a nightmare. We need people at the helms who care about their fellow humankind, and who use the power of these technologies responsibly.

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

© 2016 Learning Ecosystems