Robots and AI are going to make social inequality even worse, says new report — from theverge.com
Rich people are going to find it easier to adapt to automation

Excerpt:

Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what’s less often considered is how these changes could also impact social mobility. A new report from UK charity Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.

There are a number of reasons for this, say the report’s authors, including the ability of richer individuals to re-train for new jobs; the rising importance of “soft skills” like communication and confidence; and the reduction in the number of jobs used as “stepping stones” into professional industries.

For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.

 

Re-training for new jobs will also become a crucial skill, and it’s individuals from wealthier backgrounds who are better able to do so, says the report. This can already be seen in the disparity in post-graduate education, with individuals in the UK from working-class or poorer backgrounds far less likely to re-train after university.

 

 

From DSC:
I can’t emphasize this enough. There are dangerous, tumultuous times ahead if we can’t figure out ways to help ALL people within the workforce reinvent themselves quickly, cost-effectively, and conveniently. Re-skilling/up-skilling ourselves is becoming increasingly important. And I’m not just talking about highly-educated people. I’m talking about people whose jobs are going to be disappearing in the near future — especially people whose stepping stones into brighter futures are vanishing. They are going to wake up to a very different world. A very harsh world.

That’s why I’m so passionate about helping to develop a next generation learning platform. Higher education, as an industry, has some time left to figure out its part/contribution in this new world. But that window of time could be closing, as another window of opportunity, and perhaps a new era, could be opening up for “the next Amazon.com of higher education.”

It’s up to current, traditional institutions of higher education as to how much they want to be a part of the solution. Some of the questions each institution ought to be asking are:

  1. Given our institution’s mission/vision, what landscapes should we be pulse-checking?
  2. Do we have faculty/staff/members of administration looking at those landscapes that are highly applicable to our students and to their futures? How, specifically, are the insights from those employees fed into the strategic plans of our institution?
  3. What are some possible scenarios as a result of these changing landscapes? What would our response(s) be for each scenario?
  4. Are there obstacles keeping us from innovating and being able to respond to the shifting landscapes, especially within the workforce?
  5. How do we remove those obstacles?
  6. On a scale of 0 (we don’t innovate at all) to 10 (highly innovative), where is our culture today? Where do we hope to be 5 years from now? How do we get there?

…and there are many other questions no doubt. But I don’t think we’re looking into the future nearly enough to see the massive needs — and real issues — ahead of us.

 

 

The report, which was carried out by the Boston Consulting Group and published this Wednesday [7/12/17], looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.

Career Pathways: Five Ways to Connect College and Careers calls for states to help students, their families, and employers unpack the meaning of postsecondary credentials and assess their value in the labor market.

Excerpt:

If students are investing more to go to college, they need to have answers to basic questions about the value of postsecondary education. They need better information to make decisions that have lifelong economic consequences.

Getting a college education is one of the biggest investments people will make in their lives, but the growing complexity of today’s economy makes it difficult for higher education to deliver efficiency and consistent quality. Today’s economy is more intricate than those of decades past.

 

From this press release:

It’s Time to Fix Higher Education’s Tower of Babel, Says Georgetown University Report
The lack of transparency around college and careers leads to costly, uninformed decisions

(Washington, D.C., July 11, 2017) — A new report from the Georgetown University Center on Education and the Workforce (Georgetown Center), Career Pathways: Five Ways to Connect College and Careers, calls for states to help students, their families, and employers unpack the meaning of postsecondary credentials and assess their value in the labor market.

Back when a high school-educated worker could find a good job with decent wages, the question was simply whether or not to go to college. That is no longer the case in today’s economy, which requires at least some college to enter the middle class. The study finds that:

  • The number of postsecondary programs of study more than quintupled between 1985 and 2010 — from 410 to 2,260;
  • The number of colleges and universities more than doubled from 1,850 to 4,720 between 1950 and 2014; and
  • The number of occupations grew from 270 in 1950 to 840 in 2010.

The variety of postsecondary credentials, providers, and online delivery mechanisms has also multiplied rapidly in recent years, underscoring the need for common, measurable outcomes.

College graduates are also showing buyer’s remorse. While they are generally happy with their decision to attend college, more than half would choose a different major, go to a different college, or pursue a different postsecondary credential if they had a chance.

The Georgetown study points out that the lack of information drives the higher education market toward mediocrity. The report argues that postsecondary education and training needs to be more closely aligned to careers to better equip learners and workers with the skills they need to succeed in the 21st century economy and close the skills gap.

The stakes couldn’t be higher for students to make the right decisions. Since 1980, tuition and fees at public four-year colleges and universities have grown 19 times faster than family incomes. Students and families want — and need — to know the value they are getting for their investment.

 

 



Also see:

  • Trumping toward college transparency — from linkedin.com by Anthony Carnevale
    The perfect storm is gathering around the need to increase transparency around college and careers. And in accordance with how public policy generally comes about, it might just happen. 

Chatbot lawyer, which contested £7.2M in parking tickets, now offers legal help for 1,000+ topics — from arstechnica.co.uk by Sebastian Anthony
DoNotPay has expanded to cover the UK and all 50 US states. Free legal help for everyone!

Excerpt:

In total, DoNotPay now has over 1,000 separate chatbots that generate formal-sounding documents for a range of basic legal issues, such as seeking remuneration for a delayed flight or train, reporting discrimination, or asking for maternity leave. If you divide that by 51 jurisdictions (the 50 US states plus the UK), you get a rough idea of how many different topics are covered. Each bot had to be hand-crafted by its British creator, Joshua Browder, with the assistance of part-time and volunteer lawyers to ensure that the documents are actually fit for purpose.

 

 

British student’s free robot lawyer can fight speeding tickets and rogue landlords — from telegraph.co.uk by Cara McGoogan

Excerpt:

A free “robot lawyer” that has overturned thousands of parking tickets in the UK can now fight rogue landlords, speeding tickets and harassment at work.

Joshua Browder, the 20-year-old British student who created the aide, has upgraded the robot’s abilities so it can fight legal disputes in 1,000 different areas. These include fighting landlords over security deposits and house repairs, and helping people report fraud to their credit card agency.

To get robot advice, users type their problem into the DoNotPay site and it directs them to a chat bot that can solve their particular legal issue. It can draft letters and offer advice on problems from credit card fraud to airline compensation.
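To make that pattern concrete, here is a minimal, purely illustrative sketch of how a service might route a typed problem description to one of many narrow, hand-crafted topic bots, each of which fills in a formal letter template. This is not DoNotPay’s actual code; the topics, keywords, and templates below are invented for illustration.

```python
# Hypothetical sketch: route a user's typed problem to a topic-specific "bot"
# (here just a letter template). Not DoNotPay's implementation.

TOPIC_KEYWORDS = {
    "parking_ticket": {"parking", "ticket", "fine"},
    "flight_delay": {"flight", "delayed", "airline", "compensation"},
    "security_deposit": {"landlord", "deposit", "rent"},
}

LETTER_TEMPLATES = {
    "parking_ticket": "Dear Sir or Madam,\nI wish to contest parking ticket {ref} issued on {date}.",
    "flight_delay": "Dear {company},\nI am claiming compensation for flight {ref}, delayed on {date}.",
    "security_deposit": "Dear {company},\nI request the return of my security deposit (tenancy ref {ref}).",
}

def route_issue(problem):
    """Pick the topic whose keyword set best overlaps the user's description."""
    words = set(problem.lower().replace(",", " ").split())
    scores = {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def draft_letter(problem, ref="____", date="____", company="____"):
    topic = route_issue(problem)
    if topic is None:
        return "Sorry, no bot covers that issue yet."
    return LETTER_TEMPLATES[topic].format(ref=ref, date=date, company=company)

print(draft_letter("My flight to Chicago was delayed five hours",
                   ref="AA123", date="2017-07-01", company="Acme Air"))
```

The real service presumably does far more (jurisdiction-specific wording, follow-up questions, lawyer review), but the basic shape of “match the problem, then fill a vetted template” is what the excerpt describes.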

 

 

Free robot lawyer helps low-income people tackle more than 1,000 legal issues — from mashable.com by Katie Dupere

Excerpt:

Shady businesses, you’re on notice. This robot lawyer is coming after you if you play dirty.

Noted legal aid chatbot DoNotPay just announced a massive expansion, which will help users tackle issues in 1,000 legal areas entirely for free. The new features, which launched on Wednesday, cover consumer and workplace rights, and will be available in all 50 states and the UK.

While the bot will still help drivers contest parking tickets and refugees apply for asylum, the service will now also help those who want to report harassment in the workplace or who simply want a refund on a busted toaster.

 

 



From DSC:
Whereas this type of bot is meant for external communications/assistance, we should also watch for Work Bots within an organization — dishing up real-time answers to questions that employees have about a variety of topics. I think that’s where the next generation of technical communications, technical/help desk support, and training and development is headed (at least some of the staff in those departments will likely be building these types of bots).
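A minimal sketch of what such a Work Bot could look like at its simplest: an employee question matched against a curated internal FAQ. Everything here (the knowledge-base entries, the fuzzy-matching approach, the wording) is an assumption for illustration, not a description of any particular product.

```python
# Hypothetical internal "work bot": answers employee questions from a small
# knowledge base curated by the help desk / training team.
import difflib

KNOWLEDGE_BASE = {
    "how do i reset my password": "Go to the IT self-service portal and choose 'Reset password'.",
    "how do i submit an expense report": "Use the Finance app under Employee Self-Service.",
    "who do i contact about benefits": "Email the HR benefits team or call the HR help line.",
}

def answer(question):
    # Fuzzy-match the question against known FAQ entries.
    matches = difflib.get_close_matches(question.lower().rstrip("?"),
                                        KNOWLEDGE_BASE.keys(), n=1, cutoff=0.5)
    if matches:
        return KNOWLEDGE_BASE[matches[0]]
    return "I don't know that one yet; I've routed your question to the help desk."

print(answer("How can I reset my password?"))
```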



 

Addendum on 7/15/17:

LawGeex: Contract Review Automation

Excerpt (emphasis DSC):

The LawGeex Contract Review Automation enables anyone in your business to easily submit and receive approvals on contracts without waiting for the legal team. Our A.I. technology reads, reviews and understands your contracts, approving those that meet your legal team’s pre-defined criteria, and escalating those that don’t. Legal can maintain control and mitigate risk while giving other departments the freedom they need to get business moving.
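The approve-or-escalate workflow described above can be sketched in a very reduced form. This is not LawGeex’s actual technology (the excerpt describes it as A.I. that reads and understands contracts); the toy version below only illustrates the idea of checking a contract against a legal team’s pre-defined criteria and escalating anything that fails. The criteria names and sample text are invented.

```python
# Hypothetical approve-or-escalate contract triage using simple text rules.
import re

CRITERIA = {
    "has_governing_law_clause": lambda text: "governing law" in text.lower(),
    "liability_cap_present": lambda text: "limitation of liability" in text.lower(),
    "no_unlimited_indemnity": lambda text: not re.search(r"unlimited indemnif", text, re.I),
}

def review(contract_text):
    """Return an auto-approval decision plus any criteria that failed."""
    failures = [name for name, check in CRITERIA.items() if not check(contract_text)]
    return {
        "decision": "approved" if not failures else "escalate_to_legal",
        "failed_criteria": failures,
    }

sample = "This Agreement ... Governing Law: New York ... Limitation of Liability: fees paid ..."
print(review(sample))  # {'decision': 'approved', 'failed_criteria': []}
```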

 

 

From DSC:
With the ever-increasing use of artificial intelligence, algorithms, robotics, and automation, people are going to need to reinvent themselves quickly, cost-effectively, and conveniently. As such, we had better begin working immediately on a next generation learning platform — before the other tidal waves start hitting the beach. “What do you mean by saying ‘other tidal waves’ — what tidal waves are you talking about anyway?” one might ask.

Well….here’s one for you:


 

 

New Report Predicts Over 100,000 Legal Jobs Will Be Lost To Automation — from futurism.com by Jelor Gallego
An extensive new analysis by Deloitte estimates that over 100,000 legal jobs will be lost to technological automation within the next two decades. Increasing technological advances have helped replace menial office roles and automate repetitive tasks.

 


From DSC:
I realize that not all of this is doom and gloom. There will be jobs lost and there will be jobs gained. A point also made by MIT futurists Andrew McAfee and Erik Brynjolfsson in a recent podcast entitled “Want to stay relevant? Then listen up,” in which they explain the momentous technological changes coming next and what you can do to harness them.

But the point is that massive reinvention is going to be necessary. Traditional institutions of higher education — as well as the current methods of accreditation — are woefully inadequate to address the new, exponential pace of change.

Here’s my take on what it’s going to take to deliver constantly up-to-date streams of relevant content at an incredibly affordable price.

Realizing the Potential of Blockchain: A Multistakeholder Approach to the Stewardship of Blockchain and Cryptocurrencies — from the World Economic Forum

Excerpts:

Like the first generation of the internet, this second generation promises to disrupt business models and transform industries. Blockchain (also called distributed ledger), the technology enabling cryptocurrencies like bitcoin and Ethereum, is pulling us into a new era of openness, decentralization and global inclusion. It leverages the resources of a global peer-to-peer network to ensure the integrity of the value exchanged among billions of devices without going through a trusted third party. Unlike the internet alone, blockchains are distributed, not centralized; open, not hidden; inclusive, not exclusive; immutable, not alterable; and secure. Blockchain gives us unprecedented capabilities to create and trade value in society. As the foundational platform of the Fourth Industrial Revolution, it enables such innovations as artificial intelligence (AI), machine learning, the internet of things (IoT), robotics and even technology in our bodies, so that more people can participate in the economy, create wealth and improve the state of the world.

However, this extraordinary technology may be stalled, sidetracked, captured or otherwise suboptimized depending on how all the stakeholders behave in stewarding this set of resources – i.e. how it is governed.

At the overall ecosystem level, we look at the need for a proper legal structure, regulatory restraint, diversity of viewpoints and scientific research in tandem with business development. We introduce each of the eight stakeholders in the ecosystem: innovators, venture capitalists, banks and financial services, developers, academics, non-governmental organizations (NGOs), government bodies, and users or citizens.

The internet is entering a second era that’s based on blockchain. The last few decades brought us the internet of information. We are now witnessing the rise of the internet of value. Where the first era was sparked by a convergence of computing and communications technologies, this second era will be powered by a clever combination of cryptography, mathematics, software engineering and behavioural economics. It is blockchain technology, also called distributed ledger technology. Like the internet before it, the blockchain promises to upend business models and disrupt industries. It is pushing us to challenge how we have structured society, defined value and rewarded participation.
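The core mechanism behind that integrity claim is easy to show in miniature. Below is a toy, single-node sketch of a hash-linked ledger; it is not bitcoin’s or Ethereum’s actual implementation, and it omits the peer-to-peer replication, consensus, and mining that real networks add on top of this basic structure.

```python
# Toy illustration of why a blockchain is hard to alter: each block stores the
# hash of the previous block, so changing any historical record breaks every
# later link. No networking or consensus is shown here.
import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1] if chain else None
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "data": data,
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    })

def is_valid(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, "Alice pays Bob 5 units")
add_block(ledger, "Bob pays Carol 2 units")
print(is_valid(ledger))                           # True
ledger[0]["data"] = "Alice pays Bob 500 units"    # tamper with history
print(is_valid(ledger))                           # False: the link to block 0 breaks
```

In a real network, every participant holds a copy of the chain and new blocks are accepted only by consensus, which is what turns this simple hash-linking into the “integrity without a trusted third party” the report describes.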

 

 

From DSC:
Institutions of higher education need to put the topic of blockchain-based technologies on their radars, as blockchain could impact how people get their credentials in the future. It could easily turn out that community colleges, colleges, and universities will be joined by many other kinds of organizations in being able to offer credentials to their learners.

AIG teams with IBM to use blockchain for ‘smart’ insurance policy — from reuters.com by Suzanne Barlyn

Excerpt (emphasis DSC):

Insurer American International Group Inc has partnered with International Business Machines Corp to develop a “smart” insurance policy that uses blockchain to manage complex international coverage, the companies said on Wednesday.

AIG and IBM completed a pilot of a so-called “smart contract” multi-national policy for Standard Chartered Bank PLC which the companies said is the first of its kind using blockchain’s digital ledger technology.

IBM has been partnering with leading companies in various industries, including Danish transport company Maersk, to create blockchain-based products that can streamline complex international dealings across sectors.

 

Blockchain technology, which powers the digital currency bitcoin, enables data sharing across a network of individual computers. It has gained worldwide popularity due to its usefulness in recording and keeping track of assets or transactions across all industries.

 

 

From DSC:
Why post this item? Because IBM and others are experimenting with and investing millions into blockchain-based technologies; and because the manner in which credentials are stored and recognized will most likely be significantly impacted by blockchain-based technologies. Earlier this year at the Next Generation Learning Spaces Conference in San Diego, I mentioned that this topic of blockchain-based technologies is something that should be on our radars within higher education.

From DSC:
This type of technology could be good, or it could be bad…or, like many technologies, it could be both — it depends upon how it’s used. The resources below mention some positive applications, but also some troubling ones.


 

Lyrebird claims it can recreate any voice using just one minute of sample audio — from theverge.com by James Vincent
The results aren’t 100 percent convincing, but it’s a sign of things to come

Excerpt:

Artificial intelligence is making human speech as malleable and replicable as pixels. Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

Also see:

 

Imitating people’s speech patterns precisely could bring trouble — from economist.com
You took the words right out of my mouth

Excerpt:

UTTER 160 or so French or English phrases into a phone app developed by CandyVoice, a new Parisian company, and the app’s software will reassemble tiny slices of those sounds to enunciate, in a plausible simulacrum of your own dulcet tones, whatever typed words it is subsequently fed. In effect, the app has cloned your voice. The result still sounds a little synthetic but CandyVoice’s boss, Jean-Luc Crébouw, reckons advances in the firm’s algorithms will render it increasingly natural. Similar software for English and four widely spoken Indian languages, developed under the name of Festvox, by Carnegie Mellon University’s Language Technologies Institute, is also available. And Baidu, a Chinese internet giant, says it has software that needs only 50 sentences to simulate a person’s voice.

Until recently, voice cloning—or voice banking, as it was then known—was a bespoke industry which served those at risk of losing the power of speech to cancer or surgery.

More troubling, any voice—including that of a stranger—can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama, Birmingham, led by Nitesh Saxena, were able to use Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer.

 

 

Per Candyvoice.com:

An expert in digital voice processing, CandyVoice offers software to facilitate and improve vocal communication between people and communicating objects, with applications in:

Health
Customize augmentative and alternative communication devices by integrating your users’ personal voice models

Robots & communicating objects
Improve communication with robots through voice conversion, customized TTS, and noise filtering

Video games
Enhance the gaming experience by integrating real-time voice conversion for characters and customized TTS

From DSC:
Given this type of technology, what’s to keep someone from cloning a voice, assembling whatever they want that person to say, and then making it appear that Alexa recorded that person saying it?

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What the potential ramifications for higher education might be from the changes just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, deep learning, etc.), automation, and robotics
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How we, as a nation, should prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment

As found in the April 9th, 2017 edition of our local newspaper here:

When even our local newspaper is picking up on this trend, you know it is real and has some significance to it.

 

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR. NPR is having a journalist travel across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org.

 

 

What changes do institutions of traditional higher education immediately need to begin planning for? Initiating?

What changes should be planned for, and initiated, in the way(s) that we accredit new programs?

 

 

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant, up-to-date streams of content and learning experiences. This could happen via the addition/use of smaller learning hubs, some of them makeshift hubs operating at locations that these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

Addendum from the corporate world (emphasis DSC):



 

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?
 

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini  says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

Infected Vending Machines And Light Bulbs DDoS A University — from forbes.com by Lee Mathews; with a shout out to eduwire for this resource

Excerpt:

IoT devices have become a favorite weapon of cybercriminals. Their generally substandard security — and the sheer numbers of connected devices — make them an enticing target. We’ve seen what a massive IoT botnet is capable of doing, but even a relatively small one can cause a significant amount of trouble.

A few thousand infected IoT devices can cut a university off from the Internet, according to an incident that the Verizon RISK (Research, Investigations, Solutions and Knowledge) team was asked to assist with. All the attacker had to do was re-program the devices so they would periodically try to connect to seafood-related websites.

How can that simple act grind Internet access to a halt across an entire university network? By training around 5,000 devices to send DNS queries simultaneously…
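As a purely illustrative aside (not something from the Verizon RISK write-up), here is one simple defensive check a campus network team might run against this kind of incident: count recent DNS queries per device and flag anything far above a normal rate. The function name, thresholds, and log format below are all hypothetical.

```python
# Hypothetical sketch: flag devices issuing an unusually high number of DNS
# queries in a recent time window. Real monitoring is far more sophisticated.
from collections import Counter

def flag_noisy_devices(dns_log, window_seconds=60, threshold=100):
    """dns_log: iterable of (timestamp, device_ip, queried_domain) tuples."""
    if not dns_log:
        return []
    latest = max(ts for ts, _, _ in dns_log)
    recent = [ip for ts, ip, _ in dns_log if latest - ts <= window_seconds]
    counts = Counter(recent)
    return [(ip, n) for ip, n in counts.most_common() if n >= threshold]

# e.g. flag_noisy_devices(log) -> [("10.0.42.7", 1350), ("10.0.42.8", 1322), ...]
```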

 

 

Hackers Use New Tactic at Austrian Hotel: Locking the Doors — from nytimes.com by Dan Bilefsky

Excerpt:

The ransom demand arrived one recent morning by email, after about a dozen guests were locked out of their rooms at the lakeside Alpine hotel in Austria.

The electronic key system at the picturesque Romantik Seehotel Jaegerwirt had been infiltrated, and the hotel was locked out of its own computer system, leaving guests stranded in the lobby, causing confusion and panic.

“Good morning?” the email began, according to the hotel’s managing director, Christoph Brandstaetter. It went on to demand a ransom of two Bitcoins, or about $1,800, and warned that the cost would double if the hotel did not comply with the demand by the end of the day, Jan. 22.

Mr. Brandstaetter said the email included details of a “Bitcoin wallet” — the account in which to deposit the money — and ended with the words, “Have a nice day!”

 

“Ransomware is becoming a pandemic,” said Tony Neate, a former British police officer who investigated cybercrime for 15 years. “With the internet, anything can be switched on and off, from computers to cameras to baby monitors.”

 

To guard against future attacks, however, he said the Romantik Seehotel Jaegerwirt was considering replacing its electronic keys with old-fashioned door locks and real keys of the type used when his great-grandfather founded the hotel. “The securest way not to get hacked,” he said, “is to be offline and to use keys.”

Regulation of the Internet of Things — from schneier.com by Bruce Schneier

Excerpt (emphasis DSC):

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

 

 

From DSC:
We have to do something about these security-related issues — now! If not, you can kiss the Internet of Things goodbye — or at least I sure hope so. Don’t get me wrong. I’d like to see the Internet of Things come to fruition in many areas. However, if governments and law enforcement agencies aren’t going to get involved to fix the problems, I don’t want to see the Internet of Things take off. The consequences of not getting this right are too huge — with costly ramifications. As Bruce mentions in his article, it will likely take government regulation before this type of issue goes away.

 

 

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Bruce Schneier

Addendum on 2/15/17:

I was glad to learn of the following news today:

  • NXP Unveils Secure Platform Solution for the IoT — from finance.yahoo.com
    Excerpt:
    SAN FRANCISCO, Feb. 13, 2017 (GLOBE NEWSWIRE) — RSA Conference 2017 – Electronic security and trust are key concerns in the digital era, which are magnified as everything becomes connected in the Internet of Things (IoT). NXP Semiconductors N.V. (NXPI) today disclosed details of a secure platform for building trusted connected products. The QorIQ Layerscape Secure Platform, built on the NXP trust architecture technology, enables developers of IoT equipment to easily build secure and trusted systems. The platform provides a complete set of hardware, software and process capabilities to embed security and trust into every aspect of a product’s life cycle.

    Recent security breaches show that even mundane devices like web-cameras or set-top boxes can be used to both attack the Internet infrastructure and/or spy on their owners. IoT solutions cannot be secured against such misuse unless they are built on technology that addresses all aspects of a secure and trusted product lifecycle. In offering the Layerscape Secure Platform, NXP leverages decades of experience supplying secure embedded systems for military, aerospace, and industrial markets.
