Amazon’s new bricks-&-mortar bookstore nails what the web couldn’t — from hackernoon.com by Pat Ryan

or

A title from DSC:
How Amazon uses its vast data resources to reinvent the bookstore

 

Excerpt (emphasis DSC):

Amazon’s First Foray into Physical Retail — While Utilitarian — Takes Discovery to New Levels
As a long-time city dweller living in a neighborhood full of history, I had mixed feelings about the arrival of Amazon’s first bricks-and-mortar bookstore in a city neighborhood (the first four are located in malls). Like most of my neighbors around Chicago’s Southport Corridor, I prefer the charm of owner-operated boutiques. Yet as a tech entrepreneur who holds Amazon founder Jeff Bezos in the highest esteem, I was excited to see how Amazon would reimagine the traditional bookstore given their customer obsession and their treasure trove of user data. Here’s what I discovered…

The Bottom Line:
I will still go to Amazon.com for the job of ordering a book that I already know that I want (and to the local Barnes and Noble if I need it today). But when I need to discover a book for gifts (Father’s Day is coming up soon enough) or for my own interest, nothing that I have seen compares to Amazon Books. We had an amazing experience and discovered more books in 20 minutes than we had in the past month or two.

 

 

The physical manifestation of the “if you like…then you’ll love…” recommendation

 

 

 

The ultra metric, combining insights from disparate sources, seems more compelling than standard best-seller lists
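As a purely illustrative aside: a composite metric of this kind could be as simple as a weighted blend of normalized signals. The sketch below is hypothetical — the field names, weights, and normalizing constants are invented for illustration and are not Amazon’s actual method.

```python
# Illustrative only: a toy "composite popularity" score blending several signals.
# All field names, weights, and normalizing constants are hypothetical.
def composite_score(book, weights=None):
    weights = weights or {
        'avg_rating': 0.40,        # shopper ratings
        'review_count': 0.20,      # volume of reviews
        'sales_rank': 0.25,        # lower rank = better seller
        'preorder_velocity': 0.15  # pre-order interest
    }
    signals = {
        'avg_rating': book['avg_rating'] / 5.0,
        'review_count': min(book['review_count'] / 10_000, 1.0),
        'sales_rank': 1.0 - min(book['sales_rank'] / 1_000_000, 1.0),
        'preorder_velocity': min(book['preorders_last_week'] / 500, 1.0),
    }
    return sum(weights[k] * signals[k] for k in weights)

books = [
    {'title': 'A', 'avg_rating': 4.8, 'review_count': 12_000,
     'sales_rank': 350, 'preorders_last_week': 40},
    {'title': 'B', 'avg_rating': 4.2, 'review_count': 900,
     'sales_rank': 120, 'preorders_last_week': 300},
]
print(sorted(books, key=composite_score, reverse=True)[0]['title'])
```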

 

 

 

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

 

 

 

Retailers cut tens of thousands of jobs. Again. — from money.cnn.com by Paul R. La Monica
The dramatic reshaping of the American retail industry has, unfortunately, led to massive job losses in the sector.

Excerpt (emphasis DSC):

The federal government said Friday that retailers shed nearly 30,000 jobs in March. That follows a decline of more than 30,000 retail jobs in the previous month.

So-called general merchandise stores are hurting the most.

That part of the sector, which includes struggling companies like Macy’s, Sears, and J.C. Penney, lost 35,000 jobs last month. Nearly 90,000 jobs have been eliminated since last October.

“There is no question that the Amazon effect is overwhelming,” said Scott Clemons, chief investment strategist of private banking for BBH. “There has been a shift in the way we buy things as opposed to a shift in the amount of money spent.”

To that end, Amazon just announced plans to hire 30,000 part-time workers.

 

From DSC:
One of the reasons that I’m posting this item is for those who say disruption isn’t real — that it’s only a buzzword…

A second reason that I’m posting this item is because those of us working within higher education should take note of the changes in the world of retail and learn the lesson now before the “Next Amazon.com of Higher Education*” comes on the scene. Though this organization has yet to materialize, the pieces of its foundation are beginning to come together — such as the ingredients, trends, and developments that I’ve been tracking in my “Learning from the Living [Class] Room” vision.

This new organization will be highly disruptive to institutions of traditional higher education.

If you were in an influential position at Macy’s, Sears, and/or at J.C. Penney today, and you could travel back in time…what would you do?

We in higher education have the luxury of learning from what’s been happening in the retail business. Let’s be sure to learn our lesson.

 



 

* Effective today, what I used to call the “Forthcoming Walmart of Education” — which has already been occurring to some degree with things such as MOOCs and collaborations/partnerships such as the one between the Georgia Institute of Technology, Udacity, and AT&T — I now call the “Next Amazon.com of Higher Education.”

Cost. Convenience. Selection. Offering a service on demand (i.e., being quick, responsive, and available 24×7). These are all powerful forces.

 



 

P.S. Some will say you can’t possibly compare the worlds of retail and higher education — and that may be true as of 2017. However, if:

  • the costs of higher education keep going up and we continue to turn a deaf ear to the struggling families/students/adult learners/etc. out there
  • alternatives to traditional higher education continue to appear on the landscape
  • the Federal Government continues to be more open to financially supporting such alternatives
  • technologies such as artificial intelligence, machine learning, and deep learning continue to get better and more powerful — to the point that they can effectively deliver a personalized education (one that is likely to be fully online and that utilizes a team of specialists to create and deliver the learning experiences)
  • people lose their jobs to artificial intelligence, robotics, and automation and need to quickly reinvent themselves

…I can assure you that people will find other ways to make ends meet. The Next Amazon.com of Education will be just what they are looking for.

The Hidden Costs of Active Learning — by Thomas Mennella
Flipped and active learning truly are a better way for students to learn, but they also may be a fast track to instructor burnout.

Excerpt:

The time has come for us to have a discussion about the hidden cost of active learning in higher education. Soon, gone will be the days of instructors arriving at a lecture hall, delivering a 75-minute speech and leaving. Gone will be the days of midterms and finals being the sole forms of assessing student learning. For me, these days have already passed, and good riddance. These are largely ineffective teaching and learning strategies. Today’s college classroom is becoming dynamic, active and student-centered. Additionally, the learning never stops because the dialogue between student and instructor persists endlessly over the internet. Trust me when I say that this can be exhausting. With constant ‘touch-points,’ ‘personalized learning opportunities’ and the like, the notion that a college instructor’s 12 contact hours per week amount to anything even remotely close to 12 hours of work is beyond unreasonable.

We need to reevaluate how we measure, assign and compensate faculty teaching loads within an active learning framework. We need to recognize that instructors teaching in these innovative ways are doing more, and spending more hours, than their more traditional colleagues. And we must accept that a failure to recognize and remedy these ‘new normals’ risks burning out a generation of dedicated and passionate instructors. Flipped learning works and active learning works, but they’re very challenging ways to teach. I still say I will never teach another way again … I’m just not sure for how much longer that can be.

 

From DSC:
The above article prompted me to revisit the question of how we might move toward using more team-based approaches. Thomas Mennella seems to be doing an incredible job — but grading 344 assignments each week, or 3,784 assignments this semester, is most definitely a recipe for burnout.

Then, as I pondered this situation, I recalled an article that discusses Thomas Frey’s prediction that the largest internet-based company of 2030 will be focused on education.

I wondered…who will be the Amazon.com of the future of education? 

Such an organization will likely utilize a team-based approach to create and deliver excellent learning experiences — and will also likely leverage the power of artificial intelligence/machine learning/deep learning as a piece of their strategy.

The Best Amazon Alexa Skills — from in.pcmag.com by Eric Griffith

Example skills:

 

WebMD

 

 

5 Alexa skills to try this week — from venturebeat.com by Khari Johnson

Excerpt:

Below are five noteworthy Amazon Alexa skills worth trying, chosen from New, Most Enabled Skills, Food and Drink, and Customer Favorites categories in the Alexa Skills Marketplace.

 

From DSC:
I’d like to see how the Verse of the Day skill performs.

From DSC:
This topic reminds me of a slide from my NGLS 2017 Conference presentation:

Samsung’s personal assistant Bixby will take on Amazon Alexa, Apple Siri — from theaustralian.com.au by Chris Griffith

Excerpt:

Samsung has published details of its Bixby personal assistant, which will debut on its Galaxy S8 smartphone in New York next week.

Bixby will go head-to-head with Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri, in a battle to lure you into their artificial intelligence world.

In future, the personal assistant that you like may not only influence which phone you buy, but also the home automation system that you adopt.

This is because these personal assistants cross over into home use, which is why Samsung would bother with one of its own.

Given that the S8 will run Android Nougat, which includes Google Assistant, users will have two personal assistants on their phone, unless somehow one is disabled.

 

 

There are a lot of red flags with Samsung’s AI assistant in the new Galaxy S8 — from businessinsider.com by Steve Kovach

Excerpt:

There’s Siri. And Alexa. And Google Assistant. And Cortana. Now add another one of those digital assistants to the mix: Bixby, the new helper that lives inside Samsung’s latest phone, the Galaxy S8. But out of all the assistants that have launched so far, Bixby is the most curious and the most limited.

Samsung’s goal with Bixby was to create an assistant that can mimic all the functions you’re used to performing by tapping on your screen through voice commands. The theory is that phones are too hard to manage, so simply letting users tell their phone what they want to happen will make things a lot easier.

 

 

Samsung Galaxy S8: Hands on with the world’s most ambitious phone — from telegraph.co.uk by James Titcomb

Excerpt:

The S8 will also feature Bixby, Samsung’s new intelligent assistant. The company says Bixby is a bigger deal than Siri or Google Assistant – as well as simply asking for the weather, it will be deeply integrated with the phone’s everyday functions such as taking photos and sending them to people. Samsung has put a dedicated Bixby button on the S8 on the left hand side, but I wasn’t able to try it out because it won’t launch in the UK until later this year.

 

 

Samsung Galaxy S8 launch: Samsung reveals its long-awaited iPhone killer — from telegraph.co.uk by James Titcomb

 

 

 


Also see:


 

Recent years have brought some rapid development in the area of artificially intelligent personal assistants. Future iterations of the technology could fully revamp the way we interact with our devices.

 

 

 

21 bot experts make their predictions for 2017 — from venturebeat.com by Adelyn Zhou

Excerpt:

2016 was a huge year for bots, with major platforms like Facebook launching bots for Messenger, and Amazon and Google heavily pushing their digital assistants. Looking forward to 2017, we asked 21 bot experts, entrepreneurs, and executives to share their predictions for how bots will continue to evolve in the coming year.

From Jordi Torras, founder and CEO, Inbenta:
“Chatbots will get increasingly smarter, thanks to the adoption of sophisticated AI algorithms and machine learning. But also they will specialize more in specific tasks, like online purchases, customer support, or online advice. First attempts of chatbot interoperability will start to appear, with generalist chatbots, like Siri or Alexa, connecting to specialized enterprise chatbots to accomplish specific tasks. Functions traditionally performed by search engines will be increasingly performed by chatbots.”

From DSC:
For those of us working within higher education, chatbots need to be on our radar. Here are two slides from my NGLS 2017 presentation.

 

 

 

 

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

 

 

 


Recorded Saturday, February 25, 2017, and published on March 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

 

 

 

 

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

 

 


(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, note taking, actions and even translation into other languages.

 

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let meeting participants access features by voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript. [See the sketch after this list.]
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
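To give a rough sense of what the translation feature could involve under the hood, here is a minimal, illustrative sketch that calls IBM’s Watson Language Translator service through the ibm-watson Python SDK. The API key, service URL, target language, and the assumption that the whiteboard uses this particular SDK are placeholders, not details from the announcement.

```python
# Illustrative sketch: translating one meeting utterance for on-screen display
# with IBM's Watson Language Translator (ibm-watson Python SDK).
# The credentials and service URL below are placeholders.
from ibm_watson import LanguageTranslatorV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator('YOUR_API_KEY')  # placeholder credential
translator = LanguageTranslatorV3(version='2018-05-01', authenticator=authenticator)
translator.set_service_url('https://api.us-south.language-translator.watson.cloud.ibm.com')

def translate_utterance(text, target_lang='es'):
    """Translate an English utterance and return the text to show on the whiteboard."""
    result = translator.translate(text=text, model_id=f'en-{target_lang}').get_result()
    return result['translations'][0]['translation']

print(translate_utterance("Let's move on to the next agenda item."))
```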

 


From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration (whether you are working remotely or working with others in the same physical space) — you have one incredibly powerful platform!

#NLP | #AI | #CognitiveComputing | #SmartClassrooms
#LearningSpaces | #Collaboration | #Meetings

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.

 

 

Amazon deepens university ties in artificial intelligence race — from reuters.com by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster – a tactic it now is extending to the classroom.
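For readers wondering what “building a skill” actually entails, here is a minimal, illustrative sketch of the kind of handler a student might write: an AWS Lambda function that answers one made-up intent. The skill and its “HelloIntent” are hypothetical; only the general request/response envelope follows Alexa’s custom-skill JSON format.

```python
# Minimal, illustrative Alexa custom-skill handler (AWS Lambda, Python).
# The skill and its "HelloIntent" are hypothetical examples.
def lambda_handler(event, context):
    request = event['request']

    if request['type'] == 'LaunchRequest':
        speech = "Welcome. Ask me to say hello."
    elif request['type'] == 'IntentRequest' and request['intent']['name'] == 'HelloIntent':
        speech = "Hello from a student-built Alexa skill."
    else:
        speech = "Sorry, I didn't catch that."

    # Alexa expects this JSON envelope back from the skill endpoint.
    return {
        'version': '1.0',
        'response': {
            'outputSpeech': {'type': 'PlainText', 'text': speech},
            'shouldEndSession': True,
        },
    }
```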

 

 

The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

 

 


Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation and is investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea and Japan collectively account for nearly half of the world’s shipments of articulated robots.

     

 

Artificial Intelligence – Research Areas

Adobe unveils new Microsoft HoloLens and Amazon Alexa integrations — from geekwire.com by Nat Levy

 

 

 

 

Introducing the AR Landscape — from medium.com by Super Ventures
Mapping out the augmented reality ecosystem

 

 

 

 

Alibaba leads $18M investment in car navigation augmented reality outfit WayRay — from siliconangle.com by Kyt Dotson

Excerpt:

WayRay boasts the 2015 launch of Navion, what it calls the “first ever holographic navigator” for cars that uses AR technology to project a Global Positioning System, or GPS, info overlay onto the car’s windshield.

Just as in a video game, users of the GPS need only follow green arrows projected as if onto the road in front of the car, providing visual directions. More importantly, because the system displays on the windscreen, it does not require a cumbersome headset or eyewear worn by the driver. It integrates directly into the dashboard of the car.

The system also recognizes simple voice and gesture commands from the driver — eschewing turning of knobs or pressing buttons. The objective of the system is to allow the driver to spend more time paying attention to the road, with hands on the wheel. Many modern-day onboard GPS systems also recognize voice commands but require the driver to glance over at a screen.
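Setting WayRay’s actual implementation aside, the basic geometry behind pointing an arrow “down the road” is easy to sketch: compute the compass bearing from the car’s GPS fix to the next route waypoint, then take that bearing relative to the car’s current heading. Everything below — the function names and the example coordinates — is illustrative only.

```python
# Illustrative geometry only — not WayRay's code.
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Compass bearing in degrees from the car's position to the next waypoint."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def arrow_angle(car_heading_deg, lat1, lon1, lat2, lon2):
    """Signed angle (degrees) the projected arrow should make relative to the car's heading."""
    delta = initial_bearing(lat1, lon1, lat2, lon2) - car_heading_deg
    return (delta + 180) % 360 - 180  # normalize to [-180, 180)

# Example: car heading due north, next waypoint slightly to the north-east.
print(arrow_angle(0.0, 41.94, -87.66, 41.95, -87.65))
```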

 

 

Viro Media Is A Tool For Creating Simple Mobile VR Apps For Businesses — from uploadvr.com by Charles Singletary

Excerpt:

Viro Media is supplying a platform of its own, and its hope is to offer the simplest experience where companies can code once and have their content available on multiple mobile platforms. We chatted with Viro Media CEO Danny Moon about the tool and what creators can expect to accomplish with it.

 

 

Listen to these podcasts to dive into virtual reality — from haptic.al by Deniz Ergürel
We curated some great episodes with our friends at RadioPublic

Excerpt:

Virtual reality can transport us to new places, where we can experience new worlds and people, like no other. It is a whole new medium poised to change the future of gaming, education, health care and enterprise. Today we are starting a new series to help you discover what this new technology promises. With the help of our friends at RadioPublic, we are curating a quick library of podcasts related to virtual reality technology.

 

Psychologists using virtual reality to help treat PTSD in veterans — from kxan.com by Amanda Brandeis

Excerpt:

AUSTIN (KXAN) — Virtual reality is no longer reserved for entertainment and gamers; it’s helping solve real-world problems. Some of the latest advancements are being demonstrated at South by Southwest.

Dr. Skip Rizzo directs the Medical Virtual Reality Lab at the University of Southern California’s Institute for Creative Technologies. He’s helping veterans who suffer from post-traumatic stress disorder (PTSD). He’s teamed up with Dell to develop and spread the technology to more people.

 

 

 

NVIDIA Jetson Enables Artec 3D, Live Planet to Create VR Content in Real Time — from blogs.nvidia.com
While VR revolutionizes fields across everyday life — entertainment, medicine, architecture, education and product design — creating VR content remains among its biggest challenges.

Excerpt:

At the NVIDIA Jetson TX2 launch [on March 7, 2017] in San Francisco, [NVIDIA] showed how the platform not only accelerates AI computing, graphics and computer vision, but also powers the workflows used to create VR content. Artec 3D debuted at the event the first handheld scanner offering real-time 3D capture, fusion, modeling and visualization on its own display or streamed to phones and tablets.

 

 

Project Empathy
A collection of virtual reality experiences that help us see the world through the eyes of another

Excerpt:

Benefit Studio’s virtual reality series, Project Empathy is a collection of thoughtful, evocative and surprising experiences by some of the finest creators in entertainment, technology and journalism.

Each film is designed to create empathy through a first-person experience–from being a child inside the U.S. prison system to being a widow cast away from society in India.  Individually, each of the films in this series presents its filmmaker’s unique vision, portraying an intimate experience through the eyes of someone whose story has been lost or overlooked and yet is integral to the larger story of our global society. Collectively, these creatively distinct films weave together a colorful tapestry of what it means to be human today.

 

 

 

 

Work in a high-risk industry? Virtual reality may soon become part of routine training — from ibtimes.co.uk by Owen Hughes
Immersive training videos could be used to train workers in construction, mining and nuclear power.

 

 

 

At Syracuse University, more students are getting ahold of virtual reality — from dailyorange.com by Haley Kim

 

 

 

As Instructors Experiment With VR, a Shift From ‘Looking’ to ‘Interacting’ — from edsurge.com by Marguerite McNeal

Excerpt:

Most introductory geology professors teach students about earthquakes by assigning readings and showing diagrams of tectonic plates and fault lines to the class. But Paul Low is not most instructors.

“You guys can go wherever you like,” he tells a group of learners. “I’m going to go over to the epicenter and fly through and just kind of get a feel.”

Low is leading a virtual tour of the Earth’s bowels, directly beneath New Zealand’s south island, where a 7.8 magnitude earthquake struck last November. Outfitted with headsets and hand controllers, the students are “flying” around the seismic hotbed and navigating through layers of the Earth’s surface.

Low, who taught undergraduate geology and environmental sciences and is now a research associate at Washington and Lee University, is among a small group of profs-turned-technologists who are experimenting with virtual reality’s applications in higher education.

 

 

 

These University Courses Are Teaching Students the Skills to Work in VR — from uploadvr.com

Excerpt:

“As virtual reality moves more towards the mainstream through the development of new, more affordable consumer technologies, a way needs to be found for students to translate what they learn in academic situations into careers within the industry,” says Frankie Cavanagh, a lecturer at Northumbria University. He founded a company called Somniator last year with the aim not only of developing VR games, but to provide a bridge between higher education and the technology sector. Over 70 students from Newcastle University, Northumbria University and Gateshead College in the UK have been placed so far through the program, working on real games as part of their degrees and getting paid for additional work commissioned.

 

Working with VR already translates into an extraordinarily diverse range of possible career paths, and those options are only going to become even broader as the industry matures in the next few years.

 

 

Scope AR Brings Live, Interactive AR Video Support to Caterpillar Customers — from augmented.reality.news by Tommy Palladino

Excerpt:

Customer service just got a lot more interesting. Construction equipment manufacturer Caterpillar just announced official availability of what they’re calling the CAT LIVESHARE solution to customer support, which builds augmented reality capabilities into the platform. They’ve partnered with Scope AR, a company that develops technical support and training documentation tools using augmented reality. The CAT LIVESHARE support system uses Scope AR’s Remote AR software as the backbone.

 

 

 

New virtual reality tool helps architects create dementia-friendly environments — from dezeen.com by Jessica Mairs

 

Visual showing appearance of a room without and with the Virtual Reality Empathy Platform headset

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

 

 

 

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

 

 

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 

 

 