The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

A smorgasbord of ideas to put on your organization’s radar! [Christian]

From DSC:
At the Next Generation Learning Spaces Conference, held recently in San Diego, CA, I moderated a panel discussion re: AR, VR, and MR. I opened the discussion with some introductory ideas and remarks, meant to make sure that numerous ideas were on the radars of attendees’ organizations. Then Vinay and Carrie did a super job of addressing several topics and questions (Mary was unable to make it that day, as she got stuck in the UK due to transportation-related issues).

That said, I didn’t get a chance to finish the second part of the presentation, which I’ve listed below in both 4:3 and 16:9 formats. So I made a recording of those ideas, and I’m relaying it to you in the hope that it can help you and your organization.

Presentations/recordings:

Audio/video recording (187 MB MP4 file)

Again, I hope you find this information helpful.

Thanks,
Daniel

From DSC:
Chatbots were another one of the topics I mentioned at the Next Generation Learning Spaces Conference last week. For those of us working within higher education, chatbots need to be on our radars!

An article from today on this:

  • Using AI Chatbots to Freeze ‘Summer Melt’ in Higher Ed — from campustechnology.com by Sri Ravipati
    Excerpt:
    Students who accept offers of admission into a college or university don’t always show up for fall enrollment — a phenomenon known as “summer melt.” It’s a problem that Georgia State University (GSU) is all too familiar with: The institution’s summer melt rates have increased from 12 percent to nearly 19 percent in recent years. With traditional methods of reaching students (i.e. snail mail, e-mail and phone calls) producing feeble results, GSU decided to try another approach: smart text messaging.

    According to a recent case study, GSU was well aware of the advantages of communicating with students via text messages, but was concerned about the additional workload that text messaging students would place on existing staff. So, the university partnered with AdmitHub, a Boston-based ed tech startup, to test out text-based intervention. AdmitHub works with higher ed institutions to create a virtual “campus coach” that embodies the collective knowledge and unique spirit of a school’s community. It integrates conversational artificial intelligence (AI) with human expertise to guide students to and through college.
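
The pattern the case study describes is simple enough to sketch: a texting bot answers routine enrollment questions itself and hands anything it can’t match to a human staff member. Below is a minimal, purely hypothetical illustration of that pattern; the questions, answers, and keyword matching are invented and are not AdmitHub’s actual system.

```python
# A hypothetical sketch of a "summer melt" texting assistant: routine
# questions get an immediate answer, everything else goes to a human.

FAQ = {
    "deposit": "Enrollment deposits are due May 1; you can pay in your student portal.",
    "orientation": "Orientation sessions run in June; sign-up opens in April.",
    "housing": "The housing application opens March 15.",
}

def reply(message, human_queue):
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer  # routine question: the bot answers directly
    # No confident match: route to a person rather than guessing.
    human_queue.append(message)
    return "Good question! Someone from the admissions team will text you back."

human_queue = []
print(reply("When is my enrollment deposit due?", human_queue))
print(reply("Can I defer my admission for a year?", human_queue))
print(human_queue)  # ['Can I defer my admission for a year?']
```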

One of the slides from my presentation on this:

Summer 2017 Human++ — from cambridge.nuvustudio.com
Human-Machine Intelligence, Hacking Drones, Bio Fashion, Augmented Video Games, Aerial Filmmaking, Smart Tools, Soft Robotics and more!

Excerpt:

NuVu is a place where young students grow their spirit of innovation. They use their curiosity and creativity to explore new ideas, and make their concepts come to life through our design process. Our model is based on the architecture studio model, and every summer we use imaginative themes to frame two-week-long Studios in which students dive into hands-on design, engineering, science, technology, art and more!

No hype, just fact: What artificial intelligence is – in simple business terms — from zdnet.com by Michael Krigsman
AI has become one of the great, meaningless buzzwords of our time. In this video, the Chief Data Scientist of Dun and Bradstreet explains AI in clear business terms.

Excerpt:

How do terms like machine learning, AI, and cognitive computing relate to one another?
They’re not synonymous. So, cognitive computing is very different than machine learning, and I will call both of them a type of AI. Just to try and describe those three. So, I would say artificial intelligence is all of that stuff I just described. It’s a collection of things designed to either mimic behavior, mimic thinking, behave intelligently, behave rationally, behave empathetically. Those are the systems and processes that are in the collection of soup that we call artificial intelligence.

Cognitive computing is primarily an IBM term. It’s a phenomenal approach to curating massive amounts of information that can be ingested into what’s called the cognitive stack. And then to be able to create connections among all of the ingested material, so that the user can discover a particular problem, or a particular question can be explored that hasn’t been anticipated.

Machine learning is almost the opposite of that. Where you have a goal function, you have something very specific that you try and define in the data. And, the machine learning will look at lots of disparate data, and try to create proximity to this goal function — basically try to find what you told it to look for. Typically, you do that by either training the system, or by watching it behave, and turning knobs and buttons, so there’s unsupervised, supervised learning. And that’s very, very different than cognitive computing.
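
That description of a goal function, labeled examples, and knob-turning can be made concrete in a few lines of code. The sketch below is mine, not from the article: it trains a toy linear model by gradient descent on a mean-squared-error goal function, where “supervised” means the labels y are exactly what we told the system to look for.

```python
# A toy supervised-learning loop: the "goal function" is mean squared
# error, and training turns the knobs w and b to move predictions
# closer to the labels.

def predict(w, b, x):
    return w * x + b  # a tiny linear model

def loss(w, b, data):
    # The goal function: average squared gap between prediction and label.
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def train(data, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        # Gradients of the goal function with respect to each knob.
        grad_w = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (predict(w, b, x) - y) for x, y in data) / len(data)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labeled examples generated by y = 2x + 1: the labels are
# "what we told it to look for."
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)
print(round(w, 2), round(b, 2))    # close to 2.0 and 1.0
print(round(loss(w, b, data), 6))  # near zero after training
```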

Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporations released 4,000 new skills for the voice assistant in just the last quarter.

Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I’d suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies (see the sketch below) to see whether they can help us do some of the heavy lifting for students learning about new topics.
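
For context, an Alexa “skill” is, at bottom, a small web service (often an AWS Lambda function) that receives JSON requests from the Alexa service and returns text to be spoken. Below is a hypothetical sketch of such a handler for an imagined course-assistant skill; the skill, intent name, slot, and responses are invented for illustration.

```python
# A hypothetical Alexa skill handler (an AWS Lambda function). Real skills
# also register their intents and slots in the Alexa developer console.

def lambda_handler(event, context):
    request = event["request"]

    if request["type"] == "LaunchRequest":
        text = "Welcome to the course assistant. Ask me to define a term."
    elif (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "DefineTermIntent"):
        term = request["intent"]["slots"]["Term"]["value"]
        text = "Here is a short definition of " + term + "."  # look up real content here
    else:
        text = "Sorry, I didn't catch that."

    # Alexa expects this JSON envelope; outputSpeech is what gets spoken.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }
```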

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

From DSC:
Though I’m generally pro-technology, there are several items in here that support the need for all members of society to be informed and to have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

Below are some example screenshots:

Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than arguing about the technologies behind them, we should focus on the functional elements that make up intelligence. By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.
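
One way to picture that framing is as a set of named capabilities that can be composed without reference to how each is implemented. The toy sketch below is my own illustration of the idea, not Hammond’s actual table; the element names are made up.

```python
# A toy illustration of the "functional elements" framing: describe a
# system by the capabilities it composes, not the technologies behind them.

# Each element maps an input to an output; whether it is implemented with
# rules, machine learning, or something else is deliberately out of scope.
elements = {
    "speech_to_text":      lambda audio: "which courses are still open",  # stub
    "intent_recognition":  lambda text: "FIND_OPEN_COURSES",              # stub
    "response_generation": lambda intent: "Three courses still have open seats.",
}

def compose(*names):
    """Chain named functional elements into a larger system."""
    def system(data):
        for name in names:
            data = elements[name](data)
        return data
    return system

voice_assistant = compose("speech_to_text", "intent_recognition", "response_generation")
print(voice_assistant(b"<audio>"))  # -> "Three courses still have open seats."
```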

Also see this article, which contains the graphic below:

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved in artificial intelligence.

Also relevant/see:

LinkedIn debuts new desktop version with ‘chatbots’, new search and more — from techcrunch.com by Ingrid Lunden

Excerpts:

After a short preview in September last year, and the small matter of getting acquired for $26.2 billion by Microsoft, today LinkedIn — the social network for the professional world with close to 470 million users — is finally unveiling its redesign of its desktop site, complete with a simpler, app-like look; a new messaging experience that appears as a pop-up on the homepage; and updated search features. LinkedIn said the new desktop version will be rolling out globally over the next few weeks.

Two of the most notable changes in this new version are how LinkedIn is approaching private messaging, and also how it’s moving to make its public messaging — its feeds — better and more used.

And now, LinkedIn is introducing its own form of ‘chatbots’. To be clear, I’m referring to these as chatbots, but LinkedIn is not — it doesn’t follow you and steer the whole conversation, but it’s a little helper that will suggest things to talk about to “break the ice” with people. It will also suggest people you can chat with at companies if you aren’t even at the ice-breaking point.
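
To make the “ice-breaker” idea concrete, here is a purely hypothetical sketch of how such suggestions could be generated from the overlap between two member profiles. The profile fields and message templates are invented; this is not LinkedIn’s actual logic.

```python
# A hypothetical "ice-breaker" generator: find shared attributes between
# two profiles and turn them into conversation openers.

def ice_breakers(me, them):
    suggestions = []
    for school in set(me["schools"]) & set(them["schools"]):
        suggestions.append("You both attended " + school + ".")
    shared = set(me["skills"]) & set(them["skills"])
    if shared:
        suggestions.append("You share an interest in " + ", ".join(sorted(shared)) + ".")
    if me["company"] == them["company"]:
        suggestions.append("You both work at " + me["company"] + ".")
    return suggestions

me = {"schools": ["Calvin College"], "skills": ["AI", "design"], "company": "Acme Corp"}
them = {"schools": ["Calvin College"], "skills": ["AI"], "company": "Initech"}
print(ice_breakers(me, them))
# ['You both attended Calvin College.', 'You share an interest in AI.']
```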

The feed itself is going to be filled with significantly more algorithms to suggest content to you — both organic and sponsored, “native” content — a new ad unit that LinkedIn is hoping to expand. Among the organic content, you’ll be seeing more suggestions of people to follow, as well as more content from influencers alongside news curated by LinkedIn itself and its editorial team.
