The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

SXSW Announces Winners for 2017 Accelerator Pitch Event — from prnewswire.com
Pitch competition showcased global startups featuring cutting-edge innovation in 10 technology categories

Excerpt:

The winners of the 2017 SXSW Accelerator Pitch Event are:

From DSC:
In the future, will Microsoft — via data supplied by LinkedIn and Lynda.com — use artificial intelligence, big data, and blockchain-related technologies to match employers with employees/freelancers?  If so, how would this impact higher education? Badging? Credentialing?

It’s something to put on our radars.

Excerpt:

A sneak peek at Recruitment in the AI era
With the global talent war at its peak, organisations are now looking at harnessing Artificial Intelligence (AI) capabilities to use search optimisation tools, data analytics, and talent mapping to reach the right talent for crucial job roles. Technology has been revolutionising the way recruitment works, with the entire process now automated with ATS and other talent management software. This saves HR managers the time and costs involved in recruiting, while allowing them to do away with third-party service providers for talent sourcing such as employment bureaus and traditional recruitment agencies. With modern talent acquisition technology empowered by AI, the time taken for recruitment is halved and the search is narrowed to only the best talent that matches the job requirements. There is no need for human intervention and manual personality matching to choose the best candidates for suitable job roles.

Talent mapping, with the help of big data, is definitely the next step in recruitment technology. With talent mapping, recruiters can determine their candidate needs well in advance and develop a strategic, long-term hiring plan. This includes filling any skill gaps, bolstering the team for sudden changes in the workplace, or simply having suitable talent in mind for the future. All of these, when prepared ahead of time, can save companies trouble and time later. Recruiters who understand how AI works and harness the technology to save time and costs will be rewarded with improved quality of hires, enhanced efficiency, a more productive workforce and less turnover.
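
To make the kind of skills-based matching described above concrete, here is a toy sketch in Python. No real ATS or vendor API is used, and the skills and names are made up; a production matcher would weigh far more signals than a simple overlap score.

```python
# Toy sketch only (no real ATS or vendor API): rank candidates against a
# job's required skills -- the kind of matching the excerpt describes.
def match_score(required: set, candidate_skills: set) -> float:
    """Return the fraction of required skills the candidate covers."""
    return len(required & candidate_skills) / len(required) if required else 0.0

required = {"python", "sql", "data visualization"}
candidates = {
    "A. Jones": {"python", "sql", "tableau", "data visualization"},
    "B. Smith": {"java", "sql"},
    "C. Lee":   {"python", "excel"},
}

ranked = sorted(candidates, key=lambda name: match_score(required, candidates[name]), reverse=True)
for name in ranked:
    print(name, f"{match_score(required, candidates[name]):.0%}")
```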

 

From DSC:
At the recent Next Generation Learning Spaces Conference, in my introductory piece for our panel discussion, I relayed several ideas/areas that should be on our institutions’ radars. That is, at least someone at each of our institutions of higher education should be aware of these things and be pulse-checking them as time goes by.

One of these ideas/areas involved the use of blockchain technologies:

If #blockchain technologies are successful within the financial/banking world, then it’s highly likely that other use cases will be developed as well (i.e., the trust in blockchain-enabled applications will be there already).

Along those lines, if that occurs, then colleges and universities are likely to become only one of the feeds into someone’s cloud-based, lifelong learning profile. I’ve listed several more sources of credentials below:

Given the trend towards more competency-based education (CBE) and the increased experimentation with badges, blockchain could increasingly move onto the scene.

In fact, I could see a day when an individual learner will be able to establish who can and can’t access their learner profile, and who can and can’t feed information and updates into it.
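
As a purely illustrative sketch of that idea (a learner-owned profile where the learner decides who may read it and which feeds may write to it), here is some toy Python. It involves no blockchain and no existing product; every name in it is invented.

```python
# Purely illustrative sketch of the learner-owned profile idea above (not any
# existing product, and no blockchain here): multiple credential feeds write
# into one profile, and the learner controls who may read or write.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    owner: str
    readers: set = field(default_factory=set)   # who may view the profile
    writers: set = field(default_factory=set)   # which feeds may add credentials
    credentials: list = field(default_factory=list)

    def grant(self, party: str, can_write: bool = False) -> None:
        self.readers.add(party)
        if can_write:
            self.writers.add(party)

    def add_credential(self, issuer: str, title: str) -> None:
        if issuer not in self.writers:
            raise PermissionError(f"{issuer} is not authorized to write to this profile")
        self.credentials.append({"issuer": issuer, "title": title})

profile = LearnerProfile(owner="learner@example.org")
profile.grant("State University", can_write=True)   # degree-granting feed
profile.grant("Acme Corp HR")                        # read-only access for an employer
profile.add_credential("State University", "Data Analysis Badge")
print(profile.credentials)
```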

Artificial intelligence and big data also come to mind here…and I put Microsoft on my radar a while back in this regard, as Microsoft (via LinkedIn and Lynda.com) could easily create online-based marketplaces matching employers with employees/freelancers.

Along these lines, see:

  • The Mainstreaming of Alternative Credentials in Postsecondary Education — by Deborah Keyek-Franssen
    Excerpt:

    • The Context of Alternative Credentials
      The past few years have seen a proliferation of new learning credentials ranging from badges and bootcamp certifications to micro-degrees and MOOC certificates. Although alternative credentials have been part of the fabric of postsecondary education and professional development for decades—think prior learning assessments like Advanced Placement or International Baccalaureate exams, or industry certifications—postsecondary institutions are increasingly unbundling their degrees and validating smaller chunks of skills and learning to provide workplace value to traditional and non-traditional students alike.
      Many are experimenting with alternative credentials to counter the typical binary nature of a degree. Certifications of learning or skills are conferred after the completion of a course or a few short courses in a related field. Students do not have to wait until all requirements for a degree are met before receiving a certificate of learning, but instead can receive one after a much shorter period of study. “Stackable” credentials are combined to be the equivalent of an undergraduate or graduate certificate (a micro-degree), or even a degree.
    • The National Discussion of Alternative Credentials
      Discussions of alternative credentials are often responses to a persistent and growing critique of traditional higher educational institutions’ ability to meet workforce needs, especially because the cost to students for a four-year degree has grown dramatically over the past several decades. The increasing attention paid to alternative credentials brings to the fore questions such as what constitutes a postsecondary education, what role universities in particular should play vis-à-vis workforce development, and how we can assess learning and mastery.

Addendums added on 3/4/17 that show that this topic isn’t just for higher education, but could involve K-12 as well:

HarvardX rolls out new adaptive learning feature in online course — from edscoop.com by Corinne Lestch
Students in MOOC adaptive learning experiment scored nearly 20 percent better than students using more traditional learning approaches.

Excerpt:

Online courses at Harvard University are adapting on the fly to students’ needs.

Officials at the Cambridge, Massachusetts, institution announced a new adaptive learning technology that was recently rolled out in a HarvardX online course. The feature offers tailored course material that directly correlates with student performance while the student is taking the class, as well as tailored assessment algorithms.

HarvardX is an independent university initiative that was launched in parallel with edX, the online learning platform that was created by Harvard and Massachusetts Institute of Technology. Both HarvardX and edX run massive open online courses. The new feature has never before been used in a HarvardX course, and has only been deployed in a small number of edX courses, according to officials.
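
The article doesn’t spell out how HarvardX’s algorithms work, and the sketch below is not their method. It is only a toy illustration, in Python, of the basic idea: choose the next piece of content based on how the learner is performing while the course is underway.

```python
# Toy illustration only -- this is NOT HarvardX's algorithm. It just shows the
# general idea of adapting the next piece of content to a learner's
# performance while the course is underway.
from dataclasses import dataclass, field

@dataclass
class LearnerState:
    correct: int = 0
    attempts: int = 0
    history: list = field(default_factory=list)

    def record(self, was_correct: bool) -> None:
        self.attempts += 1
        self.correct += int(was_correct)
        self.history.append(was_correct)

def next_item(state: LearnerState, items_by_level: dict) -> str:
    """Serve easier review items when accuracy is low, harder ones when it is high."""
    accuracy = state.correct / state.attempts if state.attempts else 0.5
    if accuracy < 0.6:
        return items_by_level["review"]
    if accuracy < 0.85:
        return items_by_level["core"]
    return items_by_level["stretch"]

items = {"review": "worked example", "core": "practice problem", "stretch": "challenge task"}
state = LearnerState()
for answer in (True, False, True, True):
    state.record(answer)
    print(next_item(state, items))
```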

From DSC:
Given the growth of AI, this is certainly radar worthy — something that’s definitely worth pulse-checking to see where opportunities exist to leverage these types of technologies.  What we now know of as adaptive learning will likely take an enormous step forward in the next decade.

IBM’s assertion rings in my mind:

I’m cautiously hopeful that these types of technologies can extend beyond K-12 and help us deal with the current need to be lifelong learners, and the need to constantly reinvent ourselves — while providing us with more choice, more control over our learning. I’m hopeful that learners will be able to pursue their passions, and enlist the help of other learners and/or the (human) subject matter experts as needed.

I don’t see these types of technologies replacing any teachers, professors, or trainers. That said, these types of technologies should be able to help with some of the heavy lifting of teaching and learning, in order to help someone learn about a new topic.

Again, this is one piece of the Learning from the Living [Class] Room that we see developing.

No hype, just fact: What artificial intelligence is – in simple business terms — from zdnet.com by Michael Krigsman
AI has become one of the great, meaningless buzzwords of our time. In this video, the Chief Data Scientist of Dun and Bradstreet explains AI in clear business terms.

Excerpt:

How do terms like machine learning, AI, and cognitive computing relate to one another?
They’re not synonymous. So, cognitive computing is very different than machine learning, and I will call both of them a type of AI. Just to try and describe those three. So, I would say artificial intelligence is all of that stuff I just described. It’s a collection of things designed to either mimic behavior, mimic thinking, behave intelligently, behave rationally, behave empathetically. Those are the systems and processes that are in the collection of soup that we call artificial intelligence.

Cognitive computing is primarily an IBM term. It’s a phenomenal approach to curating massive amounts of information that can be ingested into what’s called the cognitive stack. And then to be able to create connections among all of the ingested material, so that the user can discover a particular problem, or a particular question can be explored that hasn’t been anticipated.

Machine learning is almost the opposite of that. Where you have a goal function, you have something very specific that you try and define in the data. And, the machine learning will look at lots of disparate data, and try to create proximity to this goal function – basically try to find what you told it to look for. Typically, you do that by either training the system, or by watching it behave, and turning knobs and buttons, so there’s unsupervised, supervised learning. And that’s very, very different than cognitive computing.
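
To ground the “goal function” and the supervised/unsupervised distinction in something runnable, here is a minimal sketch, assuming Python with scikit-learn installed. It is not from the interview, and the data is synthetic.

```python
# Minimal sketch, assuming Python with scikit-learn installed (synthetic data,
# not from the interview). Supervised learning has an explicit "goal function"
# (the labels y); unsupervised learning is only given the raw data X.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Toy data: 500 examples, 4 features, 2 classes. The labels are the target we define.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: we tell the model exactly what to look for (the labels).
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: no labels; the algorithm groups the data by similarity alone.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])
```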

IBM to Train 25 Million Africans for Free to Build Workforce — by Loni Prinsloo
* Tech giant seeking to bring, keep digital jobs in Africa
* Africa to have world’s largest workforce by 2040, IBM projects

Excerpt:

International Business Machines Corp. is ramping up its digital-skills training program to accommodate as many as 25 million Africans in the next five years, looking toward building a future workforce on the continent. The U.S. tech giant plans to make an initial investment of 945 million rand ($70 million) to roll out the training initiative in South Africa…

 

Also see:

IBM Unveils IT Learning Platform for African Youth — from investopedia.com by Tim Brugger

Excerpt (emphasis DSC):

Responding to concerns that artificial intelligence (A.I.) in the workplace will lead to companies laying off employees and shrinking their work forces, IBM (NYSE: IBM) CEO Ginni Rometty said in an interview with CNBC last month that A.I. wouldn’t replace humans, but rather open the door to “new collar” employment opportunities.

IBM describes new collar jobs as “careers that do not always require a four-year college degree but rather sought-after skills in cybersecurity, data science, artificial intelligence, cloud, and much more.”

In keeping with IBM’s promise to devote time and resources to preparing tomorrow’s new collar workers for those careers, it has announced a new “Digital-Nation Africa” initiative. IBM has committed $70 million to its cloud-based learning platform that will provide free skills development to as many as 25 million young people in Africa over the next five years.

The platform will include online learning opportunities for everything from basic IT skills to advanced training in social engagement, digital privacy, and cyber protection. IBM added that its A.I. computing wonder Watson will be used to analyze data from the online platform, adapt it, and help direct students to appropriate courses, as well as refine the curriculum to better suit specific needs.

From DSC:
That last part, about Watson being used to personalize learning and direct students to appropriate courses, is one of the elements that I see in the Learning from the Living [Class]Room vision that I’ve been pulse-checking for the last several years. AI/cognitive computing will most assuredly be a part of our learning ecosystems in the future. Amazon is currently building its own platform that adds about 100 skills each day — and has more than 1,000 people working on creating skills for Alexa. This type of thing isn’t going away any time soon. Rather, I’d say that we haven’t seen anything yet!

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

And Amazon has doubled down to develop Alexa’s “skills,” which are discrete voice-based applications that allow the system to carry out specific tasks (like ordering pizza for example). At launch, Alexa had just 20 skills, which has reportedly jumped to 5,200 today with the company adding about 100 skills per day.

In fact, Bezos has said, “We’ve been working behind the scenes for the last four years, we have more than 1,000 people working on Alexa and the Echo ecosystem … It’s just the tip of the iceberg.” Just last week, it launched a new website to help brands and developers create more skills for Alexa.

Source

Also see:

 

“We are trying to make education more personalised and cognitive through this partnership by creating a technology-driven personalised learning and tutoring,” Lula Mohanty, Vice President, Services at IBM, told ET. IBM will also use its cognitive technology platform, IBM Watson, as part of the partnership.

“We will use the IBM Watson data cloud as part of the deal, and access Watson education insight services, Watson library, student information insights — these are big data sets that have been created through collaboration and inputs with many universities. On top of this, we apply big data analytics,” Mohanty added.

Source

Also see:

  • Most People in Education are Just Looking for Faster Horses, But the Automobile is Coming — from etale.org by Bernard Bull
    Excerpt:
    Most people in education are looking for faster horses. It is too challenging, troubling, or beyond people’s sense of what is possible to really imagine a completely different way in which education happens in the world. That doesn’t mean, however, that the educational equivalent of the automobile is not on its way. I am confident that it is very much on its way. It might even arrive earlier than even the futurists expect. Consider the following prediction.

Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter.

Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, & trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I would like to suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies to see whether they might be able to help us do some of the heavy lifting for students learning about new topics.
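
In the spirit of that experimentation, here is a schematic sketch of what a voice “skill” boils down to: a named intent mapped to a handler that performs one specific task. This is plain Python for illustration only, not the actual Alexa Skills Kit, and the intent names are invented.

```python
# Schematic sketch only: plain Python, NOT the actual Alexa Skills Kit. A
# "skill" here is just a named intent mapped to a handler that performs one
# specific task; the intent names below are invented.
from typing import Callable, Dict

handlers: Dict[str, Callable[[dict], str]] = {}

def skill(intent_name: str):
    """Register a handler function for a named intent."""
    def register(func: Callable[[dict], str]) -> Callable[[dict], str]:
        handlers[intent_name] = func
        return func
    return register

@skill("OrderPizzaIntent")
def order_pizza(slots: dict) -> str:
    return f"Ordering a {slots.get('size', 'medium')} pizza."

@skill("NextLessonIntent")
def next_lesson(slots: dict) -> str:
    return f"Your next lesson on {slots.get('topic', 'your course')} is queued up."

def dispatch(intent_name: str, slots: dict) -> str:
    handler = handlers.get(intent_name)
    return handler(slots) if handler else "Sorry, I don't know that skill yet."

print(dispatch("OrderPizzaIntent", {"size": "large"}))
print(dispatch("NextLessonIntent", {"topic": "statistics"}))
```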

 

Per X Media Lab:

The authoritative CB Insights lists imminent Future Tech Trends: customized babies; personalized foods; robotic companions; 3D printed housing; solar roads; ephemeral retail; enhanced workers; lab-engineered luxury; botroots movements; microbe-made chemicals; neuro-prosthetics; instant expertise; AI ghosts. You can download the whole outstanding report here (125 pgs).

 

From DSC:
Though I’m generally pro-technology, there are several items in here which support the need for all members of society to be informed and to have some input into whether and how these technologies should be used. Prime example: customized babies. The report discusses the genetic modification of babies: “In the future, we will choose the traits for our babies.” Veeeeery slippery ground here.

 

Below are some example screenshots:

Also see:

CBInsights — Innovation Summit

  • The New User Interface: The Challenge and Opportunities that Chatbots, Voice Interfaces and Smart Devices Present
  • Fusing the physical, digital and biological: AI’s transformation of healthcare
  • How predictive algorithms and AI will rule financial services
  • Autonomous Everything: How Connected Vehicles Will Change Mobility and Which Companies Will Own this Future
  • The Next Industrial Age: The New Revenue Sources that the Industrial Internet of Things Unlocks
  • The AI-100: 100 Artificial Intelligence Startups That You Better Know

The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence.  By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.

Also see this article, which contains the graphic below:

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.

Also relevant/see:

The 2017 Top 10 IT Issues — a new report from Educause
It’s all about student success.

Excerpt:

Colleges and universities are concentrating on student success to address concerns about the costs, value, and outcomes of higher education. Student success initiatives are making use of every available resource and opportunity and involving every relevant stakeholder. Institutional technology is all three: resource, opportunity, and stakeholder.

The 2017 issues list identifies the four focus areas for higher education information technology:

  • Develop the IT foundations
  • Develop the data foundations
  • Ensure effective leadership
  • Enable successful students

These issues and focus areas are not just about today. Higher education information technology is very clearly building foundations for student success to last into the future.

 
Also see:

Educause Announces Top IT Issues, Trends and Tech Report for 2017 — from campustechnology.com by Dian Schaffhauser

Excerpt:

Expanding on the preview of its annual ranking of IT issues for higher education released last fall, Educause today announced its full report on the key issues, trends and technologies poised to impact higher ed in 2017. The prevailing themes across the board, according to the higher education technology association with a membership of 2,100 colleges, universities and other education organizations: information security, student success and data-informed decision-making.

The top 10 IT issues for 2017, reiterated in today’s report:

  1. Information security;
  2. Student success and completion;
  3. Data-informed decision-making;
  4. Strategic leadership;
  5. Sustainable funding;
  6. Data management and governance;
  7. Higher education affordability;
  8. Sustainable staffing;
  9. Next-generation enterprise IT; and
  10. Digital transformation of learning.

Robots will take jobs, but not as fast as some fear, new report says — from nytimes.com by Steve Lohr

 

Excerpt:

The robots are coming, but the march of automation will displace jobs more gradually than some alarming forecasts suggest.

A measured pace is likely because what is technically possible is only one factor in determining how quickly new technology is adopted, according to a new study by the McKinsey Global Institute. Other crucial ingredients include economics, labor markets, regulations and social attitudes.

The report, which was released Thursday, breaks jobs down by work tasks — more than 2,000 activities across 800 occupations, from stock clerk to company boss. The institute, the research arm of the consulting firm McKinsey & Company, concludes that many tasks can be automated and that most jobs have activities ripe for automation. But the near-term impact, the report says, will be to transform work more than to eliminate jobs.
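
A tiny, invented calculation helps show why that task-level framing points to jobs being transformed rather than eliminated; the occupations, activities, and scores below are made up for illustration and are not McKinsey’s data.

```python
# Tiny illustrative calculation -- the occupations, activities, and automation
# scores below are invented, not McKinsey's data. The point: when jobs are
# modeled as bundles of activities, only some activities cross a high
# automation threshold, so work is transformed more often than eliminated.
occupations = {
    "stock clerk":  {"scan inventory": 0.9, "stock shelves": 0.8, "help customers": 0.3},
    "company boss": {"approve budgets": 0.4, "set strategy": 0.1, "review reports": 0.6},
}

THRESHOLD = 0.7  # hypothetical cutoff for "highly automatable"
for job, activities in occupations.items():
    share = sum(score > THRESHOLD for score in activities.values()) / len(activities)
    print(f"{job}: {share:.0%} of activities highly automatable")
```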

 

So while further automation is inevitable, McKinsey’s research suggests that it will be a relentless advance rather than an economic tidal wave.

Harnessing automation for a future that works — from mckinsey.com by James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George, Paul Willmott, and Martin Dewhurst
Automation is happening, and it will bring substantial benefits to businesses and economies worldwide, but it won’t arrive overnight. A new McKinsey Global Institute report finds realizing automation’s full potential requires people and technology to work hand in hand.

Excerpt:

Recent developments in robotics, artificial intelligence, and machine learning have put us on the cusp of a new automation age. Robots and computers can not only perform a range of routine physical work activities better and more cheaply than humans, but they are also increasingly capable of accomplishing activities that include cognitive capabilities once considered too difficult to automate successfully, such as making tacit judgments, sensing emotion, or even driving. Automation will change the daily work activities of everyone, from miners and landscapers to commercial bankers, fashion designers, welders, and CEOs. But how quickly will these automation technologies become a reality in the workplace? And what will their impact be on employment and productivity in the global economy?

The McKinsey Global Institute has been conducting an ongoing research program on automation technologies and their potential effects. A new MGI report, A future that works: Automation, employment, and productivity, highlights several key findings.

Also related/see:

This Japanese Company Is Replacing Its Staff With Artificial Intelligence — from fortune.com by Kevin Lui

Excerpt:

The year of AI has well and truly begun, it seems. An insurance company in Japan announced that it will lay off more than 30 employees and replace them with an artificial intelligence system.  The technology will be based on IBM’s Watson Explorer, which is described as having “cognitive technology that can think like a human,” reports the Guardian. Japan’s Fukoku Mutual Life Insurance said the new system will take over from its human counterparts by calculating policy payouts. The company said it hopes the AI will be 30% more productive and aims to see investment costs recouped within two years. Fukoku Mutual Life said it expects the $1.73 million smart system—which costs around $129,000 each year to maintain—to save the company about $1.21 million each year. The 34 staff members will officially be replaced in March.

 


Also from “The Internet of Everything” report in 2016 by BI Intelligence:

A Darker Theme in Obama’s Farewell: Automation Can Divide Us — from nytimes.com by Claire Cain Miller

Excerpt:

Underneath the nostalgia and hope in President Obama’s farewell address Tuesday night was a darker theme: the struggle to help the people on the losing end of technological change.

“The next wave of economic dislocations won’t come from overseas,” Mr. Obama said. “It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.”


Artificial Intelligence, Automation, and the Economy — from whitehouse.gov by Kristin Lee

Summary:
[On 12/20/16], the White House released a new report on the ways that artificial intelligence will transform our economy over the coming years and decades.

Although it is difficult to predict these economic effects precisely, the report suggests that policymakers should prepare for five primary economic effects:

  • Positive contributions to aggregate productivity growth;
  • Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;
  • Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;
  • Churning of the job market as some jobs disappear while others are created; and
  • The loss of jobs for some workers in the short run, and possibly longer depending on policy responses.


 

From DSC:
Hmmm…this is interesting! I ran into a company based out of Canada called Sightline Innovation — and they offer Machine Learning as a Service!

 

Here’s an excerpt from their site:

MLaaS: AI for everyone
Sightline’s Machine Learning as a Service (MLaaS) is the AI solution for Enterprise. With MLaaS, you provide the data and the desired outcome, and Sightline provides the Machine Learning capacity. By analyzing data sets, MLaaS generates strategic insights that allow companies to optimize their business processes and maximize efficiency. Discover new approaches to time management, teamwork and collaboration, client service and business forecasting.

Mine troves of inert customer data to reveal sales pipeline bottlenecks, build more in-depth personas and discover opportunities for upsales.
MLaaS empowers Enterprise to capitalize on opportunities that were previously undiscovered. MLaaS.net is the only system that brings together a full spectrum of AI algorithms including:

  • Convolutional Neural Networks
  • Deep Nets
  • Restricted Boltzmann Machines
  • Probabilistic Graphical Models; and
  • Bayesian Networks

I wonder if Machine Learning as a Service (MLaaS) is the way that many businesses in the future will tap into the power of AI-based solutions – especially smaller and mid-size companies that can’t afford to build an internal team focused on AI…?
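
For what it’s worth, the sketch below imagines what an MLaaS contract could look like from the customer’s side. The class and method names are invented for illustration and are not Sightline’s actual API; the point is simply “data plus desired outcome in, trained model out.”

```python
# Hypothetical sketch only -- the class and method names are invented for
# illustration and are NOT Sightline's actual API. The point is the MLaaS
# contract: the customer supplies data plus a desired outcome, and the
# service chooses and trains the models behind the scenes.
import csv
import tempfile
from dataclasses import dataclass

@dataclass
class MLaaSJob:
    dataset_path: str      # customer's raw data (e.g., an exported CRM table)
    target_column: str     # the "desired outcome" the service should predict
    metric: str = "auc"    # how success will be judged

    def submit(self) -> dict:
        # A real service would upload the data and queue training runs across
        # several model families; here we only validate the request locally.
        with open(self.dataset_path, newline="") as f:
            header = next(csv.reader(f))
        if self.target_column not in header:
            raise ValueError(f"target {self.target_column!r} not found in data")
        return {"status": "queued", "columns": len(header), "metric": self.metric}

# Fake customer data, just so the example runs end to end.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    csv.writer(f).writerows([["deal_size", "industry", "closed_won"],
                             ["12000", "retail", "1"], ["4000", "media", "0"]])
    path = f.name

job = MLaaSJob(dataset_path=path, target_column="closed_won")
print(job.submit())  # a real service would eventually return a prediction endpoint
```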

Six trends that will make business more intelligent in 2017 — from itproportal.com by Daniel Fallmann
The business world is in the midst of a digital transformation that is quickly separating the wheat from the chaff.

 

Also see:
© 2024 | Daniel Christian