Augmented reality glasses could replace staff training — from stuff.co.nz by Madison Reidy

Excerpt (emphasis DSC):

In five years, anyone could put on a pair of augmented reality glasses and know how to work a factory, an augmented reality company claims.

Los Angeles-based company Daqri International recently released its ‘smart glasses’ for factory floor staff.

Daqri general manager Paul Sweeney said that when the technology became mainstream, it would get rid of engineering education.

“In the next five years or so we will probably not have classroom training, they will just have training on their head, on the job.”

Auckland-based Fisher & Paykel Production Machinery (PML) has embraced the trend, adding augmented reality tasks to its factory’s maintenance system.

PML industry 4.0 technology manager John West said it made its unskilled factory floor workers “instant experts”.
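The “instant experts” idea rests on delivering step-by-step guidance in context: each instruction is anchored to a physical feature of the machine so the headset can overlay it in the right place. The sketch below is purely illustrative of that structure — none of the field names or steps come from Daqri’s or PML’s actual systems.

```python
# Hypothetical sketch of how AR work instructions might be modeled: an
# ordered list of steps, each tied to a physical anchor point so a headset
# can overlay the instruction on the right part of the machine.
# All names here are illustrative, not Daqri's or PML's actual schema.
from dataclasses import dataclass, field

@dataclass
class Step:
    instruction: str   # text/voice prompt shown to the worker
    anchor: str        # marker or CAD feature the overlay attaches to

@dataclass
class MaintenanceTask:
    name: str
    steps: list = field(default_factory=list)
    current: int = 0   # index of the step being displayed

    def advance(self):
        """Move to the next step after the worker confirms completion."""
        if self.current < len(self.steps) - 1:
            self.current += 1
        return self.steps[self.current]

task = MaintenanceTask(
    name="Replace filter",
    steps=[
        Step("Power down the unit", anchor="main-switch"),
        Step("Open access panel B", anchor="panel-b-latch"),
        Step("Swap the filter cartridge", anchor="filter-housing"),
    ],
)
print(task.steps[task.current].instruction)  # "Power down the unit"
print(task.advance().instruction)            # "Open access panel B"
```

The point of the sketch is that the “expertise” lives in the authored task data, not in the worker’s head — which is exactly why an unskilled worker can follow it on the job.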

From DSC:
In terms of learning, being in the same physical place as others is far less of a requirement than it used to be. But I’m not just talking about online learning here. I’m talking about a new type of learning environment that involves both hardware and software to facilitate collaboration (and was designed that way from day one). These new types of setups can provide us with new opportunities and affordances that we should begin experimenting with immediately.

Check out the following products — all of which allow a person to contribute to a discussion or conversation from anywhere they can get Internet access:

When you go to those sites, you will see words and phrases such as:

  • Visual collaboration software
  • Virtual workspace
  • Develop
  • Share
  • Inspire
  • Design
  • Global teams
  • A visual collaboration solution that links locations, teams, content, and devices in an immersive, shared workspace
  • Teamwork
  • Create and brainstorm with others
  • Digital workplace platform
  • Eliminate the distance between in-office and remote employees
  • Jumpstart spontaneous brainstorms and working sessions

So, using these types of software and hardware setups, I can contribute regardless of where I’m located: remote learning, from anywhere in the world, combined with our face-to-face classrooms.

Also, the push for Active Learning Classrooms (ALCs) continues across higher education. Such hands-on, project-based, student-centered approaches fit extremely well with the collaboration setups mentioned above.

Then, there’s the insight from Simon Dudley in this article:

“…video conferencing is increasingly an application within a larger workflow…”

Lastly, if colleges and universities don’t have the funds to maintain their physical plants, look for higher education to move increasingly online — and these types of solutions could play a significant role in that environment. Plus, for working adults who need to reinvent themselves, this is an extremely efficient means of picking up some new skills and competencies.

So the growth of these types of setups — where the software and hardware work together to support worldwide collaboration — will likely create a powerful, new, emerging piece of our learning ecosystems.

From DSC:
It seems to me that we are right on the precipice of major changes — throughout the globe — being introduced by the growing use and presence of automation, robotics, and artificial intelligence/machine learning/deep learning, as well as other emerging technologies. It’s not just the existence of these technologies; it’s also that their pace of adoption continues to increase.

These things made me wonder: what are the ramifications of the graphs below — and this new trajectory/pace of change that we’re on — for how we accredit new programs within higher education?

For me, it speaks to the need for those of us working within higher education to be more responsive, and to increase our efforts to provide more lifelong learning opportunities. People are going to need to reinvent themselves over and over again. For higher education to be of the utmost service to people, accrediting a program must take far less time and cost far less.

Somewhat relevant addendums:

A quote from “Response: What Teaching in the Year 2047 Might Look Like”:

To end the metaphor, what I am simply trying to say is that schools cannot afford to evolve at ¼ of the pace of the world around them and not face the possibility of becoming dangerously irrelevant. So, to answer the question – do I think the classrooms of 2040 look like the classrooms of today? Yes, I think they look more like them than they do not. Unfortunately, in my opinion, that is not the way to best serve our kids in our ever-changing world. Let me be clear: great teaching and instruction has not fundamentally changed in the past 2000 years and will not in the next 30. The context of learning, and doing our best to meet the needs of the society we are preparing kids for, is how and why schools must be revolutionized, not simply evolve at their own pace.

An excerpt from “The global forces inspiring a new narrative of progress” (from mckinsey.com by Ezra Greenberg, Martin Hirt, and Sven Smit; emphasis DSC):

The next three tensions highlight accelerating industry disruption. Digitization, machine learning, and the life sciences are advancing and combining with one another to redefine what companies do and where industry boundaries lie. We’re not just being invaded by a few technologies, in other words, but rather are experiencing a combinatorial technology explosion. Customers are reaping some of the rewards, and our notions of value delivery are changing. In the words of Alibaba’s Jack Ma, B2C is becoming “C2B,” as customers enjoy “free” goods and services, personalization, and variety. And the terms of competition are changing: as interconnected networks of partners, platforms, customers, and suppliers become more important, we are experiencing a business ecosystem revolution.

 

38% of American Jobs Could be Replaced by Robots, According to PwC Report — from bigthink.com by David Ryan Polgar

Excerpt:

Nearly 4 out of 10 American jobs may be replaced through automation by the early 2030s, according to a new report by PricewaterhouseCoopers (PwC). In the report, the United States was viewed as the country most likely to lose jobs through automation – ahead of the UK, Germany, and Japan. This is probably not what the current administration had in mind with an “America First” policy.

 
Tech giants grapple with the ethical concerns raised by the AI boom — from technologyreview.com by Tom Simonite
As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

Excerpt:

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations.

“We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

The disruption of digital learning: Ten things we have learned — from joshbersin.com

Excerpt:

Over the last few months I’ve had a series of meetings with Chief Learning Officers, talent management leaders, and vendors of next generation learning tools. My goal has been simple: try to make sense of the new corporate learning landscape, which for want of a better word, we can now call “Digital Learning.” In this article I’d like to share ten things to think about, with the goal of helping L&D professionals, HR leaders, and business leaders understand how the world of corporate learning has changed.

 

Digital Learning does not mean learning on your phone; it means “bringing learning to where employees are.”

It is a “way of learning,” not a “type of learning.”

The traditional LMS is no longer the center of corporate learning, and it’s starting to go away.

What Josh calls a Distributed Learning Platform, I call a Learning Ecosystem:

Also see:

  • Watch Out, Corporate Learning: Here Comes Disruption — from forbes.com by Josh Bersin
    Excerpt:
    The corporate training market, which is over $130 billion in size, is about to be disrupted. Companies are starting to move away from their Learning Management Systems (LMS), buy all sorts of new tools for digital learning, and rebuild a whole new infrastructure to help employees learn. And the impact of G Suite, Microsoft Teams, Slack, and Workplace by Facebook could be enormous.

    We are living longer, jobs are changing faster than ever, and automation is impinging on our work lives more every day. If we can’t look things up, learn quickly, and find a way to develop new skills at work, most of us would prefer to change jobs, rather than stay in a company that doesn’t let us reinvent ourselves over time.

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 
From DSC:
In the future, will Microsoft — via data supplied by LinkedIn and Lynda.com — use artificial intelligence, big data, and blockchain-related technologies to match employers with employees/freelancers? If so, how would this impact higher education? Badging? Credentialing?

It’s something to put on our radars.

Excerpt:

A sneak peek at recruitment in the AI era
With the global talent war at its peak, organisations are now looking at harnessing Artificial Intelligence (AI) capabilities, using search optimisation tools, data analytics, and talent mapping to reach the right talent for crucial job roles. Technology has been revolutionising the way recruitment works, with the entire process now automated with ATS and other talent management software. This saves HR managers the time and costs involved in recruiting, whilst allowing them to do away with third-party service providers for talent sourcing, such as employment bureaus and traditional recruitment agencies. With modern talent acquisition technology empowered by AI, the time taken for recruitment is halved and the search narrowed to only the best talent that matches job requirements. There is no need for human intervention and manual personality matching to choose the best candidates for suitable job roles.

Talent mapping, with the help of big data, is definitely the next step in recruitment technology. With talent mapping, recruiters can determine their candidate needs well in advance and develop a strategic long-term hiring plan. This includes filling any skill gaps, bolstering the team for sudden changes in the workplace, or simply having suitable talent in mind for the future. All of these, when prepared ahead of time, can save companies trouble and time in the future. Recruiters who understand how AI works, and who harness the technology to save time and costs, will be rewarded with improved quality of hires, enhanced efficiency, a more productive workforce, and less turnover.
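To make the excerpt’s claim concrete, here is a minimal sketch of the kind of matching these tools automate: scoring candidates by how well their skills cover a job’s requirements, then ranking them. All names and data below are hypothetical; real recruitment platforms use far richer signals (résumé NLP, embeddings, behavioral data) than plain skill overlap.

```python
# Hypothetical sketch of AI-assisted talent matching: rank candidates by
# the fraction of a job's required skills they cover. Real systems use
# much richer signals than simple case-insensitive skill overlap.

def match_score(candidate_skills, required_skills):
    """Fraction of required skills the candidate covers (0.0 to 1.0)."""
    required = {s.lower() for s in required_skills}
    covered = required & {s.lower() for s in candidate_skills}
    return len(covered) / len(required) if required else 0.0

def rank_candidates(candidates, required_skills):
    """Return (score, name) pairs sorted by descending match score."""
    scored = [(match_score(skills, required_skills), name)
              for name, skills in candidates.items()]
    return sorted(scored, reverse=True)

# Hypothetical job requirements and candidate pool
job = ["Python", "SQL", "Machine Learning"]
pool = {
    "Candidate A": ["python", "sql", "statistics"],
    "Candidate B": ["java", "sql"],
    "Candidate C": ["Python", "SQL", "machine learning", "Spark"],
}

for score, name in rank_candidates(pool, job):
    print(f"{name}: {score:.2f}")
```

Even this toy version shows why the time savings are plausible: the screening pass that a recruiter does by hand becomes a sort over precomputed scores.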

 

Growth of AI Means We Need To Retrain Workers… Now — from forbes.com by Ryan Wibberley

Excerpt:

On the more positive side, AI could take over mundane, repetitive tasks and enable the workers who perform them to take on more interesting and rewarding work. But that will also mean many workers will need to be retrained. If you’re in a business where AI-based automation could be a potentially significant disruptor, then the time to invest in worker training and skill development is now. One could argue that AI will impact just about every industry. For example, in the financial services industry, we have already seen the creation of the robo advisor. While I don’t believe that the robo advisor will fully replace the human financial advisor because of the emotional aspects of investing, I do believe that it will play a part in the relationship with an advisor and his/her client.

 

HarvardX rolls out new adaptive learning feature in online course — from edscoop.com by Corinne Lestch
Students in MOOC adaptive learning experiment scored nearly 20 percent better than students using more traditional learning approaches.

Excerpt:

Online courses at Harvard University are adapting on the fly to students’ needs.

Officials at the Cambridge, Massachusetts, institution announced a new adaptive learning technology that was recently rolled out in a HarvardX online course. The feature offers tailored course material that directly correlates with student performance while the student is taking the class, as well as tailored assessment algorithms.

HarvardX is an independent university initiative that was launched in parallel with edX, the online learning platform that was created by Harvard and Massachusetts Institute of Technology. Both HarvardX and edX run massive open online courses. The new feature has never before been used in a HarvardX course, and has only been deployed in a small number of edX courses, according to officials.
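At its core, adaptive learning of this kind selects the next piece of material based on how the student performed on earlier items. The loop below is a deliberately simplified illustration of that idea (a fixed step-up/step-down rule); HarvardX’s actual feature uses tailored assessment algorithms and statistical models of mastery, not this rule, and all names here are hypothetical.

```python
# Simplified illustration of adaptive item selection: step the difficulty
# level up after a correct answer and down after an incorrect one.
# Production adaptive-learning systems use statistical mastery models
# rather than this fixed rule.

def next_difficulty(current, was_correct, lo=1, hi=5):
    """Move one difficulty level up on success, down on failure, clamped."""
    step = 1 if was_correct else -1
    return max(lo, min(hi, current + step))

def run_session(responses, start=3):
    """Replay a sequence of correct/incorrect answers; return the difficulty path."""
    path = [start]
    for was_correct in responses:
        path.append(next_difficulty(path[-1], was_correct))
    return path

# A student who answers correct, correct, wrong, correct:
print(run_session([True, True, False, True]))  # [3, 4, 5, 4, 5]
```

The contrast with a traditional course is visible even in this toy: every student sees material pegged to their own trajectory rather than a single fixed sequence, which is the plausible mechanism behind the reported score gains.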

From DSC:
Given the growth of AI, this is certainly radar-worthy — something that’s definitely worth pulse-checking to see where opportunities exist to leverage these types of technologies. What we now know of as adaptive learning will likely take an enormous step forward in the next decade.

IBM’s assertion rings in my mind:

I’m cautiously hopeful that these types of technologies can extend beyond K-12 and help us deal with the current need to be lifelong learners, and the need to constantly reinvent ourselves — while providing us with more choice, more control over our learning. I’m hopeful that learners will be able to pursue their passions, and enlist the help of other learners and/or the (human) subject matter experts as needed.

I don’t see these types of technologies replacing any teachers, professors, or trainers. That said, they should be able to do some of the heavy lifting of teaching and learning in order to help someone learn about a new topic.

Again, this is one piece of the Learning from the Living [Class] Room that we see developing.

This Mobile VR Crane Simulator Showcases the Future of Industrial Training — from roadtovr.com by Dominic Brennan

Description from an Inside VR & AR newsletter:

The Mobile Crane Simulator combines an Oculus headset with a modular rig to greatly reduce the cost of training. The system, from Industrial Training International and Serious Labs, Inc, will debut at the ConExpo Event this March in Las Vegas. The designers chose the Oculus for its comfort and portability, but the set-up supports OpenVR, allowing it to potentially also work on the Vive. (The “mobile” in the device’s name refers to a type of crane, rather than to mobile VR.) – ROAD TO VR

© 2016 Learning Ecosystems