Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

 

From DSC:
If a business has a choice between hiring a human being or having the job done by a piece of software and/or by a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whoever will be less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocation/callings — to have/find purpose in their lives.

 

Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

 

 

To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy

Excerpt:

The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”

 

 

Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick

Excerpt:

Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major business and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.

 

“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”

 

So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.
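
From DSC:
To make the idea of AI-powered document discovery a bit more concrete, here is a minimal sketch of how such a tool might rank existing case files by their relevance to a new brief, using simple TF-IDF similarity. The corpus, the brief, and the scoring approach are illustrative assumptions on my part; this is not how COIN, CaseIQ, or any other product mentioned above actually works.

```python
# Minimal sketch: rank prior documents by relevance to a new brief via TF-IDF.
# Illustrative only; not the actual approach of COIN, CaseIQ, or similar tools.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus of previously filed documents.
corpus = [
    "Employment contract dispute involving non-compete clauses and severance pay.",
    "Patent infringement claim over wireless charging technology.",
    "Breach of commercial lease and damages for early termination.",
]

new_brief = "Client seeks damages after a landlord terminated a commercial lease early."

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(corpus)        # learn vocabulary from the corpus
brief_vector = vectorizer.transform([new_brief])      # project the brief into the same space

scores = cosine_similarity(brief_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Real systems train on millions of documents and far richer features, but the basic pattern of comparing a new document against a learned representation of the corpus is similar in spirit.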

 

 

Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?

 

 

Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite

Excerpt:

What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.

 

 

How to Prepare College Graduates for an AI World — from wsj.com
Northeastern University President Joseph Aoun says schools need to change their focus, quickly

Excerpt:

WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.

 

 

Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt:

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

From DSC:
Here’s a quote that has been excerpted from the announcement below…and it’s the type of service that will be offered in our future learning ecosystems — our next generation learning platforms:

 

Career Insight™ enables prospective students to identify programs of study which can help them land the careers they want: Career Insight™ describes labor market opportunities associated with programs of study to prospective students. The recommendation engine also matches prospective students to programs based on specific career interests.

 

But in addition to our future learning platforms pointing new/prospective students to physical campuses, the recommendation engines will also provide immediate access to digital playlists for the prospective students/learners to pursue from their living rooms (or as they are out and about…i.e., mobile access).

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

Artificial intelligence working with enormous databases to build/update recommendation engines…yup, I could see that. Lifelong learning. Helping people figure out what to reinvent themselves as.
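
As a thought experiment, here is a minimal sketch of what such a matching step could look like: a prospective learner's stated career interests are scored against the skill keywords attached to each program of study. The program data and the scoring rule are hypothetical, invented purely for illustration; they are not Burning Glass's actual data or method.

```python
# Purely hypothetical: match a learner's career interests to programs of study.
# Program data and scoring are invented; not Burning Glass's actual data or method.

programs = {
    "Data Analytics Certificate": {"python", "sql", "statistics", "visualization"},
    "Digital Marketing Degree": {"seo", "copywriting", "analytics", "social media"},
    "Nursing (RN to BSN)": {"patient care", "pharmacology", "clinical practice"},
}

def recommend(career_interests, top_n=2):
    """Score each program by the share of the learner's interests it covers."""
    scored = [
        (name, len(career_interests & skills) / len(career_interests))
        for name, skills in programs.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(recommend({"python", "statistics", "visualization"}))
# -> [('Data Analytics Certificate', 1.0), ('Digital Marketing Degree', 0.0)]
```

A production engine would obviously draw on live labor market data and far more sophisticated models, but the core idea of scoring programs against a learner's interests is the piece that seems so promising for lifelong learning.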

 

 


 

Career Insight™ Lets Prospective Students Connect Academic Program Choices to Career Goals — from burning-glass.com; also from Hadley Dreibelbis of Finn Partners
New Burning Glass Technologies Product Brings Job Data into Enrollment Decisions

BOSTON—Burning Glass Technologies announces the launch of Career Insight™, the first tool to show prospective students exactly how course enrollment will advance their careers.

Embedded in institutional sites and powered by Burning Glass’ unparalleled job market data, Career Insight’s personalized recommendation engine matches prospective students with programs based on their interests and goals. Career Insight will enable students to make smarter decisions, as well as improve conversion and retention rates for postsecondary institutions.

“A recent Gallup survey found that 58% of students say career outcomes are the most important reason to continue their education,” Burning Glass CEO Matthew Sigelman said. “That’s particularly true for the working learners who are now the norm on college campuses. Career Insight™ is a major step in making sure that colleges and universities can speak their language from the very first.”

Beginning an educational program with a firm, realistic career goal can help students persist in their studies. Currently only 29% of students in two-year colleges and 59% of those in four-year institutions complete their degrees within six years.

Career Insight™ enables prospective students to identify programs of study which can help them land the careers they want:

  • Career Insight™ describes labor market opportunities associated with programs of study to prospective students. The recommendation engine also matches prospective students to programs based on specific career interests.
  • The application provides insights to enrollment, advising, and marketing teams into what motivates prospective students, analysis that will guide the institution in improving program offerings and boosting conversion.
  • Enrollment advisors can also walk students through different career and program scenarios in real time.

Career Insight™ is driven by the Burning Glass database of a billion job postings and career histories, collected from more than 40,000 online sources daily. The database, powered by a proprietary analytic taxonomy, provides insight into what employers need much faster and in more detail than any other sources.

Career Insight™ is powered by the same rich dataset Burning Glass delivers to hundreds of leading corporate and education customers – from Microsoft and Accenture to Harvard University and Coursera.

More information is available at http://burning-glass.com/career-insight.

 


 

 

Faculty Learning Communities: Making the Connection, Virtually — by Angela Atwell, Cristina Cottom, Lisa Martino, and Sara Ombres

Excerpt (emphasis DSC):

Research has shown that interactions with peers promote faculty engagement (McKenna, Johnson, Yoder, Guerra, & Pimmel, 2016). Faculty learning communities (FLC) have become very popular in recent years. FLCs focus on improving teaching and learning practice through collaboration and community building (Cox, 2001). Usually, FLCs are face-to-face meetings hosted at a physical location at a specific date and time. We understand the benefit of this type of experience. However, we recognize online instructors will likely find it difficult to participate in a traditional FLC. So, we set out to integrate FLC principles to provide our faculty, living and working all over the globe, a similar experience.

Recently, our Center for Teaching and Learning Excellence took the plunge and offered a Virtual Faculty Learning Community (V-FLC) for instructors at our Worldwide Campus. The first experience was open only to adjunct instructors teaching online. The experience was asynchronous, lasted eight weeks, and focused on best practices for online teaching and learning. Within our Learning Management System, faculty led and participated in discussions around the topics that were of interest to them. Most topics focused on teaching practices and ways to enhance the online experience. However, other topics bridged the gap between teaching online and general best teaching practices. 

 

 

 

The 9 best online collaboration tools for remote workers — from invisionapp.com by Jes Kirkwood

Excerpts:

With this in mind, we asked remote workers from companies like Treehouse, Help Scout, Zapier, Buffer, and Zest to share their favorite online collaboration tools. Here’s what they said.

  1. Slack: The best team communication app
  2. Zoom: The best video conferencing app
  3. InVision: The best design collaboration app
  4. GitHub: The best software development tool
  5. Trello: The best project management software
  6. Dashlane: The best password manager
  7. Google Drive: The best file management app
  8. Zapier: Workflow automation for business
  9. World Time Buddy: Time converter for distributed teams

 

 

Where You’ll Find Virtual Reality Technology in 2018 — from avisystems.com by Alec Kasper-Olson

Excerpt:

The VR / AR / MR Breakdown
This year will see growth in a variety of virtual technologies and uses. There are differences and similarities between virtual, augmented, and mixed reality technologies. The technology is constantly evolving and even the terminology around it changes quickly, so you may hear variations on these terms.

Augmented reality is what was behind the Pokémon Go craze. Players could see game characters on their devices superimposed over images of their physical surroundings. Virtual features seemed to exist in the real world.

Mixed reality combines virtual features and real-life objects. So, in this way it includes AR but it also includes environments where real features seem to exist in a virtual world.

The folks over at Recode explain mixed reality this way:

In theory, mixed reality lets the user see the real world (like AR) while also seeing believable, virtual objects (like VR). And then it anchors those virtual objects to a point in real space, making it possible to treat them as “real,” at least from the perspective of the person who can see the MR experience.

And, virtual reality uses immersive technology to seemingly place a user into a simulated lifelike environment.

Where You’ll Find These New Realities
Education and research fields are at the forefront of VR and AR technologies, where an increasing number of students have access to tools. But higher education isn’t the only place you see this trend. The number of VR companies grew 250 percent between 2012 and 2017. Even the latest iPhones include augmented reality capabilities. Aside from the classroom and your pocket, here are some other places you’re likely to see VR and AR pop up in 2018.

 

 

 

Top AR apps that make learning fun — from bmsinnolabs.wordpress.com

Excerpt:

Here is a list of a few amazing Augmented Reality mobile apps for children:

  • Jigspace
  • Elements 4D
  • Arloon Plants
  • Math alive
  • PlanetAR Animals
  • FETCH! Lunch Rush
  • Quiver
  • Zoo Burst
  • PlanetAR Alphabets & Numbers

Here are a few of the VR input devices:

  • Controller Wands
  • Joysticks
  • Force Balls/Tracking Balls
  • Data Gloves
  • On-Device Control Buttons
  • Motion Platforms (Virtuix Omni)
  • Trackpads
  • Treadmills
  • Motion Trackers/Bodysuits

 

 

 

HTC VIVE and World Economic Forum Partner For The Future Of The “VR/AR For Impact” Initiative — from blog.vive.com by Matthew Gepp

Excerpt:

VR/AR for Impact experiences shown this week at WEF 2018 include:

  • OrthoVR aims to increase the availability of well-fitting prosthetics in low-income countries by using Virtual Reality and 3D rapid prototyping tools to increase the capacity of clinical staff without reducing quality. VR allows current prosthetists and orthotists to leverage their hands-on and embodied skills within a digital environment.
  • The Extraordinary Honey Bee is designed to help deepen our understanding of the honey bee’s struggle and learn what is at stake for humanity due to the dying global population of the honey bee. Told from a bee’s perspective, The Extraordinary Honey Bee harnesses VR to inspire change in the next generation of honey bee conservationists.
  • The Blank Canvas: Hacking Nature is an episodic exploration of the frontiers of bioengineering as taught by the leading researchers within the field. Using advanced scientific visualization techniques, the Blank Canvas will demystify the cellular and molecular mechanisms that are being exploited to drive substantial leaps such as gene therapy.
  • LIFE (Life-saving Instruction For Emergencies) is a new mobile and VR platform developed by the University of Oxford that enables all types of health worker to manage medical emergencies. Through the use of personalized simulation training and advanced learning analytics, the LIFE platform offers the potential to dramatically extend access to life-saving knowledge in low-income countries.
  • Tree is a critically acclaimed virtual reality experience to immerse viewers in the tragic fate that befalls a rainforest tree. The experience brings to light the harrowing realities of deforestation, one of the largest contributors to global warming.
  • For the Amazonian Yawanawa, ‘medicine’ has the power to travel you in a vision to a place you have never been. Hushuhu, the first woman shaman of the Yawanawa, uses VR like medicine to open a portal to another way of knowing. AWAVENA is a collaboration between a community and an artist, melding technology and transcendent experience so that a vision can be shared, and a story told of a people ascending from the edge of extinction.

 

 

 

Everything You Need To Know About Virtual Reality Technology — from yeppar.com

Excerpt:

Types of Virtual Reality Technology
We can categorize Virtual Reality Technology into types according to the user experience each provides:

Non-Immersive
Non-immersive simulations are the least immersive implementation of Virtual Reality Technology.
In this kind of simulation, only a subset of the user’s senses is replicated, allowing for marginal awareness of the reality outside the VR simulation. The user enters a 3D virtual environment through a portal or window, typically a standard HD monitor on a conventional desktop workstation.

Semi-Immersive
In this kind of simulation, users experience richer immersion: the user is partly, though not fully, involved in the virtual environment. Semi-immersive simulations are based on high-performance graphical computing, often coupled with large-screen projector systems or multiple TV projections to properly simulate the user’s visuals.

Fully Immersive
Fully immersive simulations offer the most complete experience: head-mounted displays and motion-sensing devices simulate all of the user’s senses. The user experiences a realistic virtual environment with a wide field of view, high resolution, increased refresh rates, and high-quality visualization through the HMD.

This Is What A Mixed Reality Hard Hat Looks Like — from vrscout.com by  Alice Bonasio
A Microsoft-endorsed hard hat solution lets construction workers use holograms on site.

Excerpt:

These workers already routinely use technology such as tablets to access plans and data on site, but going from 2D to 3D at scale brings that to a whole new level. “Superimposing the digital model on the physical environment provides a clear understanding of the relations between the 3D design model and the actual work on a jobsite,” explained Olivier Pellegrin, BIM manager, GA Smart Building.

The application they are using is called Trimble Connect. It turns data into 3D holograms, which are then mapped out to scale onto the real-world environment. This gives workers an instant sense of where and how various elements will fit and exposes mistakes early on in the process.

 

Also see:

Trimble Connect for HoloLens is a mixed reality solution that improves building coordination by combining models from multiple stakeholders such as structural, mechanical and electrical trade partners. The solution provides for precise alignment of holographic data on a 1:1 scale on the job site, to review models in the context of the physical environment. Predefined views from Trimble Connect further simplify in-field use with quick and easy access to immersive visualizations of 3D data. Users can leverage mixed reality for training purposes and to compare plans against work completed. Advanced visualization further enables users to view assigned tasks and capture data with onsite measurement tools.

Trimble Connect for HoloLens is available now through the Microsoft Windows App Store. A free trial option is available enabling integration with HoloLens. Paid subscriptions support premium functionality allowing for precise on-site alignment and collaboration.

Trimble’s Hard Hat Solution for Microsoft HoloLens extends the benefits of HoloLens mixed reality into areas where increased safety requirements are mandated, such as construction sites, offshore facilities, and mining projects. The solution, which is ANSI-approved, integrates the HoloLens holographic computer with an industry-standard hard hat. Trimble’s Hard Hat Solution for HoloLens is expected to be available in the first quarter of 2018. To learn more, visit mixedreality.trimble.com.

 

From DSC:
Combining voice recognition / Natural Language Processing (NLP) with Mixed Reality should provide some excellent, powerful user experiences. Doing so could also provide some real-time understanding as well as highlight potential issues in current designs. It will be interesting to watch this space develop. If there were an issue, wouldn’t it be great to remotely ask someone to update the design and then see the updated design in real-time? (Or might there be a way to make edits via one’s voice and/or with gestures?)

I could see where these types of technologies could come in handy when designing / enhancing learning spaces.

 

 

 

Web-Powered Augmented Reality: a Hands-On Tutorial — from medium.com by Uri Shaked
A Guided Journey Into the Magical Worlds of ARCore, A-Frame, 3D Programming, and More!

Excerpt:

There’s been a lot of cool stuff happening lately around Augmented Reality (AR), and since I love exploring and having fun with new technologies, I thought I would see what I could do with AR and the Web — and it turns out I was able to do quite a lot!

Most AR demos are with static objects, like showing how you can display a cool model on a table, but AR really begins to shine when you start adding in animations!

With animated AR, your models come to life, and you can then start telling a story with them.

 

 

 

Art.com adds augmented reality art-viewing to its iOS app — from techcrunch.com by Lucas Matney

Excerpt:

If you’re in the market for some art in your house or apartment, Art.com will now let you use AR to put digital artwork up on your wall.

The company’s ArtView feature is one of the few augmented reality features that actually adds a lot to the app it’s put in. With the ARKit-enabled tech, the artwork is accurately sized so you can get a perfect idea of how your next purchase could fit on your wall. The feature can be used for the two million pieces of art on the site and can be customized with different framing types.

Experience on Demand is a must-read VR book — from venturebeat.com by Ian Hamilton

Excerpts:

Bailenson’s newest book, Experience on Demand, builds on that earlier work while focusing more clearly — even bluntly — on what we do and don’t know about how VR affects humans.

“The best way to use it responsibly is to be educated about what it is capable of, and to know how to use it — as a developer or a user — responsibly,” Bailenson wrote in the book.

Among the questions raised:

  • “How educationally effective are field trips in VR? What are the design principles that should guide these types of experiences?”
  • “How many individuals are not meeting their potential because they lack the access to good instruction and learning tools?”
  • “When we consider that the subjects were made uncomfortable by the idea of administering fake electric shocks, what can we expect people will feel when they are engaging all sorts of fantasy violence and mayhem in virtual reality?”
  • “What is the effect of replacing social contact with virtual social contact over long periods of time?”
  • “How do we walk the line and leverage what is amazing about VR, without falling prey to the bad parts?”

What these firms all have in common are powerful digital platforms that provide the scale and scope to expand into new growth markets and geographies at speeds never before possible.

 

 


From DSC:
To me, the item below provides another example of the exponential pace of change that we are beginning to experience:


Corporate Longevity Forecast: The Pace of Creative Destruction is Accelerating — from innosight.com by Scott Anthony, S. Patrick Viguerie, Evan Schwartz and John Van Landeghem

Excerpt/Executive Summary:

Few companies are immune to the forces of creative destruction. Our corporate longevity forecast of S&P 500 companies anticipates average tenure on the list growing shorter and shorter over the next decade.

Key insights include:

  • The 33-year average tenure of companies on the S&P 500 in 1964 narrowed to 24 years by 2016 and is forecast to shrink to just 12 years by 2027 (Chart 1).
  • Record private equity activity, a robust M&A market, and the growth of startups with billion-dollar valuations are leading indicators of future turbulence.
  • A gale force warning to leaders: at the current churn rate, about half of S&P 500 companies will be replaced over the next ten years.
  • Retailers were especially hit hard by disruptive forces, and there are strong signs of restructuring in financial services, healthcare, energy, travel, and real estate.
  • The turbulence points to the need for companies to embrace a dual transformation, to focus on changing customer needs, and other strategic interventions.

 


Are Corporations Ready for Increased Turbulence?

Viewed as a larger picture, S&P 500 turnover serves as a barometer for marketplace change. Shrinking lifespans of companies on the list are in part driven by a complex combination of technology shifts and economic shocks, some of which are beyond the control of corporate leaders. But frequently, companies miss opportunities to adapt or take advantage of these changes. For example, they continue to apply existing business models to new markets, are slow to respond to disruptive competitors in low-profit segments, or fail to adequately envision and invest in new growth areas which often takes a decade or longer to pay off.

At the same time, we’ve seen the rise of other companies take their place on the list by creating new products, business models, and serving new customers. Some of the market forces driving these exits and entries include the mass disruption in retail, the rising dominance of digital technology platforms, the downward pressure on energy prices, strength in global travel and real estate, as well as the failure of stock buyback efforts to improve performance.

From DSC:
DC: Will Amazon get into delivering education/degrees? Is it working on a next generation learning platform that could highly disrupt the world of higher education? Hmmm…time will tell.

But Amazon has a way of getting into entirely new industries. From its roots as an online bookseller, it has branched off into numerous other arenas. It has the infrastructure, talent, and the deep pockets to bring about the next generation learning platform that I’ve been tracking for years. It is only one of a handful of companies that could pull this type of endeavor off.

And now, we see articles like these:


Amazon Snags a Higher Ed Superstar — from insidehighered.com by Doug Lederman
Candace Thille, a pioneer in the science of learning, takes a leave from Stanford to help the ambitious retailer better train its workers, with implications that could extend far beyond the company.

Excerpt:

A major force in the higher education technology and learning space has quietly begun working with a major corporate force in — well, in almost everything else.

Candace Thille, a pioneer in learning science and open educational delivery, has taken a leave of absence from Stanford University for a position at Amazon, the massive (and getting bigger by the day) retailer.

Thille’s title, as confirmed by an Amazon spokeswoman: director of learning science and engineering. In that capacity, the spokeswoman said, Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon.”

No further details were forthcoming, and Thille herself said she was “taking time away” from Stanford to work on a project she was “not really at liberty to discuss.”

 

Amazon is quietly becoming its own university — from qz.com by Amy Wang

Excerpt:

Jeff Bezos’ Amazon empire—which recently dabbled in home security, opened artificial intelligence-powered grocery stores, and started planning a second headquarters (and manufactured a vicious national competition out of it)—has not been idle in 2018.

The e-commerce/retail/food/books/cloud-computing/etc company made another move this week that, while nowhere near as flashy as the above efforts, tells of curious things to come. Amazon has hired Candace Thille, a leader in learning science, cognitive science, and open education at Stanford University, to be “director of learning science and engineering.” A spokesperson told Inside Higher Ed that Thille will work “with our Global Learning Development Team to scale and innovate workplace learning at Amazon”; Thille herself said she is “not really at liberty to discuss” her new project.

What could Amazon want with a higher education expert? The company already has footholds in the learning market, running several educational resource platforms. But Thille is famous specifically for her data-driven work, conducted at Stanford and Carnegie Mellon University, on nontraditional ways of learning, teaching, and training—all of which are perfect, perhaps even necessary, for the education of employees.

 


From DSC:
It could just be that Amazon is simply building its own corporate university and will stay focused on developing its own employees and its own corporate learning platform/offerings — and/or perhaps license its new platform to other corporations.

But from my perspective, Amazon continues to work on pieces of a powerful puzzle, one that could eventually involve providing learning experiences to lifelong learners:

  • Personal assistants
  • Voice recognition / Natural Language Processing (NLP)
  • The development of “skills” at an incredible pace
  • Personalized recommendation engines
  • Cloud computing and more

If Alexa were to get integrated into an AI-based platform for personalized learning — one that features up-to-date recommendation engines that can identify and personalize/point out the relevant critical needs in the workplace for learners — better look out higher ed! Better look out if such a platform could interactively deliver (and assess) the bulk of the content that essentially does the heavy initial lifting of someone learning about a particular topic.

Amazon will be able to deliver a cloud-based platform, with cloud-based learner profiles and blockchain-based technologies, at a greatly reduced cost. Think about it. No physical footprints to build and maintain, no lawns to mow, no heating bills to pay, no coaches making $X million a year, etc. AI-driven recommendations for digital playlists. Links to the most in-demand jobs — accompanied by job descriptions, required skills & qualifications, and courses/modules to take in order to master those jobs.

Such a solution would still need professors, instructional designers, multimedia specialists, copyright experts, etc., but they’ll be able to deliver up-to-date content at greatly reduced costs. That’s my bet. And that’s why I now call this potential development The New Amazon.com of Higher Education.

[Microsoft — with its purchase of LinkedIn (which had previously purchased Lynda.com) — is another such potential contender.]

 

 

 

AI plus human intelligence is the future of work — from forbes.com by Jeanne Meister

Excerpts:

  • 1 in 5 workers will have AI as their co-worker in 2022
  • More job roles will change than will become totally automated, so HR needs to prepare today


As we increase our personal usage of chatbots (defined as software which provides an automated, yet personalized, conversation between itself and human users), employees will soon interact with them in the workplace as well. Forward looking HR leaders are piloting chatbots now to transform HR, and, in the process, re-imagine, re-invent, and re-tool the employee experience.

How does all of this impact HR in your organization? The following ten HR trends will matter most as AI enters the workplace…

The most visible aspect of how HR is being impacted by artificial intelligence is the change in the way companies source and recruit new hires. Most notably, IBM has created a suite of tools that use machine learning to help candidates personalize their job search experience based on the engagement they have with Watson. In addition, Watson is helping recruiters prioritize jobs more efficiently, find talent faster, and match candidates more effectively. According to Amber Grewal, Vice President, Global Talent Acquisition, “Recruiters are focusing more on identifying the most critical jobs in the business and on utilizing data to assist in talent sourcing.”

 

…as we enter 2018, the next journey for HR leaders will be to leverage artificial intelligence combined with human intelligence and create a more personalized employee experience.

 

 

From DSC:
Although I like the possibility of using machine learning to help employees navigate their careers, I have some very real concerns when we talk about using AI for talent acquisition. At this point in time, I would much rather have an experienced human being — one with a solid background in HR — reviewing my resume to see if they believe that there’s a fit for the job and/or determine whether my skills transfer over from a different position/arena or not. I don’t think we’re there yet in terms of developing effective/comprehensive enough algorithms. It may happen, but I’m very skeptical in the meantime. I don’t want to be filtered out just because I didn’t use the right keywords enough times or I used a slightly different keyword than what the algorithm was looking for.

Also, there is definitely age discrimination occurring out in today’s workplace, especially in tech-related positions. Folks who are in tech over the age of 30-35 — don’t lose your job! (Go check out the topic of age discrimination on LinkedIn and similar sites, and you’ll find many postings on this topic — sometimes with tens of thousands of older employees adding comments/likes to a posting). Although I doubt that any company would allow applicants or the public to see their internally-used algorithms, how difficult would it be to filter out applicants who graduated college prior to ___ (i.e., some year that gets updated on an annual basis)? Answer? Not difficult at all. In fact, that’s at the level of a Programming 101 course.

 

 

 

Artificial intelligence is going to supercharge surveillance – from theverge.com by James Vincent
What happens when digital eyes get the brains to match?

From DSC:
“Person of Interest” comes to mind after reading this article. Person of Interest is a clever, well-done show, but still…the idea of combining surveillance with a superintelligent AI is a bit unnerving.

 

 

 

Artificial intelligence | 2018 AI predictions — from thomsonreuters.com

Excerpts:

  • AI brings a new set of rules to knowledge work
  • Newsrooms embrace AI
  • Lawyers assess the risks of not using AI
  • Deep learning goes mainstream
  • Smart cars demand even smarter humans
  • Accountants audit forward
  • Wealth managers look to AI to compete and grow

 

 

 

Chatbots and Virtual Assistants in L&D: 4 Use Cases to Pilot in 2018 —  from bottomlineperformance.com by Steven Boller

Excerpt:

  1. Use a virtual assistant like Amazon Alexa or Google Assistant to answer spoken questions from on-the-go learners.
  2. Answer common learner questions in a chat window or via SMS.
  3. Customize a learning path based on learners’ demographic information.
  4. Use a chatbot to assess learner knowledge.
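
From DSC:
Use case #2 above (answering common learner questions in a chat window) is probably the easiest one to prototype. Below is a minimal sketch, assuming a hand-built FAQ and a simple keyword-overlap matcher; the questions, answers, and matching rule are all hypothetical, and a production chatbot would rely on an NLP or intent-recognition service instead.

```python
# Minimal sketch of use case #2: answer common learner questions in a chat window.
# FAQ entries and the keyword-overlap matcher are hypothetical; a production bot
# would use an NLP / intent-recognition service instead.

FAQ = {
    "When is the course deadline?": "All modules must be completed by June 30.",
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "Who do I contact for technical support?": "Email the helpdesk at support@example.edu.",
}

def answer(question):
    """Return the FAQ answer whose question shares the most words with the input."""
    words = set(question.lower().split())
    best = max(FAQ, key=lambda q: len(words & set(q.lower().split())))
    if not words & set(best.lower().split()):
        return "Sorry, I don't know that one yet. A human advisor will follow up."
    return FAQ[best]

print(answer("How can I reset my password?"))
# -> Use the 'Forgot password' link on the login page.
```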

 

 

 

Suncorp looks to augmented reality for insurance claims — from itnews.com.au by Ry Crozier with thanks to Woontack Woo for this resource

Excerpts:

Suncorp has revealed it is exploring image recognition and augmented reality-based enhancements for its insurance claims process, adding to the AI systems it deployed last year.

The insurer began testing IBM Watson software last June to automatically determine who is at fault in a vehicle accident.

“We are working on increasing our use of emerging technologies to assist with the insurance claim process, such as using image recognition to assess type and extent of damage, augmented reality that would enable an off-site claims assessor to discuss and assess damage, speech recognition, and obtaining telematic data from increasingly automated vehicles,” the company said.

 

 

 

6 important AI technologies to look out for in 2018 — from itproportal.com by  Olga Egorsheva
Will businesses and individuals finally make AI a part of their daily lives?

The next era of human|machine partnerships
From delltechnologies.com by the Institute for the Future and Dell Technologies

 


From DSC:
Though this outlook report paints a rosier picture than I think we will actually encounter, it contains several interesting perspectives. We need to be peering out into the future to see which trends and scenarios are most likely to occur…then plan accordingly. With that in mind, I’ve captured a few of the thoughts below.


 

At its inception, very few people anticipated the pace at which the internet would spread across the world, or the impact it would have in remaking business and culture. And yet, as journalist Oliver Burkeman wrote in 2009, “Without most of us quite noticing when it happened, the web went from being a strange new curiosity to a background condition of everyday life.”1

 

In Dell’s Digital Transformation Index study, with 4,000 senior decision makers across the world, 45% say they are concerned about becoming obsolete in just 3-5 years, nearly half don’t know what their industry will look like in just three years’ time, and 73% believe they need to be more ‘digital’ to succeed in the future.

With this in mind, we set out with 20 experts to explore how various social and technological drivers will influence the next decade and, specifically, how emerging technologies will recast our society and the way we conduct business by the year 2030. As a result, this outlook report concludes that, over the next decade, emerging technologies will underpin the formation of new human-machine partnerships that make the most of their respective complementary strengths. These partnerships will enhance daily activities around the coordination of resources and in-the-moment learning, which will reset expectations for work and require corporate structures to adapt to the expanding capabilities of human-machine teams.


For the purpose of this study, IFTF explored the impact that Robotics, Artificial Intelligence (AI) and Machine Learning, Virtual Reality (VR) and Augmented Reality (AR), and Cloud Computing, will have on society by 2030. These technologies, enabled by significant advances in software, will underpin the formation of new human-machine partnerships.

On-demand access to AR learning resources will reset expectations and practices around workplace training and retraining, and real-time decision-making will be bolstered by easy access to information flows. VR-enabled simulation will immerse people in alternative scenarios, increasing empathy for others and preparation for future situations. It will empower the internet of experience by blending physical and virtual worlds.

 

Already, the number of digital platforms that are being used to orchestrate either physical or human resources has surpassed 1,800.9 They are not only connecting people in need of a ride with drivers, or vacationers with a place to stay, but job searchers with work, and vulnerable populations with critical services. The popularity of the services they offer is introducing society to the capabilities of coordinating technologies and resetting expectations about the ownership of fixed assets.

 

Human-machine partnerships won’t spell the end of human jobs, but work will be vastly different.

The U.S. Bureau of Labor Statistics says that today’s learners will have 8 to 10 jobs by the time they are 38. Many of them will join the workforce of freelancers. Already 50 million strong, freelancers are projected to make up 50% of the workforce in the United States by 2020.12 Most freelancers will not be able to rely on traditional HR departments, onboarding processes, and many of the other affordances of institutional work.

 

By 2030, in-the-moment learning will become the modus operandi, and the ability to gain new knowledge will be valued higher than the knowledge people already have.

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

“Rise of the machines” — from January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.

 

 


From DSC:
Learning spaces are relevant as well in the discussion of AI and AV-related items.


 

Also in their January 2018 edition, see
an incredibly detailed project at the London Business School.

Excerpt:

A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, using Christie short-throw laser 4,000-lumen projectors. When the lecturer is finished or has filled up the screen with text, the image can be saved on the intranet or via USB. Simply wipe with a cloth and start again. Not only is the technology inventive, but it allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head whilst they write.

 


 

Also relevant, see:

 


 

 

From Elliott Masie’s Learning TRENDS – January 3, 2018.
#986 – Updates on Learning, Business & Technology Since 1997.

2. Curation in Action – Meural Picture Frame of Endless Art. 
What a cool Curation Holiday Gift that arrived.  The Meural Picture Frame is an amazing digital display, 30 inches by 20 inches, that will display any of over 10,000 classical or modern paintings or photos from the world’s best museums.

A few minutes of setup to the WiFi and my Meural became a highly personalized museum in the living room.  I selected collections of an era, a specific artist, a theme or used someone else’s art “playlist”.

It is curation at its best!  A personalized and individualized selection from an almost limitless collection.  Check it out at http://www.meural.com

 



Also see:



 

Discover new art every day with Meural

 

 

Discover new artwork with Meural -- you can browse playlists of artwork and/or add your own

From DSC:
As I understand it, you can upload your own artwork and photography into this platform. As such, couldn’t we put such devices/frames in schools?!

Wouldn’t it be great to have each classroom’s artwork available as a playlist?! And not just the current pieces, but archived pieces as well!

Wouldn’t it be cool to be able to walk down the hall and swish through a variety of pieces?

Wouldn’t such a dynamic, inspirational platform be a powerful source of creativity in our hallways?  The frames could display the greatest works of art from around the world!

Wouldn’t such a platform give young/budding artists and photographers incentive to do their best work, knowing many others can see their creative works as a part of a playlist?

Wouldn’t it be cool to tap into such a service and treasure chest of artwork and photography via your Smart/Connected TV?

Here’s to creativity!

DC: The next generation learning platform will likely offer us virtual reality-enabled learning experiences such as this “flight simulator for teachers.”

Virtual reality simulates classroom environment for aspiring teachers — from phys.org by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.

 

Also related/see:

 

  • TeachLive.org
    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.

This start-up uses virtual reality to get your kids excited about learning chemistry — from Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.

 

This start-up uses virtual reality to get your kids excited about learning chemistry from CNBC.

 

 


From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.”  It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.
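
As a rough illustration of the “web-based learner profiles” piece mentioned above, here is a purely hypothetical sketch of what such a profile might contain and how it could surface a learner's skill gap for a target job. The field names and logic are my own assumptions, not any vendor's actual schema.

```python
# Purely hypothetical sketch of a cloud-based learner profile and a skill-gap check.
# Field names and logic are my own assumptions, not any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    learner_id: str
    career_goal: str = ""
    completed_playlists: list = field(default_factory=list)  # digital playlists finished
    verified_skills: set = field(default_factory=set)        # could be blockchain-backed credentials

    def skill_gap(self, job_required_skills):
        """Skills the target job asks for that the learner has not yet verified."""
        return set(job_required_skills) - self.verified_skills

profile = LearnerProfile("learner-001", career_goal="data analyst",
                         verified_skills={"sql", "statistics"})
print(profile.skill_gap({"sql", "statistics", "python", "visualization"}))
# -> {'python', 'visualization'} (set ordering may vary)
```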

 

I’m tracking these developments at:
http://danielschristian.com/thelivingclassroom/

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 


*  Also see:


Blockchain, Bitcoin and the Tokenization of Learning — from edsurge.com by Sydney Johnson

Excerpt:

In 2014, Kings College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.

Cisco:
“Utilize virtual showrooms | See stores in your living room.”

DC:
If this is how retail could go, what might be the ramifications for learning-related environments & for learners’ expectations?

See Cisco’s whitepaper and the vision that I’m tracking along these lines.

 

Utilize virtual showrooms|See stores in your living room.

 

From DSC:
Looking at the trends below, again I wonder…how might learners’ expectations be impacted by these developments on the landscapes?

 

Customer Experience in 2020 according to Cisco

 

 


Also see:


 

The Living [Class] Room -- by Daniel Christian

 

 

 

Alexa, how can you improve teaching and learning? — from edscoop.com by Kate Roddy with thanks to eduwire for their post on this
Special report: Voice command platforms from Amazon, Google and Microsoft are creating new models for learning in K-12 and higher education — and renewed privacy concerns.

Excerpt:

We’ve all seen the commercials: “Alexa, is it going to rain today?” “Hey, Google, turn up the volume.” Consumers across the globe are finding increased utility in voice command technology in their homes. But dimming lights and reciting weather forecasts aren’t the only ways these devices are being put to work.

Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.

The devices are being used to help students with and without disabilities gain a new sense for digital fluency, find library materials more quickly and even promote events on college campuses to foster greater social connection.

Like many technologies, the emerging presence of voice command devices in classrooms and at universities is also raising concerns about student privacy and unnatural dependence on digital tools. Yet, many educators interviewed for this report said the rise of voice command technology in education is inevitable — and welcome.

“One example,” he said, “is how voice dictation helped a student with dysgraphia. Putting the pencil and paper in front of him, even typing on a keyboard, created difficulties for him. So, when he’s able to speak to the device and see his words on the screen, the connection becomes that much more real to him.”

The use of voice dictation has also been beneficial for students without disabilities, Miller added. Through voice recognition technology, students at Taos Academy Charter School are able to perceive communication from a completely new medium.

 

 

 

The Beatriz Lab - A Journey through Alzheimer's Disease

This three-part lab can be experienced all at once or separately. At the beginning of each part, Beatriz’s brain acts as an omniscient narrator, helping learners understand how changes to the brain affect daily life and interactions.

Pre and post assessments, along with a facilitation guide, allow learners and instructors to see progression towards outcomes that are addressed through the story and content in the three parts, including:

1) increased knowledge of Alzheimer’s disease and the brain
2) enhanced confidence to care for people with Alzheimer’s disease
3) improvement in care practice

Why a lab about Alzheimer’s Disease?
The Beatriz Lab is very important to us at Embodied Labs. It is the experience that inspired the start of our company. We believe VR is more than a way to evoke feelings of empathy; rather, it is a powerful behavior change tool. By taking the perspective of Beatriz, healthcare professionals and trainees are empowered to better care for people with Alzheimer’s disease, leading to more effective care practices and better quality of life. Through embodying Beatriz, you will gain insight into life with Alzheimer’s and be able to better connect with and care for your loved ones, patients, clients, or others in this communities who live with the disease every day. In our embodied VR experience, we hope to portray both the difficult and joyful moments — the disease surely is a mix of both.

Watch our new promo video to learn more!

 

 

As part of the experience, you will take a 360 degree trip into Beatriz’s brain,
and visit a neuron “forest” that is being affected by amyloid beta plaques and tau proteins.

 

From DSC:
I love the work that Carrie Shaw and @embodiedLabs are doing! Thanks Carrie & Company!

 

 

 

Top 7 Business Collaboration Conference Apps in Virtual Reality (VR) — from vudream.com by Ved Pitre

Excerpt (emphasis DSC):

As VR continues to grow and improve, the experiences will feel more real. But for now, here are the best business conference applications in virtual reality.

 

 

 

Final Cut Pro X Arrives With 360 VR Video Editing — from vrscout.com by Jonathan Nafarrete

Excerpt:

A sign of how Apple is supporting VR in parts of its ecosystem, Final Cut Pro X (along with Motion and Compressor), now has a complete toolset that lets you import, edit, and deliver 360° video in both monoscopic and stereoscopic formats.

Final Cut Pro X 10.4 comes with a handful of slick new features that we tested, such as advanced color grading and support for High Dynamic Range (HDR) workflows. All useful features for creators, not just VR editors, especially since Final Cut Pro is used so heavily in industries like video editing and production. But up until today, VR post-production options have been minimal, with no support from major VR headsets. We’ve had options with Adobe Premiere plus plugins, but not everyone wants to be pigeon-holed into a single software option. And Final Cut Pro X runs butter smooth on the new iMac, so there’s that.

Now with the ability to create immersive 360° films right in Final Cut Pro, an entirely new group of creators have the ability to dive into the world of 360 VR video. It’s simple and intuitive, something we expect from an Apple product. The 360 VR toolset just works.

 

 

 

See Original, Exclusive Star Wars Artwork in VR — from vrscout.com by Alice Bonasio

 

Excerpt:

HWAM’s first exhibition is a unique collection of Star Wars production pieces, including the very first drawings made for the film franchise and never-before-seen production art from the original trilogy by Lucasfilm alum Joe Johnston, Ralph McQuarrie, Phil Tippett, Drew Struzan, Colin Cantwell, and more.

 

 

 

Learning a language in VR is less embarrassing than IRL — from qz.com by Alice Bonasio

Excerpt:

Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?

VR is the ultimate medium for delivering what is known as “experiential learning.” This education theory is based on the idea that we learn and remember things much better when doing something ourselves than by merely watching someone else do it or being told about it.

The immersive nature of VR means users remember content they interact with in virtual scenarios much more vividly than with any other medium. (According to experiments carried out by professor Ann Schlosser at the University of Washington, VR even has the capacity to prompt the development of false memories.)

 

 

Since immersion is a key factor in helping students not only learn much faster but also retain what they learn for longer, these powers can be harnessed in teaching and training—and there is also research that indicates that VR is an ideal tool for learning a language.

 

 


Addendum on 12/20/17:
