McKinsey: automation may wipe out 1/3 of America’s workforce by 2030 — from axios.com by Steve LeVine

Excerpt (emphasis DSC):

In a new study that is optimistic about automation yet stark in its appraisal of the challenge ahead, McKinsey says massive government intervention will be required to hold societies together against the ravages of labor disruption over the next 13 years. Up to 800 million people—including a third of the work force in the U.S. and Germany—will be made jobless by 2030, the study says.

The bottom line: The economy of most countries will eventually replace the lost jobs, the study says, but many of the unemployed will need considerable help to shift to new work, and salaries could continue to flatline. “It’s a Marshall Plan size of task,” Michael Chui, lead author of the McKinsey report, tells Axios.

In the eight-month study, the McKinsey Global Institute, the firm’s think tank, found that almost half of those thrown out of work—375 million people, comprising 14% of the global work force—will have to find entirely new occupations, since their old ones will either no longer exist or need far fewer workers. China will have the highest such absolute numbers—100 million people changing occupations, or 12% of the country’s 2030 work force.

I asked Chui what surprised him most about the findings. “The degree of transition that needs to happen over time is a real eye opener,” he said.

The transition is comparable to the U.S. shift from a largely agricultural to an industrial-services economy from the early 1900s onward. But this time, it’s not young people leaving farms, but mid-career workers who need new skills.

From DSC:
Higher education — and likely vocational training outside of higher ed as well — is simply not ready for this! MAJOR reinvention will be necessary, and as soon as 2018, according to Forrester Research.

One of the key values that institutions of traditional higher education can bring to the table is helping people through this gut-wrenching transition — identifying which jobs are going to last for the next 5-10+ years and which ones won’t, and then preparing the necessary programs quickly enough to meet the demands of the new economy.

Students and entrepreneurs out there: they say you should look around to see where the needs are and then develop products and/or services to meet those needs. Well, here you go!

As a member of the International Education Committee, at edX we are extremely aware of the changing nature of work and jobs. It is predicted that 50 percent of current jobs will disappear by 2030.

Anant Agarwal, CEO and Founder of edX, and Professor of
Electrical Engineering and Computer Science at MIT
(source)

Addendum:

Automation threatens 800 million jobs, but technology could still save us, says report — from theverge.com by James Vincent
New analysis says governments need to act now to help a labor force in flux

Excerpt:

A new report predicts that by 2030, as many as 800 million jobs could be lost worldwide to automation. The study, compiled by the McKinsey Global Institute, says that advances in AI and robotics will have a drastic effect on everyday working lives, comparable to the shift away from agricultural societies during the Industrial Revolution. In the US alone, between 39 and 73 million jobs stand to be automated — making up around a third of the total workforce.

If a computer can do one-third of your job, what happens next? Do you get trained to take on new tasks, or does your boss fire you, or some of your colleagues? What if you just get a pay cut instead? Do you have the money to retrain, or will you be forced to take the hit in living standards?

A game to help students pay the right price for college — from nytimes.com by Ron Lieber

Excerpt:

In the last big economic downturn, back when Tim Ranzetta was in the student loan analysis and consulting business and working with colleges, borrowers often found their way to him, too.

There would be tears. And he would get off the phone with the same frustration each time over how little the people who actually use student loans know about them.

Starting this week, he has a new tool in what has become a yearslong campaign to fill that gap: a free, interactive, web-based game called Payback. In playing, students see running totals of their debt but can also track academic focus, the connections they’re making that could be useful later and their overall happiness — crucial factors in actually finishing college and graduating with a job that can help them repay their debt.


From DSC:
Audrey Willis, with Circa Interactive, reminded me that next week is Computer Science Education Week. She wrote to me with the following additional resources:


As you may know, Computer Science Education Week starts next week on December 4. This week aims to raise awareness of the need to bolster computer science education around the world by encouraging teachers and students to host computer science events throughout the week. These events can include teacher-guided lesson plans, participating in the Hour of Code, watching computer science videos, or using your own resources to help inspire interest among students. It is for this reason that I wanted to share a few computer science resources with you that were just published by renowned universities. I believe these resources can provide K-12 students with valuable information about different career fields that an interest in computer science can lead to, from education and health information management, to electrical engineering.

Thanks in advance,
Audrey Willis
Circa Interactive

High-Tech, High Touch: Digital Learning Report and Workbook, 2017 Edition — from Intentional Futures, with thanks to Maria Andersen on LinkedIn for her post there entitled “Spectrums to Measure Digital Learning”
Excerpt (emphasis DSC):

Our work uncovered five high-tech strategies employed by institutions that have successfully implemented digital learning at scale across a range of modalities. The strategies that underscore the high-tech, high-touch connection are customizing through technology, leveraging adaptive courseware, adopting cost-efficient resources, centralizing course development and making data-driven decisions.

Although many of the institutions we studied are employing more than one of these strategies, in this report we have grouped the institutional use cases according to the strategy that has been most critical to achieving digital learning at scale. As institutional leaders make their way through this document, they should watch for strategies that target challenges similar to those they hope to solve. Reading the corresponding case studies will unpack how institutions employed these strategies effectively.

Digital learning in higher education is becoming more ubiquitous as institutions realize its ability to support student success and empower faculty. Growing diversity in student demographics has brought related changes in student needs, prompting institutions to look to technology to better serve their students. Digital courseware gives institutions the ability to build personalized, accessible and engaging content. It enables educators to provide relevant content and interventions for individual students, improve instructional techniques based on data and distribute knowledge to a wider audience (MIT Office of Digital Learning, 2017).

PARTICIPATION IN DIGITAL LEARNING IS GROWING
Nationally, the number of students engaged in digital learning is growing rapidly. One driver of this growth is rising demand for distance learning, which often relies on digital learning environments. Distance learning programs saw enrollment increases of approximately 4% between 2015 and 2016, with nearly 30% of higher education students taking at least one digital distance learning course (Allen, 2017). Much of this growth is occurring at the undergraduate level (Allen, 2017). The number of students who take distance learning courses exclusively is growing as well. Between 2012 and 2015, both public and private nonprofit institutions saw an increase in students taking only distance courses, although private, for-profit institutions have seen a decrease (Allen, 2017).

Augmented reality will transform city life — from venturebeat.com by Michael Park

Excerpts:

I’ve interviewed three AR entrepreneurs who explain three key ways that AR is set to transform urban living.

  • The real world will be indexed
  • Commuting will be smarter and safer
  • Language will be less of a barrier

Virtual Reality Devices – Where They Are Now and Where They’re Going — from iqsdirectory.com

Excerpts:

The questions now are:

  • What are the actual VR devices available?
  • Are they reasonably priced?
  • What do they do?
  • What are they going to do?

We try to answer those questions [here in this article].

In this early stage, the big question becomes, “What’s next?”

  • Integration of non-VR devices with VR users
  • Move away from needing a top-notch PC (or any PC)
  • Controllers will be your hands

Alibaba-backed augmented reality start-up makes driving look like a video game — from cnbc.com by Robert Ferris

  • WayRay makes augmented reality hardware and software for cars and drivers.
  • The company won a start-up competition at the Los Angeles Auto Show.
  • WayRay has also received an investment from Alibaba.

WayRay’s augmented reality driving system makes a car’s windshield look like a video game. The Swiss-based company that makes augmented reality for cars won the grand prize in a start-up competition at the Los Angeles Auto Show on Tuesday. WayRay makes a small device called Navion, which projects a virtual dashboard onto a driver’s windshield. The software can display information on speed, time of day, or even arrows and other graphics that can help the driver navigate, avoid hazards, and warn of dangers ahead, such as pedestrians. WayRay says that by displaying information directly on the windshield, the system allows drivers to stay better focused on the road. The display might appear similar to what a player would see on a screen in many video games. But the system also notifies the driver of potential points of interest along a route such as restaurants or other businesses.

HTC’s VR arts program brings exhibits to your home — from engadget.com by Jon Fingas
Vive Arts helps creators produce and share work in VR.

Excerpt:

Virtual reality is arguably a good medium for art: it not only enables creativity that just isn’t possible if you stick to physical objects, it allows you to share pieces that would be difficult to appreciate staring at an ordinary computer screen. And HTC knows it. The company is launching Vive Arts, a “multi-million dollar” program that helps museums and other institutions fund, develop and share art in VR. And yes, this means apps you can use at home… including one that’s right around the corner.

VR at the Tate Modern’s Modigliani exhibition is no gimmick — from engadget.com by Jamie Rigg
‘The Ochre Atelier’ experience is an authentic addition.

Excerpt:

There are no room-scale sensors or controllers, because The Ochre Atelier, as the experience is called, is designed to be accessible to everyone regardless of computing expertise. And at roughly 6-7 minutes long, it’s also bite-size enough that hopefully every visitor to the exhibition can take a turn. Its length and complexity don’t make it any less immersive though. The experience itself is, superficially, a tour of Modigliani’s last studio space in Paris: a small, thin rectangular room a few floors above street level.

In all, it took five months to digitally re-create the space. A wealth of research went into The Ochre Atelier, from 3D mapping the actual room — the building is now a bed-and-breakfast — to looking at pictures and combing through first-person accounts of Modigliani’s friends and colleagues at the time. The developers at Preloaded took all this and built a historically accurate re-creation of what the studio would’ve looked like. You teleport around this space a few times, seeing it from different angles and getting more insight into the artist at each stop. Look at a few obvious “more info” icons from each perspective and you’ll hear narrated the words of those closest to Modigliani at the time, alongside some analyses from experts at the Tate.

Real human holograms for augmented, virtual and mixed reality — from 8i.com; with thanks to Lisa Dawley for her Tweet on this
Create, distribute and experience volumetric video of real people that look and feel as if they’re in the same room.

Next-Gen Virtual Reality Will Let You Create From Scratch—Right Inside VR — from autodesk.com by Marcello Sgambelluri
The architecture, engineering and construction (AEC) industry is about to undergo a radical shift in its workflow. In the near future, designers and engineers will be able to create buildings and cities, in real time, in virtual reality (VR).

Excerpt:

What’s Coming: Creation
Still, these examples only scratch the surface of VR’s potential in AEC. The next big opportunity for designers and engineers will move beyond visualization to actually creating structures and products from scratch in VR. Imagine VR for Revit: What if you could put on an eye-tracking headset and, with the movement of your hands and wrists, grab a footing, scale a model, lay it out, push it, spin it, and change its shape?

AI: Embracing the promises and realities — from the Allegis Group

Excerpts:

What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is
that jobs will change. Consider the conflicting predictions put forth by the analyst community:

  • According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
  • The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
  • 47% of all American job functions could be automated within 20 years, according to the Oxford Martin School on Economics in a 2013 report.
  • In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.

Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”

While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.

The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.

Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.

The Impact of Alexa and Google Home on Consumer Behavior — from chatbotsmagazine.com by Arte Merritt

Excerpt (emphasis DSC):

2017 has turned out to be the year of voice. Amazon Alexa passed over 10 million unit sales earlier in the year and there are over 24,000 Skills in the store. With the addition of new devices like the Echo Show, Echo Plus, improved Echo Dot, and a new form factor for the Echo, there’s an option for everyone’s budget. Google is right there as well with the addition of the Google Mini to go along with the original Google Home. Apple’s efforts with Siri and HomePod, Samsung’s Bixby, and Microsoft’s Cortana round out the major tech firms’ efforts in this space.

Also see:

The Amazon Alexa Store has over 24,000 skills as of November 29, 2017.

How to be an ed tech futurist — from campustechnology.com by Bryan Alexander
While no one can predict the future, these forecasting methods will help you anticipate trends and spur more collaborative thinking.

Excerpts:

Some of the forecasting methods Bryan mentions are:

  • Trend analysis
  • Environmental scanning
  • Scenarios
  • Science fiction

From DSC:
I greatly appreciate the work that Bryan does — the topics that he chooses to write about, his analyses, comments, and questions are often thought-provoking. I couldn’t agree more with Bryan’s assertion that forecasting needs to become more realized/practiced within higher education. This is especially true given the exponential rate of change that many societies throughout the globe are now experiencing.

We need to be pulse-checking a variety of landscapes out there, to identify and put significant trends, forces, and emerging technologies on our radars. The strategy of identifying potential scenarios – and then developing responses to those potential scenarios — is very wise.

From DSC:
I’m posting this in an effort to:

  • Help students learn how to learn
  • Help students achieve the greatest possible returns on their investments (both their $$ and their time) when they are trying to learn about new things

I’d like to thank Mr. William Knapp, Executive Director for Distance Learning & Instructional Technology at GRCC, for sharing this resource on Twitter.


A better way to study through self-testing and distributed practice — from kqed.org

Excerpts (emphasis DSC):

As I prepared to write this column, I relied on some pretty typical study techniques. First, as I’ve done since my student days, I generously highlighted key information in my background reading. Along the way, I took notes, many of them verbatim, which is a snap with digital copying and pasting. (Gotta love that command-C, command-V.) Then I reread my notes and highlights. Sound familiar? Students everywhere embrace these techniques and yet, as it turns out, they are not particularly good ways to absorb new material. At least not if that’s all you do.

Researchers have devoted decades to studying how to study. The research literature is frankly overwhelming. Luckily for all of us, the journal Psychological Science in the Public Interest published a review article a few years ago that remains the most comprehensive guide out there. Its 47 pages hold valuable lessons for learners of any age and any subject — especially now, with end-of-semester exams looming.

The authors examined ten different study techniques, including highlighting, rereading, taking practice tests, writing summaries, explaining the content to yourself or another person and using mnemonic devices. They drew on the results of nearly 400 prior studies. Then, in an act of boldness not often seen in academic research, they actually awarded ratings: high, low or moderate utility.

The study strategies that missed the top rating weren’t necessarily ineffective, explains the lead author John Dunlosky, a psychology professor at Kent State University, but they lacked sufficient evidence of efficacy, or were proven useful only in certain areas of study or with certain types of students. “We were trying to find strategies that have a broad impact across all domains for all students,” Dunlosky says, “so it was a pretty tough rating scale.”

In fact, only two techniques got the top rating: practice testing and “distributed practice,” which means scheduling study activities over a period of time — the opposite of cramming.

Practice testing can take many forms: flashcards, answering questions at the end of a textbook chapter, tackling review quizzes online. Research shows it works well for students from preschool through graduate and professional education.

Testing yourself works because you have to make the effort to pull information from your memory — something we don’t do when we merely review our notes or reread the textbook.


As for distributed practice vs. cramming, Dunlosky and his fellow authors write that “cramming is better than not studying at all,” but if you are going to devote four or five hours to studying for your biology mid-term, you would be far better off spacing them out over several days or weeks. “You get much more bang for your buck if you space,” Dunlosky told me.
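The two top-rated techniques lend themselves to software. Below is a minimal sketch of a Leitner-style flashcard scheduler that combines practice testing (the learner must recall each card) with distributed practice (review gaps grow with each success). The box-to-interval mapping is an illustrative assumption on my part, not something the Dunlosky study prescribes:

```python
import datetime

# Cards answered correctly move up a box and wait longer before the next
# review; missed cards drop back to box 0 and come due the next day.
# These intervals are an assumed example, not research-derived values.
INTERVALS_DAYS = [1, 3, 7, 14, 30]  # review gap for boxes 0..4

def schedule(card_box: int, answered_correctly: bool, today: datetime.date):
    """Return (new_box, next_review_date) for one flashcard."""
    if answered_correctly:
        new_box = min(card_box + 1, len(INTERVALS_DAYS) - 1)
    else:
        new_box = 0  # start over; the card is reviewed again tomorrow
    gap = INTERVALS_DAYS[new_box]
    return new_box, today + datetime.timedelta(days=gap)

today = datetime.date(2017, 12, 1)
box, next_review = schedule(0, answered_correctly=True, today=today)
print(box, next_review)  # promoted to box 1, due three days later
```

The point of the sketch is the spacing itself: successive correct recalls push reviews further apart, which is the opposite of cramming the night before.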

Also see:

Improving Students’ Learning With Effective Learning Techniques — from journals.sagepub.com by John Dunlosky, Katherine A. Rawson, Elizabeth J. Marsh, Mitchell J. Nathan, and Daniel T. Willingham
Promising Directions From Cognitive and Educational Psychology

Excerpt:

In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice.

From DSC:
This is yet another reason that I like the approach of using streams of content to help people learn something new: you can implement distributed practice, encourage recall, and more when you put the content out there at regular intervals.

How artificial intelligence could transform government — from Deloitte University Press
Cognitive technologies have the potential to revolutionize the public sector—and save billions of dollars

Excerpt:

The rise of more sophisticated cognitive technologies is, of course, critical to that third era, aiding advances in several categories:

  • Rules-based systems capture and use experts’ knowledge to provide answers to tricky but routine problems. As this decades-old form of AI grows more sophisticated, users may forget they aren’t conversing with a real person.
  • Speech recognition transcribes human speech automatically and accurately. The technology is improving as machines collect more examples of conversation. This has obvious value for dictation, phone assistance, and much more.
  • Machine translation, as the name indicates, translates text or speech from one language to another. Significant advances have been made in this field in only the past year. Machine translation has obvious implications for international relations, defense, and intelligence as well as, in our multilingual society, numerous domestic applications.
  • Computer vision is the ability to identify objects, scenes, and activities in naturally occurring images. It’s how Facebook sorts millions of users’ photos, but it can also scan medical images for indications of disease and identify criminals from surveillance footage. Soon it will allow law enforcement to quickly scan license plate numbers of vehicles stopped at red lights, identifying suspects’ cars in real time.
  • Machine learning takes place without explicit programming. By trial and error, computers learn how to learn, mining information to discover patterns in data that can help predict future events. The larger the datasets, the easier it is to accurately gauge normal or abnormal behavior. When your email program flags a message as spam, or your credit card company warns you of a potentially fraudulent use of your card, machine learning may be involved. Deep learning is a branch of machine learning involving artificial neural networks inspired by the brain’s structure and function.
  • Robotics is the creation and use of machines to perform automated physical functions. The integration of cognitive technologies such as computer vision with sensors and other sophisticated hardware has given rise to a new generation of robots that can work alongside people and perform many tasks in unpredictable environments. Examples include drones, robots used for disaster response, and robot assistants in home health care.
  • Natural language processing refers to the complex and difficult task of organizing and understanding language in a human way. This goes far beyond interpreting search queries, or translating between Mandarin and English text. Combined with machine learning, a system can scan websites for discussions of specific topics even if the user didn’t input precise search terms. Computers can identify all the people and places mentioned in a document or extract terms and conditions from contracts. As with all AI-enabled technology, these become smarter as they consume more accurate data—and as developers integrate complementary technologies such as machine translation and natural language processing.
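The “machine learning” idea in the list above — discovering patterns from labeled examples rather than following explicit rules, as in spam flagging — can be shown with a deliberately tiny sketch. The training messages and the word-count scoring rule are invented for illustration only; real spam filters use far larger data and more sophisticated models:

```python
from collections import Counter

# No spam "rules" are written by hand: the program counts which words
# appear in example spam vs. legitimate (ham) messages, then flags new
# messages whose words are more typical of the spam examples.
spam = ["win cash now", "free cash prize now"]
ham = ["meeting moved to noon", "see the attached report"]

spam_counts = Counter(word for msg in spam for word in msg.split())
ham_counts = Counter(word for msg in ham for word in msg.split())

def looks_like_spam(message: str) -> bool:
    """Flag a message whose words occur more often in the spam examples."""
    spam_score = sum(spam_counts[w] for w in message.split())
    ham_score = sum(ham_counts[w] for w in message.split())
    return spam_score > ham_score

print(looks_like_spam("claim your free cash"))    # True
print(looks_like_spam("report for the meeting"))  # False
```

Adding more labeled examples sharpens the counts — the behavior the bullet describes, where larger datasets make it easier to gauge normal versus abnormal.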

We’ve developed a framework that can help government agencies assess their own opportunities for deploying these technologies. It involves examining business processes, services, and programs to find where cognitive technologies may be viable, valuable, or even vital. Figure 8 summarizes this “Three Vs” framework. Government agencies can use it to screen the best opportunities for automation or cognitive technologies.


© 2018 | Daniel Christian