Google already knows what you did next summer — from bloomberg.com by Nikki Ekstein
The company’s new travel tools put the focus on artificial intelligence, helping to best predict what you’re going to love—no matter where you go.

Excerpt:

But it’s not a hotel or travel agency that’s cracking the golden acorn. It’s Google. The tech titan is leveraging its immense artificial intelligence and machine-learning capabilities to boost its travel offerings, which currently cover everything from flight and hotel search to activity recommendations, destination guides, and mapping services.

The proof is in the pudding. Over the past few months, Google has quietly launched an array of travel-focused features and updates that highlight just how intuitive its AI technology has become. Put them all together and you’ll have enough reason to believe your next travel agent might just be a Google-powered bot.

 

Microsoft to tackle AI skills shortage with two new training programs — from zdnet.com by Nick Heath
The tech giant reveals two new training programs to boost AI-related skills in business and in universities.

Excerpts:

The first of the two programs, Microsoft AI Academy, will run face-to-face and online training sessions for business and public-sector leaders, IT professionals, developers, and startups.

Microsoft is also funding a program to help train the next generation of data scientists and machine-learning engineers. Professor Chris Bishop, director of Microsoft’s Research Lab in Cambridge, said the Microsoft Research-Cambridge University Machine Learning Initiative is designed to address the stream of leading machine-learning researchers moving from universities to the private sector.

 

 

 

Should self-driving cars have ethics? — from npr.org by Laurel Wamsley

Excerpt:

In the not-too-distant future, fully autonomous vehicles will drive our streets. These cars will need to make split-second decisions to avoid endangering human lives — both inside and outside of the vehicles.

To determine attitudes toward these decisions, a group of researchers created a variation on the classic philosophical exercise known as “the trolley problem.” They posed a series of moral dilemmas involving a self-driving car with brakes that suddenly give out…

 

 

 

Reflections on “Are ‘smart’ classrooms the future?” [Johnston]

Are ‘smart’ classrooms the future? — from campustechnology.com by Julie Johnston
Indiana University explores that question by bringing together tech partners and university leaders to share ideas on how to design classrooms that make better use of faculty and student time.

Excerpt:

To achieve these goals, we are investigating smart solutions that will:

  • Untether instructors from the room’s podium, allowing them control from anywhere in the room;
  • Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
  • Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
  • Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
  • Deliver these features through a simple, user-friendly and reliable room/technology interface.

Activities included collaborative brainstorming focusing on these questions:

  • What else can we do to create the classroom of the future?
  • What current technology exists to solve these problems?
  • What could be developed that doesn’t yet exist?
  • What’s next?

 

 

 

From DSC:
Though many people’s eyes — including faculty members’ — glaze over when we start talking about learning spaces and smart classrooms, it’s still an important topic. Personally, I’d rather be learning in an engaging, exciting learning environment that’s outfitted with a variety of tools (physical as well as digital and virtual) that make sense for that community of learners. Also, faculty members have very limited time to get across campus, into the classroom, and get things set up — the more of that setup that can be automated, the better!
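To make that last point a bit more concrete, here is a minimal sketch (in Python) of how a room controller might react to an instructor’s badge or biometric login by routing content, adjusting lights, and opening an attendance session. Every class and method name here is a hypothetical placeholder I am assuming for illustration, not a real classroom vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical sketch only: these classes stand in for whatever control
# system, content store, and attendance database a given campus actually uses.

@dataclass
class LoginEvent:
    instructor_id: str
    room_id: str
    course_id: str
    timestamp: datetime

class RoomController:
    def __init__(self, displays, lights, attendance_db, content_store):
        self.displays = displays            # display endpoints in the room
        self.lights = lights                # lighting controller
        self.attendance_db = attendance_db  # attendance record store
        self.content_store = content_store  # course content source

    def on_instructor_login(self, event: LoginEvent):
        """Run the start-of-class routine when a badge/biometric login arrives."""
        # 1. Route today's course content to the room displays.
        content = self.content_store.get_todays_materials(event.course_id)
        for display in self.displays:
            display.show(content)

        # 2. Apply a lighting preset suited to presenting.
        self.lights.apply_preset("lecture")

        # 3. Open an attendance session that student check-ins attach to.
        self.attendance_db.open_session(
            course_id=event.course_id,
            room_id=event.room_id,
            started_at=event.timestamp,
        )
```

The point of the sketch is simply that once a room’s devices expose some programmatic interface, the entire start-of-class routine can hang off a single login event instead of several minutes of manual setup.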

I’ve long posted items about machine-to-machine communications, voice recognition and voice-enabled interfaces, artificial intelligence, bots, algorithms, learning spaces, and smart classrooms, as well as a variety of vendors and their products (including Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google’s Home/Google Assistant), because I think these things will be components of our future learning ecosystems.

 

 

 


Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData

The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.

The company’s latest report, ‘Smart Speakers – Thematic Research,’ states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.

Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”

Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also allow users to get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations. And they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.

Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a ‘safe’ environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”

In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.

Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”

Information based on GlobalData’s report: Smart Speakers – Thematic Research

 

 

 

 

Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.

 

 

 

Reflections on “Inside Amazon’s artificial intelligence flywheel” [Levy]

Inside Amazon’s artificial intelligence flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt (emphasis DSC):

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

 

 

From DSC:
When will we begin to see more mainstream recommendation engines for learning-based materials? With the demand for people to reinvent themselves, such a next-generation learning platform can’t come soon enough!

  • Turning over control to learners to create/enhance their own web-based learner profiles; and allowing people to say who can access their learning profiles.
  • AI-based recommendation engines to help people identify curated, effective digital playlists for what they want to learn about (see the sketch after this list).
  • Voice-driven interfaces.
  • Matching employees to employers.
  • Matching one’s learning preferences (not styles) with the content being presented as one piece of a personalized learning experience.
  • From cradle to grave. Lifelong learning.
  • Multimedia-based, interactive content.
  • Asynchronously and synchronously connecting with others learning about the same content.
  • Online-based tutoring/assistance; remote assistance.
  • Reinvent. Staying relevant. Surviving.
  • Competency-based learning.
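
As one purely illustrative take on the recommendation-engine bullet above, the sketch below ranks a toy catalog of learning resources against a learner’s stated goals using TF-IDF similarity over skill tags. It is a simple content-based approach I am assuming for illustration, not a description of how any existing platform works.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy catalog: each resource is described by a handful of skill tags.
catalog = {
    "Intro to Data Analysis":       "spreadsheets statistics visualization python",
    "Machine Learning Foundations": "python statistics machine learning models",
    "Digital Marketing Basics":     "seo social media analytics campaigns",
    "Conversational AI Workshop":   "voice interfaces nlp chatbots python",
}

def recommend(learner_profile: str, top_n: int = 3):
    """Rank catalog items by cosine similarity to the learner's stated goals."""
    titles = list(catalog.keys())
    documents = [catalog[t] for t in titles] + [learner_profile]
    tfidf = TfidfVectorizer().fit_transform(documents)
    # The last row is the learner profile; score it against every resource.
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    return sorted(zip(titles, scores), key=lambda x: x[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for title, score in recommend("I want to learn python and machine learning"):
        print(f"{score:.2f}  {title}")
```

A real learner profile would of course draw on far richer signals (competencies, history, goals, and the learner’s own privacy settings), but even this toy version shows how a digital playlist could be assembled automatically from whatever the learner says they want to learn next.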

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

 

 

 

We’re about to embark on a period in American history where career reinvention will be critical, perhaps more so than it’s ever been before. In the next decade, as many as 50 million American workers—a third of the total—will need to change careers, according to McKinsey Global Institute. Automation, in the form of AI (artificial intelligence) and RPA (robotic process automation), is the primary driver. McKinsey observes: “There are few precedents in which societies have successfully retrained such large numbers of people.”

Bill Triant and Ryan Craig

 

 

 

Also relevant/see:

Online education’s expansion continues in higher ed with a focus on tech skills — from educationdive.com by James Paterson

Dive Brief:

  • Online learning continues to expand in higher ed with the addition of several online master’s degrees and a new for-profit college that offers a hybrid of vocational training and liberal arts curriculum online.
  • Inside Higher Ed reported the nonprofit learning provider edX is offering nine master’s degrees through five U.S. universities — the Georgia Institute of Technology, the University of Texas at Austin, Indiana University, Arizona State University and the University of California, San Diego. The programs include cybersecurity, data science, analytics, computer science and marketing, and they cost from around $10,000 to $22,000. Most offer stackable certificates, helping students who change their educational trajectory.
  • Former Harvard University Dean of Social Science Stephen Kosslyn, meanwhile, will open Foundry College in January. The for-profit, two-year program targets adult learners who want to upskill, and it includes training in soft skills such as critical thinking and problem solving. Students will pay about $1,000 per course, though the college is waiving tuition for its first cohort.

 

 

 

 

In the 2030 and beyond world, employers will no longer be a separate entity from the education establishment. Pressures from both the supply and demand side are so large that employers and learners will end up, by default, co-designing new learning experiences, where all learning counts.

 

OBJECTIVES FOR CONVENINGS

  • Identify the skills everyone will need to navigate the changing relationship between machine intelligence and people over the next 10-12 years.
  • Develop implications for work, workers, students, working learners, employers, and policymakers.
  • Identify a preliminary set of actions that need to be taken now to best prepare for the changing work + learn ecosystem.

Three key questions guided the discussions:

  1. What are the LEAST and MOST essential skills needed for the future?
  2. Where and how will tomorrow’s workers and learners acquire the skills they really need?
  3. Who is accountable for making sure individuals can thrive in this new economy?

This report summarizes the experts’ views on what skills will likely be needed to navigate the work + learn ecosystem over the next 10–15 years—and their suggested steps for better serving the nation’s future needs.

 

In a new world of work, driven especially by AI, institutionally-sanctioned curricula could give way to AI-personalized learning. This would drastically change the nature of existing social contracts between employers and employees, teachers and students, and governments and citizens. Traditional social contracts would need to be renegotiated or revamped entirely. In the process, institutional assessment and evaluation could well shift from top-down to new bottom-up tools and processes for developing capacities, valuing skills, and managing performance through new kinds of reputation or accomplishment scores.

 

In October 2017, Chris Wanstrath, CEO of GitHub, the foremost code-sharing and social networking resource for programmers today, made a bold statement: “The future of coding is no coding at all.” He believes that the writing of code will be automated in the near future, leaving humans to focus on “higher-level strategy and design of software.” Many of the experts at the convenings agreed. Even creating the AI systems of tomorrow, they asserted, will likely require less human coding than is needed today, with graphic interfaces turning AI programming into a drag-and-drop operation.

Digital fluency does not mean knowing coding languages. Experts at both convenings contended that effectively “befriending the machine” will be less about teaching people to code and more about being able to empathize with AIs and machines, understanding how they “see the world” and “think” and “make decisions.” Machines will create languages to talk to one another.

Here’s a list of many skills the experts do not expect to see much of—if at all—in the future:

  • Coding. Systems will be self-programming.
  • Building AI systems. Graphic interfaces will turn AI programming into drag-and-drop operations.
  • Calendaring, scheduling, and organizing. There won’t be a need for email triage.
  • Planning and even decision-making. AI assistants will pick this up.
  • Creating more personalized curricula. Learners may design more of their own personalized learning adventure.
  • Writing and reviewing resumes. Digital portfolios, personal branding, and performance reputation will replace resumes.
  • Language translation and localization. This will happen in real time using translator apps.
  • Legal research and writing. Many of our legal systems will be automated.
  • Validation skills. Machines will check people’s work to validate their skills.
  • Driving. Driverless vehicles will replace the need to learn how to drive.

Here’s a list of the most essential skills needed for the future:

  • Quantitative and algorithmic thinking.  
  • Managing reputation.  
  • Storytelling and interpretive skills.  
  • First principles thinking.  
  • Communicating with machines as machines.  
  • Augmenting high-skilled physical tasks with AI.
  • Optimization and debugging frame of mind.
  • Creativity and growth mindset.
  • Adaptability.
  • Emotional intelligence.
  • Truth seeking.
  • Cybersecurity.

 

The rise of machine intelligence is just one of the many powerful social, technological, economic, environmental, and political forces that are rapidly and disruptively changing the way everyone will work and learn in the future. Because this largely tech-driven force is so interconnected with other drivers of change, it is nearly impossible to understand the impact of intelligent agents on how we will work and learn without also imagining the ways in which these new tools will reshape how we live.

 

 

 

AI Now Law and Policy Reading List — from medium.com by the AI Now Institute

Excerpt:

Data-driven technologies are widely used in society to make decisions that affect many critical aspects of our lives, from health, education, employment, and criminal justice to economic, social and political norms. Their varied applications, uses, and consequences raise a number of unique and complex legal and policy concerns. As a result, it can be hard to figure out not only how these systems work but what to do about them.

As a starting point, AI Now offers this Law and Policy Reading List tailored for those interested in learning about key concepts, debates, and leading analysis on law and policy issues related to artificial intelligence and other emerging data-driven technologies.

 

New game lets players train AI to spot legal issues — from abajournal.com by Jason Tashea

Excerpt:

Got a free minute? There’s a new game that will help train an artificial intelligence model to spot legal issues and help close the access-to-justice gap.

Called Learned Hands—yes, it’s a pun—the game takes 75,000 legal questions posted on Reddit dealing with family, consumer, criminal and other legal issues and asks the user to determine what the issue is.

While the game may conjure up nightmares of the first year of law school for many lawyers, David Colarusso says it’s for a good cause.

“It’s an opportunity for attorneys to take their downtime to train machine learning algorithms to help access-to-justice issues,” says Colarusso, director of Suffolk University Law School’s Legal Innovation and Technology (LIT) Lab and partner on this project with the Stanford Legal Design Lab.
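
To illustrate the kind of model that such player-labeled questions could eventually feed (my own sketch, not the actual Learned Hands or LIT Lab implementation), here is a minimal text classifier that learns to map a question to a legal-issue category from a handful of hand-labeled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set standing in for crowd-labeled Reddit questions.
questions = [
    "My landlord won't return my security deposit",
    "I was pulled over and charged with a DUI",
    "My ex won't let me see my kids on weekends",
    "A debt collector keeps calling me at work",
    "The police searched my car without a warrant",
    "How do I file for custody of my daughter",
]
labels = ["housing", "criminal", "family", "consumer", "criminal", "family"]

# TF-IDF features plus logistic regression: a common baseline for issue spotting.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(questions, labels)

print(model.predict(["My landlord is trying to evict me without notice"]))
# With enough labeled examples, the goal is for this to come back as ['housing'].
```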

 

From learnedhands.law.stanford.edu/legalIssues

When you play the game, you’ll be spotting if different legal issues are present in people’s stories. Some of these issues will be high level categories, and others will be more specific issues.

Here are the high-level categories:

 

 

Your next doctor’s appointment might be with an AI — from technologyreview.com by Douglas Heaven
A new wave of chatbots are replacing physicians and providing frontline medical advice—but are they as good as the real thing?

Excerpt:

The idea is to make seeking advice about a medical condition as simple as Googling your symptoms, but with many more benefits. Unlike self-diagnosis online, these apps lead you through a clinical-grade triage process—they’ll tell you if your symptoms need urgent attention or if you can treat yourself with bed rest and ibuprofen instead. The tech is built on a grab bag of AI techniques: language processing to allow users to describe their symptoms in a casual way, expert systems to mine huge medical databases, machine learning to string together correlations between symptom and condition.

Babylon Health, a London-based digital-first health-care provider, has a mission statement it likes to share in a big, bold font: to put an accessible and affordable health service in the hands of every person on earth. The best way to do this, says the company’s founder, Ali Parsa, is to stop people from needing to see a doctor.

Not everyone is happy about all this. For a start, there are safety concerns. Parsa compares what Babylon does with your medical data to what Facebook does with your social activities—amassing information, building links, drawing on what it knows about you to prompt some action. Suggesting you make a new friend won’t kill you if it’s a bad recommendation, but the stakes are a lot higher for a medical app.
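
As a rough, deliberately non-clinical sketch of the triage idea described in the excerpt (my own simplification, not Babylon’s system), the snippet below matches symptom phrases in a free-text description and applies rules in priority order to route the user toward urgent care, a doctor’s visit, or self-care.

```python
# Illustrative only: a real triage system uses clinical knowledge bases,
# probabilistic models, and medical oversight, none of which is present here.
URGENT = {"chest pain", "shortness of breath", "severe bleeding", "confusion"}
SEE_DOCTOR = {"fever", "persistent cough", "rash", "earache"}
SELF_CARE = {"runny nose", "sore throat", "mild headache"}

def extract_symptoms(text: str, vocabulary: set) -> set:
    """Very naive 'language processing': match known symptom phrases in the text."""
    text = text.lower()
    return {phrase for phrase in vocabulary if phrase in text}

def triage(description: str) -> str:
    """Apply expert-system-style rules in priority order."""
    if extract_symptoms(description, URGENT):
        return "Seek urgent care now."
    if extract_symptoms(description, SEE_DOCTOR):
        return "Book an appointment with a doctor."
    if extract_symptoms(description, SELF_CARE):
        return "Rest, fluids, and over-the-counter relief; monitor your symptoms."
    return "Not enough information; please describe your symptoms in more detail."

print(triage("I have a sore throat and a runny nose"))
```

Production systems replace the keyword matching with real natural-language processing, and the hand-written rule sets with curated medical knowledge bases and learned models, and they add clinical oversight; the overall shape of the pipeline, though, is roughly the one the article describes.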

 

 

Also see:

 

 

An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

In a world where machines may have an IQ of 50,000 and the Internet of Things may encompass 500 billion devices, what will happen with those important social contracts, values and ethics that underpin crucial issues such as privacy, anonymity and free will?

 

 

My book identifies what I call the “Megashifts”. They are changing society at warp speed, and your organisations are in the eye of the storm: digitization, mobilisation and screenification, automation, intelligisation, disintermediation, virtualisation and robotisation, to name the most prominent. Megashifts are not simply trends or paradigm shifts, they are complete game changers transforming multiple domains simultaneously.

 

 

If the question is no longer about if technology can do something, but why…who decides this?

Gerd Leonhard

 

 

From DSC:
Though this letter was written two years ago, back in October 2016, the messages, reflections, and questions that Gerd puts on the table are still very relevant today. The leaders of these powerful companies have enormous power — power to do good or to do evil. Power to help or power to hurt. Power to be a positive force for societies throughout the globe and to help create dreams, or power to create dystopian societies and a future filled with nightmares. The state of the human heart is key here — though many will hate me for saying that. But it’s true. At the end of the day, we need to care very much about — and be keenly aware of — the character and values of the leaders of these powerful companies.

 

 

Also relevant/see:

Spray-on antennas will revolutionize the Internet of Things — from networkworld.com by Patrick Nelson
Researchers at Drexel University have developed a method to spray on antennas that outperform traditional metal antennas, opening the door to faster and easier IoT deployments.

From DSC:
Again, it’s not too hard to imagine that technologies in this arena can be used for good or for ill.

 

 

Evaluating the impact of artificial intelligence on human rights — from today.law.harvard.edu by Carolyn Schmitt
Report from Berkman Klein Center for Internet & Society provides new foundational framework for considering risks and benefits of AI on human rights

Excerpt:

From using artificial intelligence (AI) to determine credit scores to using AI to determine whether a defendant or criminal may offend again, AI-based tools are increasingly being used by people and organizations in positions of authority to make important, often life-altering decisions. But how do these instances impact human rights, such as the right to equality before the law, and the right to an education?

A new report from the Berkman Klein Center for Internet & Society (BKC) addresses this issue and weighs the positive and negative impacts of AI on human rights through six “use cases” of algorithmic decision-making systems, including criminal justice risk assessments and credit scores. Whereas many other reports and studies have focused on ethical issues of AI, the BKC report is one of the first efforts to analyze the impacts of AI through a human rights lens, and proposes a new framework for thinking about the impact of AI on human rights. The report was funded, in part, by the Digital Inclusion Lab at Global Affairs Canada.

“One of the things I liked a lot about this project and about a lot of the work we’re doing [in the Algorithms and Justice track of the Ethics and Governance of AI Initiative] is that it’s extremely current and tangible. There are a lot of far-off science fiction scenarios that we’re trying to think about, but there’s also stuff happening right now,” says Professor Christopher Bavitz, the WilmerHale Clinical Professor of Law, Managing Director of the Cyberlaw Clinic at BKC, and senior author on the report. Bavitz also leads the Algorithms and Justice track of the BKC project on the Ethics and Governance of AI Initiative, which developed this report.

 

 

Also see:

  • Morality in the Machines — from today.law.harvard.edu by Erick Trickey
    Researchers at Harvard’s Berkman Klein Center for Internet & Society are collaborating with MIT scholars to study driverless cars, social media feeds, and criminal justice algorithms, to make sure openness and ethics inform artificial intelligence.

 

 

 

Microsoft's conference room of the future

 

From DSC:
Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”

This made me wonder about our learning spaces in the future. Will an AI-based device or cloud-based software app be able to “listen” — in real time — to the discussion in a classroom and present helpful resources (e.g., websites, online databases, journal articles, and more) in the smart classroom of the future?

Will this be a feature of a next-generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?
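
A toy sketch of what that might look like follows, assuming the classroom already produces a rolling transcript. The resource index, URLs, and functions here are placeholders I made up for illustration, not any vendor’s product.

```python
import re
from collections import Counter

# Placeholder resource index: in practice this might be a library database,
# an LMS, or a curated set of open educational resources.
RESOURCE_INDEX = {
    "photosynthesis": ["https://example.edu/library/photosynthesis-primer"],
    "krebs cycle": ["https://example.edu/library/krebs-cycle-animation"],
    "chlorophyll": ["https://example.edu/library/chlorophyll-lab"],
}

STOPWORDS = {"so", "the", "and", "a", "of", "to", "in", "is", "that", "we", "it"}

def key_terms(transcript_window: str, top_n: int = 5):
    """Pick the most frequent non-stopword terms from the recent discussion."""
    words = re.findall(r"[a-z]+", transcript_window.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(top_n)]

def suggest_resources(transcript_window: str):
    """Surface indexed resources whose key matches a term in the discussion."""
    terms = key_terms(transcript_window)
    suggestions = []
    for key, links in RESOURCE_INDEX.items():
        if any(term in key for term in terms):
            suggestions.extend(links)
    return suggestions

print(suggest_resources(
    "so the chlorophyll in the leaf absorbs light and photosynthesis begins"
))
```

A real system would need far better keyphrase extraction and, above all, careful thought about recording consent and privacy in the classroom.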

Hmmm…time will tell.

 

 


 

Also see this article at Forbes.com entitled “There’s Nothing Artificial About How AI Is Changing The Workplace.”

Here is an excerpt:

The New Meeting Scribe: Artificial Intelligence

As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).

Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the script. Over 65% of respondents from the Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
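
Riffing on the “I’ll follow up with you next week” example, here is a bare-bones sketch of scanning a speaker-labeled transcript for commitment phrases and turning them into reminders. It is my own illustration of the general idea, not how Teams, Otter, or Zoom actually implement it, and the hand-written phrase list is an obvious simplification of what a trained intent model would do.

```python
import re
from dataclasses import dataclass

@dataclass
class ActionItem:
    speaker: str
    text: str

# Phrases that often signal a commitment; a production system would use a
# trained intent classifier rather than a hand-written pattern list.
COMMITMENT_PATTERNS = [
    r"\bI'?ll (follow up|send|schedule|take care of)\b",
    r"\blet me (check|confirm|get back to you)\b",
    r"\bI will\b",
]

def extract_action_items(transcript):
    """Scan (speaker, utterance) pairs for commitment language."""
    items = []
    for speaker, utterance in transcript:
        if any(re.search(p, utterance, flags=re.IGNORECASE) for p in COMMITMENT_PATTERNS):
            items.append(ActionItem(speaker=speaker, text=utterance))
    return items

meeting = [
    ("Ana", "Thanks everyone, good discussion on the Q3 roadmap."),
    ("Ben", "I'll follow up with you next week on the budget numbers."),
    ("Cai", "Let me check with legal and get back to you."),
]

for item in extract_action_items(meeting):
    print(f"Reminder for {item.speaker}: {item.text}")
```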

 

 

To higher ed: When the race track is going 180mph, you can’t walk or jog onto the track. [Christian]

From DSC:
When the race track is going 180mph, you can’t walk or jog onto the track.  What do I mean by that? 

Consider this quote from an article that Jeanne Meister wrote for Forbes entitled “The Future of Work: Three New HR Roles in the Age of Artificial Intelligence”*:

This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey, which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills, because approximately 60% of jobs will have at least one-third of their work activities that could be automated.

Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more. 

 

From Robert Half’s 2019 Technology Salary Guide 

 

 

How many of us have those kinds of skills? Did we get that training in the community colleges, colleges, and universities that we went to? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. Many of those institutions are moving at the pace of a leisurely walk, some are moving at a jog, and even fewer are sprinting. Yet all of them are now being asked to enter a racetrack where the traffic is moving at 180 mph. Higher ed — and society at large — is not used to moving at this pace.

This is why I think that higher education and its regional accrediting organizations are either going to need to significantly up their game — undergoing a paradigm shift in their thinking, programming, curricula, and level of responsiveness — or watch as alternatives to traditional higher education increasingly attract learners away from them.

This is also why I think we’ll see an online-based, next-generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.

 

 

The below graphic is from:
Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages

 

 

 


 

* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:

  1. IBM: Vice President, Data, AI & Offering Strategy, HR
  2. Kraft Heinz Senior Vice President Global HR, Performance and IT
  3. SunTrust Senior Vice President Employee Wellbeing & Benefits

What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.

 


 

An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy 

Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills. Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent. Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.

 


 

 