The Ivory Tower Can’t Keep Ignoring Tech — from nytimes.com by Cathy O’Neil

Excerpt:

We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.
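The "fail twice as often" phrase above refers to disparate false-positive rates across groups, and that particular failure is statistically simple to audit once outcome data exists. A minimal sketch in Python — the records below are invented for illustration, not data from any real sentencing tool:

```python
def false_positive_rate(predictions, outcomes):
    """Share of non-reoffenders who were nonetheless flagged high-risk."""
    flags_for_negatives = [p for p, o in zip(predictions, outcomes) if not o]
    return sum(flags_for_negatives) / len(flags_for_negatives) if flags_for_negatives else 0.0

def audit_by_group(records):
    """records: iterable of (group, flagged_high_risk, reoffended) triples."""
    by_group = {}
    for group, flagged, reoffended in records:
        by_group.setdefault(group, []).append((flagged, reoffended))
    return {
        group: false_positive_rate([f for f, r in pairs], [r for f, r in pairs])
        for group, pairs in by_group.items()
    }

# Invented audit records: in this sample, no one reoffended,
# yet group A was flagged in error twice as often as group B.
records = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, False),
]
rates = audit_by_group(records)
```

The hard part, as the excerpt argues, is not the arithmetic — it is getting access to the predictions and outcomes in the first place, which is exactly where academic researchers could press.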

There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.

There’s one solution for the short term. We urgently need an academic institute focused on algorithmic accountability. First, it should provide a comprehensive ethical training for future engineers and data scientists at the undergraduate and graduate levels, with case studies taken from real-world algorithms that are choosing the winners from the losers. Lecturers from humanities, social sciences and philosophy departments should weigh in.

Somewhat related:

WE ARE NOT READY FOR THIS! Per Forrester Research: In the US, a net loss of 7% of jobs to automation — *in 2018*!

Forrester predicts that AI-enabled automation will eliminate 9% of US jobs in 2018 — from forbes.com by Gil Press

Excerpt (emphasis DSC):

A new Forrester Research report, Predictions 2018: Automation Alters The Global Workforce, outlines 10 predictions about the impact of AI and automation on jobs, work processes and tasks, business success and failure, and software development, cybersecurity, and regulatory compliance.

We will see a surge in white-collar automation, half a million new digital workers (bots) in the US, and a shift from manual to automated IT and data management. “Companies that master automation will dominate their industries,” Forrester says. Here’s my summary of what Forrester predicts will be the impact of automation in 2018:

Automation will eliminate 9% of US jobs but will create 2% more.
In 2018, 9% of US jobs will be lost to automation, partly offset by a 2% growth in jobs supporting the “automation economy.” Specifically impacted will be back-office and administrative, sales, and call center employees. A wide range of technologies, from robotic process automation and AI to customer self-service and physical robots will impact hiring and staffing strategies as well as create a need for new skills.

 

Your next entry-level compliance staffer will be a robot.

 

From DSC:

Are we ready for a net loss of 7% of jobs in our workforce due to automation — *next year*? Last I checked, it was November 2017, and 2018 will be here before we know it.

 

***Are we ready for this?!***

 

AS OF TODAY, can we reinvent ourselves fast enough given our current educational systems, offerings, infrastructures, and methods of learning?

 

My answer: No, we can’t. But we need to be able to — and very soon!

There are all kinds of major issues and ramifications when people lose their jobs — especially this many people and jobs! The ripple effects will be enormous and very negative unless we introduce new ways for people to learn new things — and quickly!

That’s why I’m big on trying to establish a next generation learning platform, such as the one that I’ve been tracking and proposing at Learning from the Living [Class] Room. It’s meant to provide societies around the globe with a powerful, next generation learning platform — one that can help people reinvent themselves quickly, cost-effectively, conveniently, and consistently! It involves providing relevant, up-to-date streams of content that people can subscribe to — and drop at any time. It involves working in conjunction with subject matter experts who work with teams of specialists, backed up by suites of powerful technologies. It involves learning with others, at any time, from any place, at any pace. It involves more choice and more control. And it involves blockchain-based technologies feeding cloud-based learner profiles, and more.

But likely, bringing such a vision to fruition will require a significant amount of collaboration. In my mind, some of the organizations that should be at the table here include:

  • Some of the largest players in the tech world, such as Amazon, Google, Apple, IBM, Microsoft, and/or Facebook
  • Some of the vendors that already operate within the higher ed space — such as Salesforce.com, Ellucian, and/or Blackboard
  • Some of the most innovative institutions of higher education — including their faculty members, instructional technologists, instructional designers, members of administration, librarians, A/V specialists, and more
  • The U.S. Federal Government — for additional funding and the development of policies to make this vision a reality

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

The era of easily faked, AI-generated photos is quickly emerging — from qz.com by Dave Gershgorn

Excerpt (emphasis DSC):

Until this month, it seemed that GAN-generated images [where GAN stands for “generative adversarial networks”] that could fool a human viewer were years off. But last week research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers—a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.

 

From DSC:
So AI can now generate realistic photos (i.e., image creation/manipulation). And then there’s Adobe’s VoCo project — a sort of Photoshop for audio manipulation — plus other related technologies out there:

 

So I guess it’s like the first article concludes:

The era of easily-faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.

…and perhaps we’ll need to add, “we shouldn’t trust everything we hear either.” But how will the average person with average tools know the real deal? Watermarking visuals/audio may become an increasingly important part of the answer. From the end of a bbc.com article:

For its part, Adobe has talked of its customers using Voco to fix podcast and audio book recordings without having to rebook presenters or voiceover artists.

But a spokeswoman stressed that this did not mean its release was imminent.

“[It] may or may not be released as a product or product feature,” she told the BBC.

“No ship date has been announced.”

In the meantime, Adobe said it was researching ways to detect use of its software.

“Think about watermarking detection,” Mr Jin said at the demo, referring to a method used to hide identifiers in images and other media.
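Neither Adobe nor the article details the detection scheme, but the general idea behind such watermarks is easy to sketch: hide an identifier in the least significant bits of the media’s samples, where it is imperceptible to a listener but trivially machine-readable. A toy illustration (emphatically not Adobe’s actual method):

```python
def embed_watermark(samples, bits):
    """Hide a bit string in the least significant bit of the first samples."""
    stamped = list(samples)
    for i, bit in enumerate(bits):
        stamped[i] = (stamped[i] & ~1) | bit  # clear the LSB, then write our bit
    return stamped

def extract_watermark(samples, n_bits):
    """Read the hidden bits back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [200, 35, 142, 90, 77, 64, 18, 255]  # toy 8-bit "audio" samples
mark = [1, 0, 1, 1]                          # the identifier to hide
stamped = embed_watermark(audio, mark)
recovered = extract_watermark(stamped, len(mark))  # [1, 0, 1, 1]
```

Each sample changes by at most 1 out of 255 — inaudible — yet the identifier survives and can be checked later. Real schemes are far more robust (they must survive compression and re-recording), but the cat-and-mouse dynamic is the same.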

 

But again, we see that technology often races ahead. “Look at what we can do!” The rest of society — developing laws and policies, asking whether we should roll out such technologies at all, etc. — then needs time to catch up. Morals and ethics come into play here, as trust levels are most assuredly at stake.

Another relevant article/topic/example of this is listed below. (I’m not trying to say that we shouldn’t pursue self-driving cars; rather, the topic serves as another example of technologies racing ahead while it takes a while for the rest of us/society to catch up with them.)

From DSC:
I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, but the article below made me revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below; I had seen other articles and vendors’ products that focused specifically on ATSs, but I couldn’t locate them all.

 

How Google’s AI-Powered Job Search Will Impact Companies And Job Seekers — from forbes.com by Forbes Coaches Council

Excerpt:

In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.

As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…

5. Expect competition to increase.
Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could further tax already-strained ATS systems and, unless fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco

10. Understanding keywords and trending topics will be essential.
Since Google’s AI is based on crowd-gathered metrics, the importance of keywords and understanding trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc 

Also see:

In Unilever’s radical hiring experiment, resumes are out, algorithms are in — from foxbusiness.com by Kelsey Gee 

Excerpt:

Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.

The Future of HR: Is it Dying? — from hrtechnologist.com by Rhucha Kulkarni

Excerpt (emphasis DSC):

The debate is on, whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable, as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is believed to happen to the HR profession, says a report by Career Builder. HR jobs are at threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business being automated over the next 10 years. To delve deeper into the imminent problem, Career Builder carried out a study of 719 HR professionals in the private sector, specifically looking for the rate of adoption of emerging technologies in HR and what HR professionals perceived about it.

The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.

From DSC:
Are you aware that if you apply for a job at many organizations nowadays, your resume has a significant chance of not ever making it in front of a human’s eyeballs for their review?  Were you aware that an Applicant Tracking System (an ATS) will likely syphon off and filter out your resume unless you have the exact right keywords in your resume and unless you mentioned those keywords the optimal number of times?

And were you aware that many advisors assert that you should use a 1 page resume — a 2 page resume at most? Well…assuming that you have to edit big time to get to a 1-2 page resume, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s size/information, you hack out numerous words that the ATS may be scanning for. (BTW, advisors recommend creating a Wordle from the job description to ascertain the likely keywords; but still, you don’t know which exact keywords the ATS will be looking for in your specific case/job application and how many times to use those keywords. Numerous words can be of similar size in the resulting Wordle graphic…so is that 1-2 page resume helping you or hurting you when you can only submit 1 resume for a position/organization?)
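To make that trade-off concrete, here is a toy version of the keyword screen described above — the keywords and resumes are invented, and real ATS products are far more elaborate — showing how trimming a resume down can silently drop the very terms a screen requires:

```python
import re

def keyword_score(resume_text, keywords):
    """Count case-insensitive occurrences of each target keyword."""
    tokens = re.findall(r"[a-z+#]+", resume_text.lower())
    return {kw: tokens.count(kw) for kw in keywords}

def passes_screen(counts, min_hits=1):
    """A crude screen: every keyword must appear at least min_hits times."""
    return all(count >= min_hits for count in counts.values())

keywords = ["python", "sql", "tableau"]  # hypothetical keywords for one posting

long_resume = ("Built Python ETL pipelines and SQL reporting for finance teams; "
               "shipped Tableau dashboards; maintained Python services.")
short_resume = "Built data pipelines, reporting, and dashboards."  # trimmed to fit one page

long_counts = keyword_score(long_resume, keywords)    # passes the screen
short_counts = keyword_score(short_resume, keywords)  # every count is zero
```

The shortened resume describes the same work, but because it no longer names the tools, a literal-minded screen rejects it outright. That is the whole problem in miniature: the filter rewards exact vocabulary, not transferable skill.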

Vendors are hailing these ATSs as major productivity boosters for HR departments…and that might be true in some cases. But my question is: at what cost?

At this point in time, I still believe that humans are better than software/algorithms at making judgement calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call at this point. I want a pair of human eyeballs to scan my resume, not a (potentially) narrowly defined algorithm. A human being might see transferable skills better than a piece of code at this point.

Just so you know…in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:

 

85 Percent of Job Applicants Lie on Resumes. Here’s How to Spot a Dishonest Candidate — from inc.com by  J.T. O’Donnell
A new study shows huge increase in lies on job applications.

Excerpt (emphasis DSC):

Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.

 

From DSC:
I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.

But students, you should know that these ATSs are in place. Those of us in higher education should know about them as well, as many of us are being negatively impacted by the current landscape within higher education.

Also see:

Why Your Approach To Job Searching Is Failing — from forbes.com by Jeanna McGinnis

Excerpt:

Is Your Resume ATS Friendly?
Did you know that an ATS (applicant tracking system) will play a major role in whether or not your resume is selected for further review when you’re applying to opportunities through online job boards?

It’s true. When you apply to a position a company has posted online, a human usually isn’t the first to review your resume, a computer program is. Scouring your resume for keywords, terminology and phrases the hiring manager is targeting, the program will toss your resume if it can’t understand the content it’s reading. Basically, your resume doesn’t stand a chance of making it to the next level if it isn’t optimized for ATS.

To ensure your resume makes it past the evil eye of ATS, format your resume correctly for applicant tracking programs, target it to the opportunity and check for spelling errors. If you don’t, you’re wasting your time applying online.

 

Con Job: Hackers Target Millennials Looking for Work – from wsj.com by Kelsey Gee
Employment scams pose a growing threat as applications and interviews become more digital

Excerpt:

Hackers attempt to hook tens of thousands of people like Mr. Latif through job scams each year, according to U.S. Federal Trade Commission data, aiming to trick them into handing over personal or sensitive information, or to gain access to their corporate networks.

Employment fraud is nothing new, but as more companies shift to entirely-digital job application processes, Better Business Bureau director of communications Katherine Hutt said scams targeting job seekers pose a growing threat. Job candidates are now routinely invited to fill out applications, complete skill evaluations and interview—all on their smartphones, as employers seek to cast a wider net for applicants and improve the matchmaking process for entry-level hires.

Young people are a frequent target. Of the nearly 3,800 complaints the nonprofit has received from U.S. consumers on its scam report tracker in the past two years, people under 34 years old were the most susceptible to such scams, which frequently offer jobs requiring little to no prior experience, Ms. Hutt said.

Hackers are finding new ways to prey on young job seekers.

Robots and AI are going to make social inequality even worse, says new report — from theverge.com by
Rich people are going to find it easier to adapt to automation

Excerpt:

Most economists agree that advances in robotics and AI over the next few decades are likely to lead to significant job losses. But what’s less often considered is how these changes could also impact social mobility. A new report from UK charity Sutton Trust explains the danger, noting that unless governments take action, the next wave of automation will dramatically increase inequality within societies, further entrenching the divide between rich and poor.

There are a number of reasons for this, say the report’s authors, including the ability of richer individuals to re-train for new jobs; the rising importance of “soft skills” like communication and confidence; and the reduction in the number of jobs used as “stepping stones” into professional industries.

For example, the demand for paralegals and similar professions is likely to be reduced over the coming years as artificial intelligence is trained to handle more administrative tasks. In the UK more than 350,000 paralegals, payroll managers, and bookkeepers could lose their jobs if automated systems can do the same work.

 

Re-training for new jobs will also become a crucial skill, and it’s individuals from wealthier backgrounds that are more able to do so, says the report. This can already be seen in the disparity in terms of post-graduate education, with individuals in the UK with working class or poorer backgrounds far less likely to re-train after university.

From DSC:
I can’t emphasize this enough. There are dangerous, tumultuous times ahead if we can’t figure out ways to help ALL people within the workforce reinvent themselves quickly, cost-effectively, and conveniently. Re-skilling/up-skilling ourselves is becoming increasingly important. And I’m not just talking about highly-educated people. I’m talking about people whose jobs are going to disappear in the near future — especially people whose stepping stones into brighter futures are vanishing, and who are going to wake up to a very different world. A very harsh world.

That’s why I’m so passionate about helping to develop a next generation learning platform. Higher education, as an industry, has some time left to figure out its part/contribution in this new world. But the window of time could be closing, as another window of opportunity / era could be opening up for “the next Amazon.com of higher education.”

It’s up to current, traditional institutions of higher education as to how much they want to be a part of the solution. Some of the questions each institution ought to be asking are:

  1. Given our institution’s mission/vision, what landscapes should we be pulse-checking?
  2. Do we have faculty/staff/members of administration looking at those landscapes that are highly applicable to our students and to their futures? How, specifically, are the insights from those employees fed into the strategic plans of our institution?
  3. What are some possible scenarios as a result of these changing landscapes? What would our response(s) be for each scenario?
  4. Are there obstacles keeping us from innovating and responding to the shifting landscapes, especially within the workforce?
  5. How do we remove those obstacles?
  6. On a scale of 0 (we don’t innovate at all) to 10 (highly innovative), where is our culture today? Where do we hope to be 5 years from now? How do we get there?

…and there are many other questions no doubt. But I don’t think we’re looking into the future nearly enough to see the massive needs — and real issues — ahead of us.

The report, which was carried out by the Boston Consulting Group and published this Wednesday [7/12/17], looks specifically at the UK, where it says some 15 million jobs are at risk of automation. But the Sutton Trust says its findings are also relevant to other developed nations, particularly the US, where social mobility is a major problem.

AI is making it extremely easy for students to cheat — from wired.com by Pippa Biddle

Excerpt (emphasis DSC):

For years, students have turned to CliffsNotes for speedy reads of books, SparkNotes to whip up talking points for class discussions, and Wikipedia to pad their papers with historical tidbits. But today’s students have smarter tools at their disposal—namely, Wolfram|Alpha, a program that uses artificial intelligence to perfectly and untraceably solve equations. Wolfram|Alpha uses natural language processing technology, part of the AI family, to provide students with an academic shortcut that is faster than a tutor, more reliable than copying off of friends, and much easier than figuring out a solution yourself.

 

Use of Wolfram|Alpha is difficult to trace, and in the hands of ambitious students, its perfect solutions are having unexpected consequences.
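Wolfram|Alpha’s engine is proprietary and vastly more general, but the “perfect, untraceable solutions” part of the article is easy to appreciate: for much of a typical homework set, a mechanical solver is only a few lines of code. A sketch for quadratic equations:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c == 0 (complex when needed)."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

roots = solve_quadratic(1, -3, 2)  # x**2 - 3x + 2 = (x - 1)(x - 2)
```

The answer arrives instantly and leaves no trail — which is precisely why graded work that only asks for the final answer no longer measures what it used to.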

The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.

AI will make forging anything entirely too easy — from wired.com by Greg Allen

Excerpt:

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

 
Also referenced in the above article:

Web host agrees to pay $1m after it’s hit by Linux-targeting ransomware — from arstechnica.com by Dan Goodin
Windfall payment by poorly secured host is likely to inspire new ransomware attacks.

Excerpt (emphasis above and below by DSC):

A Web-hosting service recently agreed to pay $1 million to a ransomware operation that encrypted data stored on 153 Linux servers and 3,400 customer websites, the company said recently.

The South Korean Web host, Nayana, said in a blog post published last week that initial ransom demands were for five billion won worth of Bitcoin, which is roughly $4.4 million. Company negotiators later managed to get the fee lowered to 1.8 billion won and ultimately landed a further reduction to 1.2 billion won, or just over $1 million. An update posted Saturday said Nayana engineers were in the process of recovering the data. The post cautioned that the recovery was difficult and would take time.

From DSC:
This type of technology could be good, or it could be bad…or, like many technologies, it could be both — depends upon how it’s used. The resources below mention some positive applications, but also some troubling applications.

Lyrebird claims it can recreate any voice using just one minute of sample audio — from theverge.com by James Vincent
The results aren’t 100 percent convincing, but it’s a sign of things to come

Excerpt:

Artificial intelligence is making human speech as malleable and replicable as pixels. Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

Also see:

 

Imitating people’s speech patterns precisely could bring trouble — from economist.com by
You took the words right out of my mouth

Excerpt:

UTTER 160 or so French or English phrases into a phone app developed by CandyVoice, a new Parisian company, and the app’s software will reassemble tiny slices of those sounds to enunciate, in a plausible simulacrum of your own dulcet tones, whatever typed words it is subsequently fed. In effect, the app has cloned your voice. The result still sounds a little synthetic but CandyVoice’s boss, Jean-Luc Crébouw, reckons advances in the firm’s algorithms will render it increasingly natural. Similar software for English and four widely spoken Indian languages, developed under the name of Festvox, by Carnegie Mellon University’s Language Technologies Institute, is also available. And Baidu, a Chinese internet giant, says it has software that needs only 50 sentences to simulate a person’s voice.

Until recently, voice cloning—or voice banking, as it was then known—was a bespoke industry which served those at risk of losing the power of speech to cancer or surgery.

More troubling, any voice—including that of a stranger—can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama, Birmingham, led by Nitesh Saxena, were able to use Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer.

 
Per Candyvoice.com:

Expert in digital voice processing, CandyVoice offers software to facilitate and improve vocal communication between people and communicating objects. With applications in:

  • Health — Customize devices for augmentative and alternative vocal communication by integrating your users’ personal vocal models
  • Robots & communicating objects — Improve communication with robots through voice conversion, customized TTS, and noise filtering
  • Video games — Enhance the gaming experience by integrating real-time vocal conversion of characters’ voices and customized TTS

Also related:

From DSC:
Given this type of technology, what’s to keep someone from cloning a voice, putting together whatever they want that person to say, and then making it appear that Alexa recorded that person saying it?

Making sure the machines don’t take over — from raconteur.net by Mark Frary
Preparing economic players for the impact of artificial intelligence is a work in progress which requires careful handling

 

From DSC:
This short article presents a balanced approach, as it relays both the advantages and disadvantages of AI in our world.

Perhaps one of higher education’s new tasks will be to determine the best jobs to go into — ones that will survive the next 5-10+ years — and to help people get up-to-speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple areas. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.

 

Also see:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.


From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What the potential ramifications for higher education might be, given the changes just starting to take place in the workplace due to artificial intelligence (e.g., the increasing use of algorithms, machine learning, and deep learning), automation, and robotics
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How should we, as a nation, prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment?

As found in the April 9th, 2017 edition of our local newspaper:

When even our local newspaper is picking up on this trend, you know it is real and significant.

 

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR. NPR is having a journalist travel across the country, trying to identify “robot-safe” jobs. Here’s the feature from Marketplace.org.

 

 

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should be planned for and begin to be initiated
in the way(s) that we accredit new programs?

 

 

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant, up-to-date streams of content and learning experiences. This could happen via the addition of smaller learning hubs — some of them makeshift hubs operating at locations these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.


Addendum from the corporate world (emphasis DSC):


From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?
 
© 2025 | Daniel Christian