From DSC:
I know Quentin Schultze from our years working together at Calvin College, in Grand Rapids, Michigan (USA). I have come to greatly appreciate Quin as a person of faith, as an innovative/entrepreneurial professor, as a mentor to his former students, and as an excellent communicator. 

Quin has written a very concise, wisdom-packed book that I would like to recommend to anyone seeking to become a better communicator, leader, and servant. But I would especially like to recommend this book to the leadership at Google, Amazon, Apple, Microsoft, IBM, Facebook, Nvidia, the major companies developing robots, and other high-tech companies. Why do I list these organizations? Because given the exponential pace of technological change, these organizations — and their leaders — have an enormous responsibility to make sure that the technologies they are developing result in positive changes for societies throughout the globe. They need wisdom, especially as they work on emerging technologies such as artificial intelligence (AI), personal assistants and bots, algorithms, robotics, the Internet of Things, big data, blockchain, and more. These technologies already exert an increasingly powerful influence on societies worldwide. And we haven’t seen anything yet! Just because we can develop and implement something doesn’t mean that we should. Again, we need wisdom here.

But as Quin states, it’s not just about knowledge, the mind and our thoughts. It’s about our hearts as well. That is, we need leaders who care about others, who can listen well to others, who can serve others well while avoiding gimmicks, embracing diversity, building trust, fostering compromise and developing/exhibiting many of the other qualities that Quin writes about in his book. Our societies desperately need leaders who care about others and who seek to serve others well.

I highly recommend you pick up a copy of Quin’s book. There are few people who can communicate as much in as few words as Quin can. In fact, I wish that more writing on the web and more articles/research coming out of academia would be as concisely and powerfully written as Quin’s book, Communicate Like a True Leader: 30 Days of Life-Changing Wisdom.

To lead is to accept responsibility and act responsibly.
Quentin Schultze

“An algorithm designed badly can go on for a long time, silently wreaking havoc.”

— Cathy O’Neil

Cathy O’Neil: The era of blind faith in big data must end | TED Talk | TED.com

Description:
Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.

Addendum:

As AI Gets Smarter, Scholars Raise Ethics Questions — by Chris Hayhurst
Interdisciplinary artificial intelligence research fosters philosophical discussions.

Excerpt (emphasis DSC):

David Danks, head of the philosophy department at Carnegie Mellon University, has a message for his colleagues in the CMU robotics department: As they invent and develop the technologies of the future, he encourages them to consider the human dimensions of their work.

His concern? All too often, Danks says, technological innovation ignores the human need for ethical guidelines and moral standards. That’s especially true when it comes to innovations such as artificial intelligence and automation, he says.

“It’s, ‘Look at this cool technology that we’ve got. How can you stand in the way of something like this?’” says Danks. “We should be saying, ‘Wait a second. How is this technology affecting people?’”

As an example, Danks points to AI-powered medical diagnostic systems. Such tools have great potential to parse data for better decision-making, but they lack the social interaction between patient and physician that can be so important to those decisions. It’s one thing to have a technology that can diagnose a patient with strep throat and recommend a certain antibiotic, but what about a patient with cancer who happens to be a professional violinist?

“For most people, you’d just give them the most effective drug,” says Danks. “But what do you do if one of the side effects of that medication is hand tremors? I see a lot of possibilities with AI, but it’s also important to recognize the challenges.”


From DSC:
I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, but the article below made me revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below, but there were some other articles/vendors’ products that I had seen on this topic that focused specifically on ATSs, but I couldn’t locate them all.

 

How Google’s AI-Powered Job Search Will Impact Companies And Job Seekers — from forbes.com by Forbes Coaches Council

Excerpt:

In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.

As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…

5. Expect competition to increase.
Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could tax the already-strained ATS systems and, unless fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco

10. Understanding keywords and trending topics will be essential.
Since Google’s AI is based on crowd-gathered metrics, the importance of keywords and understanding trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc 

Also see:

In Unilever’s radical hiring experiment, resumes are out, algorithms are in — from foxbusiness.com by Kelsey Gee 

Excerpt:

Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.

The Future of HR: Is it Dying? — from hrtechnologist.com by Rhucha Kulkarni

Excerpt (emphasis DSC):

The debate is on, whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable, as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is believed to happen to the HR profession, says a report by Career Builder. HR jobs are at threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business being automated over the next 10 years. To delve deeper into the imminent problem, Career Builder carried out a study of 719 HR professionals in the private sector, specifically looking for the rate of adoption of emerging technologies in HR and what HR professionals perceived about it.

The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.

From DSC:
Are you aware that if you apply for a job at many organizations nowadays, there is a significant chance your resume will never make it in front of a human’s eyes for review? Were you aware that an Applicant Tracking System (an ATS) will likely siphon off and filter out your resume unless it contains exactly the right keywords, mentioned the optimal number of times?

And were you aware that many advisors assert that you should use a one-page resume, or a two-page resume at most? Well, assuming that you have to edit heavily to get down to one or two pages, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s length, you cut out many of the words that an ATS may be scanning for. (By the way, advisors recommend creating a Wordle from the job description to ascertain the likely keywords. Even so, you don’t know which exact keywords the ATS will be looking for in your specific case, or how many times to use them; numerous words can be of similar size in the resulting Wordle graphic. So is that one- to two-page resume helping you or hurting you when you can only submit one resume for a position?)
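To make the mechanics concrete, here is a minimal sketch of that kind of keyword screening (in Python; the keywords, minimum counts, and matching rules are hypothetical, since real ATS products are proprietary and vary by vendor):

```python
import re

# Hypothetical screen: in a real ATS, the keywords and minimum counts
# would come from the job requisition; the scoring rules are proprietary.
REQUIRED_KEYWORDS = {"python": 2, "sql": 1, "machine learning": 1}

def keyword_counts(resume_text: str) -> dict:
    """Count case-insensitive occurrences of each keyword phrase."""
    text = resume_text.lower()
    return {kw: len(re.findall(re.escape(kw), text)) for kw in REQUIRED_KEYWORDS}

def passes_screen(resume_text: str) -> bool:
    """A resume advances only if every keyword meets its minimum count."""
    counts = keyword_counts(resume_text)
    return all(counts[kw] >= n for kw, n in REQUIRED_KEYWORDS.items())

resume = "Built Python ETL pipelines (Python, SQL) for machine learning teams."
print(passes_screen(resume))  # True
```

Note how brittle this is: trim that sentence to save space and you can drop below a threshold and be filtered out, which is exactly the tension with the one-page-resume advice.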

Vendors are hailing these ATS systems as being major productivity boosters for their HR departments…and that might be true in some cases. But my question is, at what cost?

At this point in time, I still believe that humans are better than software and algorithms at making judgment calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call for now. I want a pair of human eyes to scan my resume, not a (potentially) narrowly defined algorithm. A human being might recognize transferable skills better than a piece of code can at this point.

Just so you know…in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:

 

85 Percent of Job Applicants Lie on Resumes. Here’s How to Spot a Dishonest Candidate — from inc.com by J.T. O’Donnell
A new study shows huge increase in lies on job applications.

Excerpt (emphasis DSC):

Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.

 

From DSC:
I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.

But students, you should know that these systems are in place. Those of us in higher education should know about them as well, as many of us are being negatively impacted by the current landscape within higher education.

Also see:

Why Your Approach To Job Searching Is Failing — from forbes.com by Jeanna McGinnis

Excerpt:

Is Your Resume ATS Friendly?
Did you know that an ATS (applicant tracking system) will play a major role in whether or not your resume is selected for further review when you’re applying to opportunities through online job boards?

It’s true. When you apply to a position a company has posted online, a human usually isn’t the first to review your resume, a computer program is. Scouring your resume for keywords, terminology and phrases the hiring manager is targeting, the program will toss your resume if it can’t understand the content it’s reading. Basically, your resume doesn’t stand a chance of making it to the next level if it isn’t optimized for ATS.

To ensure your resume makes it past the evil eye of ATS, format your resume correctly for applicant tracking programs, target it to the opportunity and check for spelling errors. If you don’t, you’re wasting your time applying online.

 

Con Job: Hackers Target Millennials Looking for Work — from wsj.com by Kelsey Gee
Employment scams pose a growing threat as applications and interviews become more digital

Excerpt:

Hackers attempt to hook tens of thousands of people like Mr. Latif through job scams each year, according to U.S. Federal Trade Commission data, aiming to trick them into handing over personal or sensitive information, or to gain access to their corporate networks.

Employment fraud is nothing new, but as more companies shift to entirely-digital job application processes, Better Business Bureau director of communications Katherine Hutt said scams targeting job seekers pose a growing threat. Job candidates are now routinely invited to fill out applications, complete skill evaluations and interview—all on their smartphones, as employers seek to cast a wider net for applicants and improve the matchmaking process for entry-level hires.

Young people are a frequent target. Of the nearly 3,800 complaints the nonprofit has received from U.S. consumers on its scam report tracker in the past two years, people under 34 years old were the most susceptible to such scams, which frequently offer jobs requiring little to no prior experience, Ms. Hutt said.

Hackers are finding new ways to prey on young job seekers.

A leading Silicon Valley engineer explains why every tech worker needs a humanities education — from qz.com by Tracy Chou

Excerpts:

I was no longer operating in a world circumscribed by lesson plans, problem sets and programming assignments, and intended course outcomes. I also wasn’t coding to specs, because there were no specs. As my teammates and I were building the product, we were also simultaneously defining what it should be, whom it would serve, what behaviors we wanted to incentivize amongst our users, what kind of community it would become, and what kind of value we hoped to create in the world.

I still loved immersing myself in code and falling into a state of flow—those hours-long intensive coding sessions where I could put everything else aside and focus solely on the engineering tasks at hand. But I also came to realize that such disengagement from reality and societal context could only be temporary.

At Quora, and later at Pinterest, I also worked on the algorithms powering their respective homefeeds: the streams of content presented to users upon initial login, the default views we pushed to users. It seems simple enough to want to show users “good” content when they open up an app. But what makes for good content? Is the goal to help users to discover new ideas and expand their intellectual and creative horizons? To show them exactly the sort of content that they know they already like? Or, most easily measurable, to show them the content they’re most likely to click on and share, and that will make them spend the most time on the service?

 

Ruefully—and with some embarrassment at my younger self’s condescending attitude toward the humanities—I now wish that I had strived for a proper liberal arts education. That I’d learned how to think critically about the world we live in and how to engage with it. That I’d absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I’d even realized that these were worthwhile thoughts to fill my mind with—that all of my engineering work would be contextualized by such subjects.

It worries me that so many of the builders of technology today are people like me; people who haven’t spent anywhere near enough time thinking about these larger questions of what it is that we are building, and what the implications are for the world.
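The homefeed question Chou raises above can be made concrete: rank the same content under different objectives and you get very different feeds. A small illustrative sketch (the items and scores are made up):

```python
# Hypothetical feed items with invented engagement and novelty scores,
# illustrating how the choice of ranking objective changes what users see.
items = [
    {"title": "Clickbait quiz",        "predicted_clicks": 0.9, "novelty": 0.1},
    {"title": "Familiar topic recap",  "predicted_clicks": 0.6, "novelty": 0.3},
    {"title": "New idea, niche field", "predicted_clicks": 0.2, "novelty": 0.9},
]

def rank(items, objective):
    """Order the feed by whatever the chosen objective rewards most."""
    return [it["title"] for it in sorted(items, key=objective, reverse=True)]

# "Most easily measurable" goal: maximize predicted clicks and shares.
by_clicks = rank(items, lambda it: it["predicted_clicks"])

# Alternative goal: help users discover new ideas.
by_discovery = rank(items, lambda it: it["novelty"])

print(by_clicks)     # ['Clickbait quiz', 'Familiar topic recap', 'New idea, niche field']
print(by_discovery)  # ['New idea, niche field', 'Familiar topic recap', 'Clickbait quiz']
```

The code is trivial; the hard part is the one line choosing the objective, which is precisely the value judgment Chou says engineers end up making.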


Also see:


 

Why We Need the Liberal Arts in Technology’s Age of Distraction — from time.com by Tim Bajarin

Excerpt:

In a recent Harvard Business Review piece titled “Liberal Arts in the Data Age,” author JM Olejarz writes about the importance of reconnecting a lateral, liberal arts mindset with the sort of rote engineering approach that can lead to myopic creativity. Today’s engineers have been so focused on creating new technologies that their short term goals risk obscuring unintended longterm outcomes. While a few companies, say Intel, are forward-thinking enough to include ethics professionals on staff, they remain exceptions. At this point all tech companies serious about ethical grounding need to be hiring folks with backgrounds in areas like anthropology, psychology and philosophy.


The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.


AI will make forging anything entirely too easy — from wired.com by Greg Allen

Excerpt:

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

Also referenced in the above article:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

Tech giants grapple with the ethical concerns raised by the AI boom — from technologyreview.com by Tom Simonite
As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

Excerpt:

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations.

“We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

Recorded Saturday, February 25th, 2017 and published on Mar 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, note-taking, action items, and even translation into other languages.

 

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to allow users to access features via voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.

 


From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or with others in the same physical space — you have one incredibly powerful platform!

#NLP | #AI | #CognitiveComputing | #SmartClassrooms
#LearningSpaces | #Collaboration | #Meetings

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.

Amazon deepens university ties in artificial intelligence race — by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster – a tactic it now is extending to the classroom.

The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation, and are investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea and Japan collectively account for nearly half the planet’s shipments of articulated robots in the world.


Artificial Intelligence – Research Areas
 
© 2017 | Daniel Christian