From DSC:
I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, and the article below prompted me to revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below, though I couldn’t relocate all of the other articles and vendors’ products I had seen that focused specifically on ATSs.

How Google’s AI-Powered Job Search Will Impact Companies And Job Seekers — from forbes.com by Forbes Coaches Council

Excerpt:

In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.

As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…

5. Expect competition to increase.
Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could further tax already-strained ATS systems and, unless that is fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco

10. Understanding keywords and trending topics will be essential.
Since Google’s AI is based on crowd-gathered metrics, the importance of keywords and understanding trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc 

Also see:

In Unilever’s radical hiring experiment, resumes are out, algorithms are in — from foxbusiness.com by Kelsey Gee 

Excerpt:

Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.

The Future of HR: Is it Dying? — from hrtechnologist.com by Rhucha Kulkarni

Excerpt (emphasis DSC):

The debate is on, whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable, as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is believed to happen to the HR profession, says a report by Career Builder. HR jobs are at threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business being automated over the next 10 years. To delve deeper into the imminent problem, Career Builder carried out a study of 719 HR professionals in the private sector, specifically looking for the rate of adoption of emerging technologies in HR and what HR professionals perceived about it.

The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.

From DSC:
Are you aware that if you apply for a job at many organizations nowadays, your resume has a significant chance of never making it in front of a human’s eyeballs for review? Were you aware that an Applicant Tracking System (an ATS) will likely siphon off and filter out your resume unless it contains exactly the right keywords, mentioned the optimal number of times?

And were you aware that many advisors assert you should use a 1-page resume (a 2-page resume at most)? Well…assuming you have to edit heavily to get down to 1–2 pages, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s size and information, you cut out numerous words that the ATS may be scanning for. (BTW, advisors recommend creating a Wordle from the job description to ascertain the likely keywords; but you still don’t know which exact keywords the ATS will be looking for in your specific case/job application, or how many times to use them. Numerous words can be of similar size in the resulting Wordle graphic…so is that 1–2 page resume helping you or hurting you when you can only submit one resume per position/organization?)
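
From DSC (an illustration): to make the keyword mechanics concrete, here is a toy sketch in Python of the kind of frequency-based filter being described. The keywords, threshold, and logic are my own invention; real ATS products are proprietary and vary widely.

```python
import re
from collections import Counter

def ats_keyword_screen(resume_text, required_keywords, min_mentions=2):
    # Toy filter: pass a resume only if every required keyword appears
    # at least `min_mentions` times. Purely illustrative; real ATS
    # products are proprietary and considerably more complex.
    words = Counter(re.findall(r"\w+", resume_text.lower()))
    counts = {kw: words[kw.lower()] for kw in required_keywords}
    return all(c >= min_mentions for c in counts.values()), counts

# Note how trimming a resume down to one page could drop a keyword
# below the threshold and filter the candidate out before any human
# ever sees the document.
ok, counts = ats_keyword_screen(
    "Built data pipelines in Python. Used Python and SQL daily; SQL tuning.",
    ["python", "sql"],
)
print(ok, counts)  # True, {'python': 2, 'sql': 2}
```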

Vendors are hailing these ATSs as major productivity boosters for HR departments…and that might be true in some cases. But my question is: at what cost?

At this point in time, I still believe that humans are better than software/algorithms at making judgment calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call at this point. I want a pair of human eyeballs to scan my resume, not a (potentially) narrowly defined algorithm. A human being might see transferable skills better than a piece of code can at this point.

Just so you know…in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:

85 Percent of Job Applicants Lie on Resumes. Here’s How to Spot a Dishonest Candidate — from inc.com by J.T. O’Donnell
A new study shows huge increase in lies on job applications.

Excerpt (emphasis DSC):

Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.
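
From DSC (an illustration): and here is an equally toy sketch of the “exact match” behavior on structured fields that the excerpt describes. The field names and rules below are invented; the point is simply that a single missed requirement drops the resume.

```python
def exact_match_screen(candidate, requirements):
    # Each requirement is a predicate on one field; one miss rejects
    # the application outright, as the article describes.
    return all(rule(candidate.get(field))
               for field, rule in requirements.items())

requirements = {
    "degree": lambda d: d == "BS",
    "years_experience": lambda y: (y or 0) >= 5,
    "salary_expectation": lambda s: (s or 0) <= 90_000,
}
candidate = {"degree": "BS", "years_experience": 4,
             "salary_expectation": 85_000}
print(exact_match_screen(candidate, requirements))  # False -- rejected
```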

From DSC:
I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.

But students, you should know that these ATSs are in place. Those of us in higher education should know about them as well, since many of us are being negatively impacted by the current landscape within higher education.

Also see:

Why Your Approach To Job Searching Is Failing — from forbes.com by Jeanna McGinnis

Excerpt:

Is Your Resume ATS Friendly?
Did you know that an ATS (applicant tracking system) will play a major role in whether or not your resume is selected for further review when you’re applying to opportunities through online job boards?

It’s true. When you apply to a position a company has posted online, a human usually isn’t the first to review your resume, a computer program is. Scouring your resume for keywords, terminology and phrases the hiring manager is targeting, the program will toss your resume if it can’t understand the content it’s reading. Basically, your resume doesn’t stand a chance of making it to the next level if it isn’t optimized for ATS.

To ensure your resume makes it past the evil eye of ATS, format your resume correctly for applicant tracking programs, target it to the opportunity and check for spelling errors. If you don’t, you’re wasting your time applying online.

Con Job: Hackers Target Millennials Looking for Work – from wsj.com by Kelsey Gee
Employment scams pose a growing threat as applications and interviews become more digital

Excerpt:

Hackers attempt to hook tens of thousands of people like Mr. Latif through job scams each year, according to U.S. Federal Trade Commission data, aiming to trick them into handing over personal or sensitive information, or to gain access to their corporate networks.

Employment fraud is nothing new, but as more companies shift to entirely-digital job application processes, Better Business Bureau director of communications Katherine Hutt said scams targeting job seekers pose a growing threat. Job candidates are now routinely invited to fill out applications, complete skill evaluations and interview—all on their smartphones, as employers seek to cast a wider net for applicants and improve the matchmaking process for entry-level hires.

Young people are a frequent target. Of the nearly 3,800 complaints the nonprofit has received from U.S. consumers on its scam report tracker in the past two years, people under 34 years old were the most susceptible to such scams, which frequently offer jobs requiring little to no prior experience, Ms. Hutt said.

Hackers are finding new ways to prey on young job seekers.

A leading Silicon Valley engineer explains why every tech worker needs a humanities education — from qz.com by Tracy Chou

Excerpts:

I was no longer operating in a world circumscribed by lesson plans, problem sets and programming assignments, and intended course outcomes. I also wasn’t coding to specs, because there were no specs. As my teammates and I were building the product, we were also simultaneously defining what it should be, whom it would serve, what behaviors we wanted to incentivize amongst our users, what kind of community it would become, and what kind of value we hoped to create in the world.

I still loved immersing myself in code and falling into a state of flow—those hours-long intensive coding sessions where I could put everything else aside and focus solely on the engineering tasks at hand. But I also came to realize that such disengagement from reality and societal context could only be temporary.

At Quora, and later at Pinterest, I also worked on the algorithms powering their respective homefeeds: the streams of content presented to users upon initial login, the default views we pushed to users. It seems simple enough to want to show users “good” content when they open up an app. But what makes for good content? Is the goal to help users to discover new ideas and expand their intellectual and creative horizons? To show them exactly the sort of content that they know they already like? Or, most easily measurable, to show them the content they’re most likely to click on and share, and that will make them spend the most time on the service?
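
From DSC (an illustration): the “most easily measurable” goal Chou mentions often boils down to ranking items by a weighted engagement score. The sketch below is a deliberately simplified, hypothetical scoring function, not Quora’s or Pinterest’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    p_click: float    # predicted probability of a click
    p_share: float    # predicted probability of a share
    dwell_est: float  # predicted seconds of attention

def engagement_score(item, w_click=1.0, w_share=2.0, w_dwell=0.001):
    # Optimizing purely for this number favors clickable, shareable,
    # time-consuming content, regardless of whether it broadens a
    # user's horizons -- exactly the tension raised above.
    return (w_click * item.p_click
            + w_share * item.p_share
            + w_dwell * item.dwell_est)

def rank_feed(items):
    return sorted(items, key=engagement_score, reverse=True)

feed = [Item("deep-dive essay", 0.05, 0.01, 300.0),
        Item("outrage clip", 0.30, 0.20, 45.0)]
print([i.item_id for i in rank_feed(feed)])
# ['outrage clip', 'deep-dive essay']
```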

Ruefully—and with some embarrassment at my younger self’s condescending attitude toward the humanities—I now wish that I had strived for a proper liberal arts education. That I’d learned how to think critically about the world we live in and how to engage with it. That I’d absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I’d even realized that these were worthwhile thoughts to fill my mind with—that all of my engineering work would be contextualized by such subjects.

It worries me that so many of the builders of technology today are people like me; people who haven’t spent anywhere near enough time thinking about these larger questions of what it is that we are building, and what the implications are for the world.

Also see:

Why We Need the Liberal Arts in Technology’s Age of Distraction — from time.com by Tim Bajarin

Excerpt:

In a recent Harvard Business Review piece titled “Liberal Arts in the Data Age,” author JM Olejarz writes about the importance of reconnecting a lateral, liberal arts mindset with the sort of rote engineering approach that can lead to myopic creativity. Today’s engineers have been so focused on creating new technologies that their short term goals risk obscuring unintended longterm outcomes. While a few companies, say Intel, are forward-thinking enough to include ethics professionals on staff, they remain exceptions. At this point all tech companies serious about ethical grounding need to be hiring folks with backgrounds in areas like anthropology, psychology and philosophy.

The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.

AI will make forging anything entirely too easy — from wired.com by Greg Allen

Excerpt:

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

Also referenced in the above article:

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 
“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

Tech giants grapple with the ethical concerns raised by the AI boom — from technologyreview.com by Tom Simonite
As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

Excerpt:

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations.

“We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

 
Recorded Saturday, February 25th, 2017 and published on Mar 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

(Below emphasis via DSC)

IBM and Ricoh have partnered for a cognitive-enabled interactive whiteboard which uses IBM’s Watson intelligence and voice technologies to support voice commands, taking notes and actions and even translating into other languages.

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to allow users to access features via voice. It ensures that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
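
From DSC (an illustration): as a rough sketch of how such voice control could be wired up. The intent names, rule table, and `classify_intent` stand-in below are all invented; the actual product uses IBM Watson’s natural-language services, whose real API is not shown here.

```python
class Whiteboard:
    """Toy stand-in for the interactive whiteboard."""
    def advance(self): print("-> next slide")
    def back(self): print("-> previous slide")
    def translate(self, lang): print(f"-> captions translated to {lang}")

def classify_intent(utterance):
    # Stand-in for a real NLU service; a production system would call
    # something like the Watson services named above instead of
    # matching hard-coded phrases.
    rules = {"next slide": "advance_slide",
             "back one slide": "previous_slide",
             "translate": "translate_captions"}
    for phrase, intent in rules.items():
        if phrase in utterance.lower():
            return intent
    return "unknown"

ACTIONS = {
    "advance_slide": lambda board: board.advance(),
    "previous_slide": lambda board: board.back(),
    "translate_captions": lambda board: board.translate("es"),
}

def handle_utterance(board, utterance):
    action = ACTIONS.get(classify_intent(utterance))
    if action:
        action(board)

handle_utterance(Whiteboard(), "Watson, next slide please")  # -> next slide
```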

From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration (whether you are working remotely or with others in the same physical space) — you have one incredibly powerful platform!

#NLP | #AI | #CognitiveComputing | #SmartClassrooms
#LearningSpaces | #Collaboration | #Meetings

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.

Amazon deepens university ties in artificial intelligence race — by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster – a tactic it now is extending to the classroom.

 
The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

 
Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation, and is investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea and Japan collectively account for nearly half the planet’s shipments of articulated robots in the world.

Artificial Intelligence – Research Areas

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 
Code-Dependent: Pros and Cons of the Algorithm Age — from pewinternet.org by Lee Rainie and Janna Anderson
Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms. Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.

Algorithms are often elegant and incredibly useful tools used to accomplish tasks. They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences. Recent news items tie to these concerns…
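
From DSC (an illustration): for one concrete instance of the excerpt’s definition, here is a classic everyday algorithm of the kind that underlies GPS routing (Dijkstra’s shortest path), written out in a few lines of Python with a made-up road network.

```python
import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's algorithm: "instructions for solving a problem," in
    # this case finding the cheapest route from start to goal.
    # `graph` maps each node to a dict of {neighbor: edge_cost}.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(
                    frontier, (cost + step_cost, neighbor, path + [neighbor]))
    return float("inf"), []

roads = {"A": {"B": 5, "C": 2}, "C": {"B": 1, "D": 7}, "B": {"D": 3}}
print(shortest_path(roads, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
```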

The use of algorithms is spreading as massive amounts of data are being created, captured and analyzed by businesses and governments. Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace.

A world without work — by Derek Thompson; The Atlantic — from July 2015

Excerpts:

Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.

For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War  II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.

Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.

“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed”…

“The cultural breakdown matters even more than the economic breakdown.”

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen.

What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both”…

Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury.

Most people do need to achieve things through, yes, work to feel a lasting sense of purpose.

When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit.

What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.

“I can’t stress this enough: this isn’t just about economics; it’s psychological”…

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

From DSC:
Though I’m not saying Thompson is necessarily asserting this in his article, I don’t see a world without work as a dream. In fact, as the quote immediately before this paragraph alludes to, I think that most people would not like a life that is devoid of all work. I think work is where we can serve others, find purpose and meaning for our lives, seek to be instruments of making the world a better place, and attempt to design/create something that’s excellent.  We may miss the mark often (I know I do), but we keep trying.

A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

 
Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

 
The Periodic Table of AI — from ai.xprize.org by Kris Hammond

Excerpts:

This is an invitation to collaborate.  In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.

Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.

Even so, we are looking at a future in which intelligent technologies are becoming commonplace.

With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence.  By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.

Also see this article, which contains the graphic below:

From DSC:
These graphics are helpful to me, as they increase my understanding of some of the complexities involved within the realm of artificial intelligence.

Also relevant/see:

Artificial Intelligence Ethics, Jobs & Trust – UK Government Sets Out AI future — from cbronline.com by Ellie Burns

Excerpt:

UK government is driving the artificial intelligence agenda, pinpointing it as a future technology driving the fourth revolution and billing its importance on par with the steam engine.

The report on Artificial Intelligence by the Government Office for Science follows the recent House of Commons Committee report on Robotics and AI, setting out the opportunities and implications for the future of decision making. In a report which spans government deployment, ethics and the labour market, Digital Minister Matt Hancock provided a foreword which pushed AI as a technology which would benefit the economy and UK citizens.

 
MIT’s “Moral Machine” Lets You Decide Who Lives & Dies in Self-Driving Car Crashes — from futurism.com

In brief:

  • MIT’s 13-point exercise lets users weigh the life-and-death decisions that self-driving cars could face in the future.
  • Projects like the “Moral Machine” give engineers insight into how they should code complex decision-making capabilities into AI.
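
From DSC (an illustration): a heavily simplified sketch of the kind of outcome-scoring dilemma the exercise asks people to confront. Every number below is an invented placeholder; the point of the Moral Machine is precisely that people do not agree on these weights, and no real vehicle decides this way.

```python
# Hypothetical crash-outcome scoring. The weights stand in for value
# judgments that the Moral Machine shows people do not agree on.
OUTCOMES = {
    "swerve": {"passengers_at_risk": 2, "pedestrians_at_risk": 0},
    "stay":   {"passengers_at_risk": 0, "pedestrians_at_risk": 3},
}

def expected_harm(outcome, w_passenger=1.0, w_pedestrian=1.0):
    return (w_passenger * outcome["passengers_at_risk"]
            + w_pedestrian * outcome["pedestrians_at_risk"])

# Changing a single weight flips the "right" decision, which is the
# ethical question engineers cannot settle with code alone.
for action, outcome in OUTCOMES.items():
    print(action, expected_harm(outcome))  # swerve 2.0, stay 3.0
```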

Wearable Tech Weaves Its Way Into Learning — from edsurge.com by Marguerite McNeal

Excerpt:

“Ethics often falls behind the technology,” says Voithofer of Ohio State. Personal data becomes more abstract when it’s combined with other datasets or reused for multiple purposes, he adds. Say a device collects and anonymizes data about a student’s emotional patterns. Later on that information might be combined with information about her test scores and could be reassociated with her. Some students might object to colleges making judgments about their academic performance from indirect measurements of their emotional states.

 
New era of ‘cut and paste’ humans close as man injected with genetically-edited blood – from telegraph.co.uk by Sarah Knapton

Excerpt:

A world where DNA can be rewritten to fix deadly diseases has moved a step closer after scientists announced they had genetically-edited the cells of a human for the first time using a groundbreaking technique.

A man in China was injected with modified immune cells which had been engineered to fight his lung cancer. Larger trials are scheduled to take place next year in the US and Beijing, which scientists say could open up a new era of genetic medicine.

The technique used is called Crispr, which works like tiny molecular scissors snipping away genetic code and replacing it with new instructions to build better cells.

 
Troubling Study Says Artificial Intelligence Can Predict Who Will Be Criminals Based on Facial Features — from theintercept.com by Sam Biddle

Artificial intelligence is quickly becoming as biased as we are — from thenextweb.com by Bryan Clark

A bug in the matrix: virtual reality will change our lives. But will it also harm us? — from theguardian.stfi.re
Prejudice, harassment and hate speech have crept from the real world into the digital realm. For virtual reality to succeed, it will have to tackle this from the start

Excerpt:

Can you be sexually assaulted in virtual reality? And can anything be done to prevent it? Those are a few of the most pressing ethical questions technologists, investors and we the public will face as VR grows.

 
Light Bulbs Flash “SOS” in Scary Internet of Things Attack — from fortune.com by Jeff John Roberts

How Big Data Transformed Applying to College — from slate.com by Cathy O’Neil
It’s made it tougher, crueler, and ever more expensive.

 
Not OK, Google — from techcrunch.com by Natasha Lomas

Excerpts (emphasis DSC):

The scope of Alphabet’s ambition for the Google brand is clear: It wants Google’s information organizing brain to be embedded right at the domestic center — i.e. where it’s all but impossible for consumers not to feed it with a steady stream of highly personal data. (Sure, there’s a mute button on the Google Home, but the fact you have to push a button to shut off the ear speaks volumes… )

In other words, your daily business is Google’s business.

“We’re moving from a mobile-first world to an AI-first world,” said CEO Sundar Pichai…

But what’s really not OK, Google is the seismic privacy trade-offs involved here. And the way in which Alphabet works to skate over the surface of these concerns.

 
What he does not say is far more interesting, i.e. that in order to offer its promise of “custom convenience” — with predictions about restaurants you might like to eat at, say, or suggestions for how bad the traffic might be on your commute to work — it is continuously harvesting and data-mining your personal information, preferences, predilections, peccadilloes, prejudices…  and so on and on and on. AI never stops needing data. Not where fickle humans are concerned. 

Welcome to a world without work
Automation and globalisation are combining to generate a world with a surfeit of labour and too little work

Excerpt:

A new age is dawning. Whether it is a wonderful one or a terrible one remains to be seen. Look around and the signs of dizzying technological progress are difficult to miss. Driverless cars and drones, not long ago the stuff of science fiction, are now oddities that can occasionally be spotted in the wild and which will soon be a commonplace in cities around the world.

 

From DSC:
I don’t see a world without work being good for us in the least. I think we humans need to feel that we are contributing to something. We need a purpose for living out our days here on Earth (even though they are but a vapor). We need vision…goals to work towards as we seek to use the gifts, abilities, passions, and interests that the LORD gave to us. The author of the above article would also add that work:

  • Is a source of personal identity
  • Helps give structure to our days and our lives
  • Offers the possibility of personal fulfillment that comes from being of use to others
  • Is a critical part of the glue that holds society together and smooths its operation

Over the last generation, work has become ever less effective at performing these roles. That, in turn, has placed pressure on government services and budgets, contributing to a more poisonous and less generous politics. Meanwhile, the march of technological progress continues, adding to the strain.

10 breakthrough technologies for 2016 — from technologyreview.com

Excerpts:

Immune Engineering
Genetically engineered immune cells are saving the lives of cancer patients. That may be just the start.

Precise Gene Editing in Plants
CRISPR offers an easy, exact way to alter genes to create traits such as disease resistance and drought tolerance.

Conversational Interfaces
Powerful speech technology from China’s leading Internet company makes it much easier to use a smartphone.

Reusable Rockets
Rockets typically are destroyed on their maiden voyage. But now they can make an upright landing and be refueled for another trip, setting the stage for a new era in spaceflight.

Robots That Teach Each Other
What if robots could figure out more things on their own and share that knowledge among themselves?

DNA App Store
An online store for information about your genes will make it cheap and easy to learn more about your health risks and predispositions.

SolarCity’s Gigafactory
A $750 million solar facility in Buffalo will produce a gigawatt of high-efficiency solar panels per year and make the technology far more attractive to homeowners.

Slack
A service built for the era of mobile phones and short text messages is changing the workplace.

Tesla Autopilot
The electric-vehicle maker sent its cars a software update that suddenly made autonomous driving a reality.

Power from the Air
Internet devices powered by Wi-Fi and other telecommunications signals will make small computers and sensors more pervasive.

The 4 big ethical questions of the Fourth Industrial Revolution — from 3tags.org by the World Economic Forum

Excerpts:

We live in an age of transformative scientific powers, capable of changing the very nature of the human species and radically remaking the planet itself.

Advances in information technologies and artificial intelligence are combining with advances in the biological sciences; including genetics, reproductive technologies, neuroscience, synthetic biology; as well as advances in the physical sciences to create breathtaking synergies — now recognized as the Fourth Industrial Revolution.

Since these technologies will ultimately decide so much of our future, it is deeply irresponsible not to consider together whether and how to deploy them. Thankfully there is growing global recognition of the need for governance.

 
Scientists create live animals from artificial eggs in ‘remarkable’ breakthrough — from telegraph.co.uk by Sarah Knapton

Robot babies from Japan raise questions about how parents bond with AI — from singularityhub.com by Mark Robert Anderson

Excerpt:

This then leads to the ethical implications of using robots. Embracing a number of areas of research, robot ethics considers whether the use of a device within a particular field is acceptable and also whether the device itself is behaving ethically. When it comes to robot babies there are already a number of issues that are apparent. Should “parents” be allowed to choose the features of their robot, for example? How might parents be counseled when returning their robot baby? And will that baby be used again in the same form?

 
Amazon’s Vision of the Future Involves Cops Commanding Tiny Drone ‘Assistants’ — from gizmodo.com by Hudson Hongo

DARPA’s Autonomous Ship Is Patrolling the Seas with a Parasailing Radar — from technologyreview.com by Jamie Condliffe
Forget self-driving cars—this is the robotic technology that the military wants to use.

China’s policing robot: Cattle prod meets supercomputer — from computerworld.com by Patrick Thibodeau
China’s fastest supercomputers have some clear goals, namely development of its artificial intelligence, robotics industries and military capability, says the U.S.

Report examines China’s expansion into unmanned industrial, service, and military robotics systems

Augmented Reality Glasses Are Coming To The Battlefield — from popsci.com by Andrew Rosenblum
Marines will control a head-up display with a gun-mounted mouse

———-

Addendum on 12/2/16:

Regulation of the Internet of Things — from schneier.com by Bruce Schneier

Excerpt:

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam —­ or thermostat, or refrigerator ­— with nice features at a good price. Even after they were recruited into this botnet, they still work fine ­— you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

 
Tech for change – from jwtintelligence.com by Jade Perry

Excerpt (emphasis DSC):

At this year’s WIRED conference, technology entrepreneurs were dialed in to how they might tackle world issues.

Over two days of sessions in London, speakers celebrated technology’s latest developments in fields ranging from healthcare to finance, from energy to art. Throughout the event, the notion of ‘humane tech’ emerged to describe the ways in which technology is being used to improve the state of the world. Increasingly, technology startups are harnessing the latest advances, including virtual reality and mobile technologies, to solve societal problems and tackle real issues.

Alexandra Ivanovitch, founder of Simorga, presented the company’s mission to develop VR apps that combat prejudice. The work follows research from BeAnotherLab which demonstrated that racial and gender biases can be reduced using virtual reality. When a user experiences the world as someone else, essentially swapping their body for a different one, empathy increases and bias decreases. Stanford University’s Virtual Human Interaction Lab has created a similar experience in which participants encounter racism while embodying someone else. The project’s mission is reminiscent of Sandy Speaks, the chatbot that uses artificial intelligence to educate people about the Black Lives Matter movement.

In a world where 26% of countries are described as ‘not free,’ technology is increasingly coming into play to empower citizens.

Another human rights issue that technology can seek to tackle is access to education. With approximately 250 million children globally failing to learn even the basics, there is scope for teaching through a range of technologies.

In addition to social issues and human rights, technology is being used on a broader scale to fight climate change…

Renewable energies that transform the everyday were another key focus.

 