Cameras are Watching and Machines are Learning: The Beginning — from medium.com by Brian Brackeen
You better believe their eyes

This is a new series about cameras and their relationship to face recognition and machine learning, and about how, in the future, the ways in which we interact with technology will be radically different.

Excerpt (emphasis DSC):

First, the data.
LDV Capital, a venture capital firm focused on Visual Technologies, recently published a 19-page report thick with some pretty eye-opening data around cameras.

Specifically, how many cameras we can expect to have watching us, what they are watching us for, and how those insights will be used.

According to their study, by 2022 there will be more than 44,354,881,622 (that’s 44 BILLION) cameras in use globally, collecting billions more images for visual collateral. This is incredible — but what’s interesting is that most of these images will never be seen by human eyes.

 

 

 

From DSC:
Though the author asserts there will be great business opportunities surrounding this trend, I’m not sure that I’m comfortable with it. Embedded cameras everywhere…hmmm…what he calls a privilege (in the quote below), I see as an overstepping of boundaries.

We have the privilege of experiencing the actual evolution of a device that we have come to know as one thing, for all of our lives to this point, into something completely different, to the extent that the word “camera”, itself, is becoming outdated.

How do you feel about this trend?

 

 

 

The era of easily faked, AI-generated photos is quickly emerging — from qz.com by Dave Gershgorn

Excerpt (emphasis DSC):

Until this month, it seemed that GAN-generated images [where GAN stands for “generative adversarial networks”] that could fool a human viewer were years off. But last week research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers—a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.
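To make the idea of “GAN-generated” images a bit more concrete, below is a minimal sketch of the adversarial training loop the technique is named for. To stay runnable, it teaches a tiny generator to mimic a 1-D Gaussian rather than celebrity faces; the network sizes, learning rates, and target distribution are illustrative assumptions, not Nvidia’s setup.

```python
# Toy GAN: a generator learns to mimic samples from N(3, 0.5) while a
# discriminator learns to tell real samples from generated ones.
# All sizes and rates below are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = torch.randn(64, 1) * 0.5 + 3.0     # "real" data: N(3, 0.5)
    fake = G(torch.randn(64, 8))              # generated data

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f}")
```

The core dynamic is the same at any scale: the generator only ever improves by fooling a discriminator that is simultaneously learning to catch it.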

 

From DSC:
So AI can now generate realistic photos (i.e., image creation/manipulation). And then there’s Adobe’s VoCo project, a sort of Photoshop for audio manipulation, plus other related technologies out there.

 

So I guess it’s like the first article concludes:

The era of easily-faked photos is quickly emerging—much as it did when Photoshop became widely prevalent—so it’s a good time to remember we shouldn’t trust everything we see.

…and perhaps we’ll need to add, “we shouldn’t trust everything we hear either.” But how will the average person with average tools know the real deal? The concept of watermarking visuals/audio may become increasingly important. From the ending of a bbc.com article:

For its part, Adobe has talked of its customers using Voco to fix podcast and audio book recordings without having to rebook presenters or voiceover artists.

But a spokeswoman stressed that this did not mean its release was imminent.

“[It] may or may not be released as a product or product feature,” she told the BBC.

“No ship date has been announced.”

In the meantime, Adobe said it was researching ways to detect use of its software.

“Think about watermarking detection,” Mr Jin said at the demo, referring to a method used to hide identifiers in images and other media.
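To make the watermarking idea concrete, here is a minimal sketch of the hide-an-identifier principle: embed a short tag in the least-significant bits of an image’s pixels, then read it back later. Production watermarking schemes are far more robust to compression, cropping, and tampering; the tag text and the random stand-in image below are purely illustrative.

```python
# Minimal least-significant-bit (LSB) watermark: hide a short ASCII tag in
# the low bits of an image's pixels, then recover it. Illustration only;
# real schemes survive compression and editing, this one does not.
import numpy as np

def embed(pixels: np.ndarray, tag: str) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    flat = pixels.flatten()                                   # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits     # overwrite LSBs with tag bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_chars: int) -> str:
    bits = pixels.flatten()[: n_chars * 8] & 1                # read the LSBs back
    return np.packbits(bits).tobytes().decode("ascii")

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in image
marked = embed(image, "edited-by-voco")
print(extract(marked, len("edited-by-voco")))                 # -> 'edited-by-voco'
```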

 

But again, we see that technology often races ahead. “Look at what we can do!” Then the rest of society — developing laws and policies, asking whether we should roll out such technologies at all — needs time to catch up. Morals and ethics come into play here, as trust levels are most assuredly at stake.

Another relevant article/topic/example is listed below. (I’m not trying to say that we shouldn’t pursue self-driving cars. Rather, the topic serves as another example of technologies racing ahead while the rest of society takes a while to catch up.)

 

 

 

Artificial Intelligence in Education: Where It’s At, Where It’s Headed — from gettingsmart.com by Cameron Paterson

Excerpt:

Artificial intelligence is predicted to fundamentally alter the nature of society by 2040. Investment in AI start-ups was estimated at US$6–9 billion in 2016, up from US$415 million four years earlier. While futurist Ray Kurzweil argues that AI will help us to address the grand challenges facing humanity, Elon Musk warns us that artificial intelligence will be our “biggest existential threat.” Others argue that artificial intelligence is the future of growth. Everything depends on how we manage the transition to this AI era.

In 2016 the Obama administration released a national strategic plan for artificial intelligence and, while we do not all suddenly need a plan for artificial intelligence, we do need to stay up to date on how AI is being implemented. Much of AI’s potential is yet to be realized, but AI is already running our lives, from Siri to Netflix recommendations to automated air traffic control. We all need to become more aware of how we are algorithmically shaped by our tools.

This Australian discussion paper on the implications of AI, automation and 21st-century skills shows how AI will not just affect blue-collar truck drivers and cleaners; it will also affect white-collar lawyers and doctors. Automated pharmacy systems with robots dispensing medication exist, Domino’s pizza delivery by drone has already occurred, and a fully automated farm is opening in Japan.

 

Education reformers need to plan for our AI-driven future and its implications for education, both in schools and beyond. The never-ending debate about the sorts of skills needed in the future and the role of schools in teaching and assessing them is becoming a whole lot more urgent and intense.

 

 

 

AI Experts Want to End ‘Black Box’ Algorithms in Government — from wired.com by Tom Simonite

Excerpt:

The right to due process was inscribed into the US Constitution with a pen. A new report from leading researchers in artificial intelligence cautions it is now being undermined by computer code.

Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.

The AI Now report calls for agencies to refrain from what it calls “black box” systems opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems making decisions about them operate and have been tested or validated. Such systems are expected to get more complex as technologies such as machine learning used by tech companies become more widely available.

“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.
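Crawford’s suggestion (disclose how a system performs without disclosing its code) is easy to picture in practice. Here is a minimal sketch with a stand-in model and synthetic case records: the agency keeps the model private but can still publish the validation report.

```python
# Sketch: validate a decision system and publish its performance without
# releasing the model itself. The model and "case records" are stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)     # synthetic cases
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)      # the private "black box"

# The code and weights stay private; this validation report can be public.
print(classification_report(y_te, model.predict(X_te)))
```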

 

 

UAE appoints first-ever Minister for Artificial Intelligence — from tribune.com.pk

 

“We announce the appointment of a minister for artificial intelligence. The next global wave is artificial intelligence and we want the UAE to be more prepared for it.”

 

 

Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent — from nytimes.com by Cade Metz
Nearly all big tech companies have an artificial intelligence project, and they are willing to pay experts millions of dollars to help get it done.

Excerpt:

Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

With so few A.I. specialists available, big tech companies are also hiring the best and brightest of academia. In the process, they are limiting the number of professors who can teach the technology.

 

 

 

Where will AI play? By Mike Quindazzi.

 

 

 

 

10 really hard decisions coming our way — from gettingsmart.com by Tom Vander Ark

Excerpt (emphasis DSC):

Things are about to get interesting. You’ve likely heard that Google’s DeepMind recently beat the world’s best Go player. But in far more practical and pervasive ways, artificial intelligence (AI) is creeping into every aspect of life–every screen you view, every search, every purchase, and every customer service contact.

What’s happening? It’s the confluence of several technologies–Moore’s law made storage, computing, and access devices almost free.

This Venn diagram illustrates how deep learning is a subset of AI and how, when combined with big data, it can inform enabling technologies in many sectors. For example, to AI and big data add:

  • Robotics, and you have industry 4.0.
  • Cameras and sensor package, and you have self-driving cars.
  • Sensors and bioinformatic maps, and you have precision medicine.

While there is lots of good news here–diseases will be eradicated and clean energy will be produced–we have a problem: this stuff is moving faster than civic infrastructure can handle. Innovation is outpacing public policy on all fronts. The following are 10 examples of issues coming at us fast that we (in the US in particular) are not ready to deal with.

  1. Unemployment.
  2. Income inequality.
  3. Privacy
  4. Algorithmic bias.
  5. Access.
  6. Machine ethics. 
  7. Weaponization. 
  8. Humanity. 
  9. Genome editing.
  10. Bad AI.

 


From DSC:
Readers of this blog will know that I’m big on pulse-checking the pace of technological change — because it has enormous ramifications for societies throughout the globe, as well as for individuals, workforces, corporations, jobs, education, training, higher education and more. You will also again hear me say that the pace of change has changed. We’re now on an exponential pace/trajectory (vs. a slow, steady, linear path).

“Innovation is outpacing public policy on all fronts.”

How true this is. Our society doesn’t know how to deal with this new pace of change. How shall we tackle this thorny issue?

 


 

 

 

 

From DSC:
I know Quentin Schultze from our years working together at Calvin College, in Grand Rapids, Michigan (USA). I have come to greatly appreciate Quin as a person of faith, as an innovative/entrepreneurial professor, as a mentor to his former students, and as an excellent communicator. 

Quin has written a very concise, wisdom-packed book that I would like to recommend to those people who are seeking to be better communicators, leaders, and servants. But I would especially like to recommend this book to the leadership at Google, Amazon, Apple, Microsoft, IBM, Facebook, Nvidia, the major companies developing robots, and other high-tech companies. Why do I list these organizations? Because given the exponential pace of technological change, these organizations — and their leaders — have an enormous responsibility to make sure that the technologies they are developing result in positive changes for societies throughout the globe. They need wisdom, especially as they are working on emerging technologies such as Artificial Intelligence (AI), personal assistants and bots, algorithms, robotics, the Internet of Things, big data, blockchain and more. These technologies continue to exert an increasingly powerful influence on numerous societies throughout the globe today. And we haven’t seen anything yet! Just because we can develop and implement something doesn’t mean that we should. Again, we need wisdom here.

But as Quin states, it’s not just about knowledge, the mind and our thoughts. It’s about our hearts as well. That is, we need leaders who care about others, who can listen well to others, who can serve others well while avoiding gimmicks, embracing diversity, building trust, fostering compromise and developing/exhibiting many of the other qualities that Quin writes about in his book. Our societies desperately need leaders who care about others and who seek to serve others well.

I highly recommend you pick up a copy of Quin’s book. There are few people who can communicate as much in as few words as Quin can. In fact, I wish that more writing on the web and more articles/research coming out of academia would be as concisely and powerfully written as Quin’s book, Communicate Like a True Leader: 30 Days of Life-Changing Wisdom.

 

 

To lead is to accept responsibility and act responsibly.
Quentin Schultze

 

 

 

 

“An algorithm designed badly can go on for a long time, silently wreaking havoc.”

— Cathy O’Neil

 

 

 

Cathy O’Neil: The era of blind faith in big data must end | TED Talk | TED.com

Description:
Algorithms decide who gets a loan, who gets a job interview, who gets insurance and much more — but they don’t automatically make things fair. Mathematician and data scientist Cathy O’Neil coined a term for algorithms that are secret, important and harmful: “weapons of math destruction.” Learn more about the hidden agendas behind the formulas.

 

 

 



Addendum:

As AI Gets Smarter, Scholars Raise Ethics Questions — by Chris Hayhurst
Interdisciplinary artificial intelligence research fosters philosophical discussions.

Excerpt (emphasis DSC):

David Danks, head of the philosophy department at Carnegie Mellon University, has a message for his colleagues in the CMU robotics department: As they invent and develop the technologies of the future, he encourages them to consider the human dimensions of their work.

His concern? All too often, Danks says, technological innovation ignores the human need for ethical guidelines and moral standards. That’s especially true when it comes to innovations such as artificial intelligence and automation, he says.

“It’s, ‘Look at this cool technology that we’ve got. How can you stand in the way of something like this?’” says Danks. “We should be saying, ‘Wait a second. How is this technology affecting people?’”

As an example, Danks points to AI-powered medical diagnostic systems. Such tools have great potential to parse data for better decision-making, but they lack the social interaction between patient and physician that can be so important to those decisions. It’s one thing to have a technology that can diagnose a patient with strep throat and recommend a certain antibiotic, but what about a patient with cancer who happens to be a professional violinist?

“For most people, you’d just give them the most effective drug,” says Danks. “But what do you do if one of the side effects of that medication is hand tremors? I see a lot of possibilities with AI, but it’s also important to recognize the challenges.”



 

 

From DSC:
I’ve been thinking about Applicant Tracking Systems (ATSs) for a while now, but the article below made me revisit my reflections on them. (By the way, my thoughts below are not meant to be a slam on Google. I like Google and I use their tools daily.) I’ve included a few items below; I had also seen other articles and vendors’ products that focused specifically on ATSs, but I couldn’t locate them all.

 

How Google’s AI-Powered Job Search Will Impact Companies And Job Seekers — from forbes.com by Forbes Coaches Council

Excerpt:

In mid-June, Google announced the implementation of an AI-powered search function aimed at connecting job seekers with jobs by sorting through posted recruitment information. The system allows users to search for basic phrases, such as “jobs near me,” or perform searches for industry-specific keywords. The search results can include reviews from Glassdoor or other companies, along with the details of what skills the hiring company is looking to acquire.

As this is a relatively new development, what the system will mean is still an open question. To help, members from the Forbes Coaches Council offer their analysis on how the search system will impact candidates or companies. Here’s what they said…

 

 

5. Expect competition to increase.
Google jumping into the job search market may make it easier than ever to apply for a role online. For companies, this could tax the already-strained ATS systems and, unless fixed, could mean many more resumes falling into that “black hole.” For candidates, competition might be steeper than ever, which means networking will be even more important to job search success. – Virginia Franco

 

 

10. Understanding keywords and trending topics will be essential.
Since Google’s AI is based on crowd-gathered metrics, the importance of keywords and understanding trending topics is essential for both employers and candidates. Standing out from the crowd or getting relevant results will be determined by how well you speak the expected language of the AI. Optimizing for the search engine’s results pages will make or break your search for a job or candidate. – Maurice Evans, IGROWyourBiz, Inc 

 

 

Also see:

In Unilever’s radical hiring experiment, resumes are out, algorithms are in — from foxbusiness.com by Kelsey Gee 

Excerpt:

Before then, 21-year-old Ms. Jaffer had filled out a job application, played a set of online games and submitted videos of herself responding to questions about how she’d tackle challenges of the job. The reason she found herself in front of a hiring manager? A series of algorithms recommended her.

 

 

The Future of HR: Is it Dying? — from hrtechnologist.com by Rhucha Kulkarni

Excerpt (emphasis DSC):

The debate is on whether man or machine will win the race, as they are pitted against each other in every walk of life. Experts are already worried about the social disruption that is inevitable as artificial intelligence (AI)-led robots take over the jobs of human beings, leaving them without livelihoods. The same is expected to happen to the HR profession, says a report by CareerBuilder. HR jobs are under threat, like all other jobs out there, as we can expect certain roles in talent acquisition, talent management, and mainstream business to be automated over the next 10 years. To delve deeper into the imminent problem, CareerBuilder carried out a study of 719 HR professionals in the private sector, specifically looking at the rate of adoption of emerging technologies in HR and at how HR professionals perceive it.

The change is happening for real, though different companies are adopting technologies at varied paces. Most companies are turning to the new-age technologies to help carry out talent acquisition and management tasks that are time-consuming and labor-intensive.

 

 

 

From DSC:
Are you aware that if you apply for a job at many organizations nowadays, your resume has a significant chance of never making it in front of a human’s eyes for review? Were you aware that an Applicant Tracking System (an ATS) will likely siphon off and filter out your resume unless it contains exactly the right keywords, mentioned the optimal number of times?

And were you aware that many advisors assert that you should use a one-page resume — a two-page resume at most? Well…assuming that you have to edit heavily to get down to one or two pages, how does that editing help you get past the ATSs out there? When you significantly reduce your resume’s size and information, you cut out numerous words that the ATS may be scanning for. (BTW, advisors recommend creating a Wordle from the job description to ascertain the likely keywords; the sketch below plays with this idea. But still, you don’t know which exact keywords the ATS will be looking for in your specific case/job application, or how many times to use them. Numerous words can be of similar size in the resulting Wordle graphic…so is that one- to two-page resume helping you or hurting you when you can only submit one resume for a position/organization?)
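Here is a toy sketch of both sides of that keyword game: estimating an ATS’s likely keywords by counting terms in the posting (the Wordle approach), then scoring a resume by exact matches. The posting, resume, and stopword list are invented, and real ATSs vary widely; the point is how blunt exact-match filtering is, and how trimming words from a resume directly lowers its score.

```python
# Toy ATS keyword game: guess the likely keywords from the posting's word
# frequencies (the Wordle approach), then score a resume by exact matches.
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "an", "to", "of", "in", "for", "with", "on", "is", "will"}

def top_keywords(posting: str, n: int = 8) -> list[str]:
    words = re.findall(r"[a-z]+", posting.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]

def ats_score(resume: str, keywords: list[str]) -> float:
    resume_words = set(re.findall(r"[a-z]+", resume.lower()))
    return sum(k in resume_words for k in keywords) / len(keywords)

posting = ("Seeking a data analyst. Data analysis with SQL and Python required; "
           "the analyst will build data dashboards and data reports.")
resume = "Analyst experienced in SQL reporting, Python scripting, and dashboards."

kws = top_keywords(posting)
print(kws)                                           # our guess at the ATS's keywords
print(f"exact-match score: {ats_score(resume, kws):.0%}")
# Note "reports" vs "reporting": exact matching misses variants, and cutting
# words to fit a one-page resume removes matches and lowers this score.
```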

Vendors are hailing these ATSs as major productivity boosters for HR departments…and that might be true in some cases. But my question is, at what cost?

At this point in time, I still believe that humans are better than software/algorithms at making judgment calls. Perhaps I’m giving hiring managers too much credit, but I’d rather have a human being make the call. I want a pair of human eyes to scan my resume, not a (potentially) narrowly defined algorithm. A human being might see transferable skills better than a piece of code can.

Just so you know…in light of these keyword-based means of passing through the first layer of filtering, people are now playing games with their resumes and are often stretching the truth — if not outright lying:

 

85 Percent of Job Applicants Lie on Resumes. Here’s How to Spot a Dishonest Candidate — from inc.com by J.T. O’Donnell
A new study shows huge increase in lies on job applications.

Excerpt (emphasis DSC):

Employer Applicant Tracking Systems Expect an Exact Match
Most companies use some form of applicant tracking system (ATS) to take in résumés, sort through them, and narrow down the applicant pool. With the average job posting getting more than 100 applicants, recruiters don’t want to go bleary-eyed sorting through them. Instead, they let the ATS do the dirty work by telling it to pass along only the résumés that match their specific requirements for things like college degrees, years of experience, and salary expectations. The result? Job seekers have gotten wise to the finicky nature of the technology and are lying on their résumés and applications in hopes of making the cut.

 

From DSC:
I don’t see this as being very helpful. But perhaps that’s because I don’t like playing games with people and/or with other organizations. I’m not a game player. What you see is what you get. I’ll be honest and transparent about what I can — and can’t — deliver.

But students, you should know that these systems are in place. Those of us in higher education should know about them as well, since many of us are being negatively impacted by the current landscape within higher education.

 

 

Also see:

Why Your Approach To Job Searching Is Failing — from forbes.com by Jeanna McGinnis

Excerpt:

Is Your Resume ATS Friendly?
Did you know that an ATS (applicant tracking system) will play a major role in whether or not your resume is selected for further review when you’re applying to opportunities through online job boards?

It’s true. When you apply to a position a company has posted online, a human usually isn’t the first to review your resume, a computer program is. Scouring your resume for keywords, terminology and phrases the hiring manager is targeting, the program will toss your resume if it can’t understand the content it’s reading. Basically, your resume doesn’t stand a chance of making it to the next level if it isn’t optimized for ATS.

To ensure your resume makes it past the evil eye of ATS, format your resume correctly for applicant tracking programs, target it to the opportunity and check for spelling errors. If you don’t, you’re wasting your time applying online.

 

Con Job: Hackers Target Millennials Looking for Work — from wsj.com by Kelsey Gee
Employment scams pose a growing threat as applications and interviews become more digital

Excerpt:

Hackers attempt to hook tens of thousands of people like Mr. Latif through job scams each year, according to U.S. Federal Trade Commission data, aiming to trick them into handing over personal or sensitive information, or to gain access to their corporate networks.

Employment fraud is nothing new, but as more companies shift to entirely-digital job application processes, Better Business Bureau director of communications Katherine Hutt said scams targeting job seekers pose a growing threat. Job candidates are now routinely invited to fill out applications, complete skill evaluations and interview—all on their smartphones, as employers seek to cast a wider net for applicants and improve the matchmaking process for entry-level hires.

Young people are a frequent target. Of the nearly 3,800 complaints the nonprofit has received from U.S. consumers on its scam report tracker in the past two years, people under 34 years old were the most susceptible to such scams, which frequently offer jobs requiring little to no prior experience, Ms. Hutt said.

 

 

Hackers are finding new ways to prey on young job seekers.

 

 

 

A leading Silicon Valley engineer explains why every tech worker needs a humanities education — from qz.com by Tracy Chou

Excerpts:

I was no longer operating in a world circumscribed by lesson plans, problem sets and programming assignments, and intended course outcomes. I also wasn’t coding to specs, because there were no specs. As my teammates and I were building the product, we were also simultaneously defining what it should be, whom it would serve, what behaviors we wanted to incentivize amongst our users, what kind of community it would become, and what kind of value we hoped to create in the world.

I still loved immersing myself in code and falling into a state of flow—those hours-long intensive coding sessions where I could put everything else aside and focus solely on the engineering tasks at hand. But I also came to realize that such disengagement from reality and societal context could only be temporary.

At Quora, and later at Pinterest, I also worked on the algorithms powering their respective homefeeds: the streams of content presented to users upon initial login, the default views we pushed to users. It seems simple enough to want to show users “good” content when they open up an app. But what makes for good content? Is the goal to help users to discover new ideas and expand their intellectual and creative horizons? To show them exactly the sort of content that they know they already like? Or, most easily measurable, to show them the content they’re most likely to click on and share, and that will make them spend the most time on the service?
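Chou’s homefeed question is ultimately a question about which objective function you rank by. A tiny sketch of that design choice, with invented scores standing in for a real engagement model’s predictions:

```python
# Same three feed items ranked under two definitions of "good content".
# Scores are invented stand-ins for a real engagement model's predictions.
items = [
    {"title": "Outrage bait",    "p_click": 0.90, "novelty": 0.10},
    {"title": "Familiar hobby",  "p_click": 0.60, "novelty": 0.30},
    {"title": "New field intro", "p_click": 0.35, "novelty": 0.95},
]

# Objective 1: maximize clicks (the "most easily measurable" goal).
by_engagement = sorted(items, key=lambda i: i["p_click"], reverse=True)

# Objective 2: blend in novelty to reward horizon-expanding content
# (the 0.5/0.5 weighting is an arbitrary design choice).
by_discovery = sorted(items, key=lambda i: 0.5 * i["p_click"] + 0.5 * i["novelty"],
                      reverse=True)

print([i["title"] for i in by_engagement])   # ['Outrage bait', ...]
print([i["title"] for i in by_discovery])    # ['New field intro', ...]
```

Same items, different definition of “good,” different feed.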

 

Ruefully—and with some embarrassment at my younger self’s condescending attitude toward the humanities—I now wish that I had strived for a proper liberal arts education. That I’d learned how to think critically about the world we live in and how to engage with it. That I’d absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I’d even realized that these were worthwhile thoughts to fill my mind with—that all of my engineering work would be contextualized by such subjects.

It worries me that so many of the builders of technology today are people like me; people who haven’t spent anywhere near enough time thinking about these larger questions of what it is that we are building, and what the implications are for the world.

 

 


Also see:


 

Why We Need the Liberal Arts in Technology’s Age of Distraction — from time.com by Tim Bajarin

Excerpt:

In a recent Harvard Business Review piece titled “Liberal Arts in the Data Age,” author JM Olejarz writes about the importance of reconnecting a lateral, liberal arts mindset with the sort of rote engineering approach that can lead to myopic creativity. Today’s engineers have been so focused on creating new technologies that their short-term goals risk obscuring unintended long-term outcomes. While a few companies, say Intel, are forward-thinking enough to include ethics professionals on staff, they remain exceptions. At this point, all tech companies serious about ethical grounding need to be hiring folks with backgrounds in areas like anthropology, psychology and philosophy.

 

 

 

 

The Internet’s future is more fragile than ever, says one of its inventors — from fastcompany.com by Sean Captain
Vint Cerf, the co-creator of tech that makes the internet work, worries about hacking, fake news, autonomous software, and perishable digital history.

Excerpts:

The term “digital literacy” is often referred to as if you can use a spreadsheet or a text editor. But I think digital literacy is closer to looking both ways before you cross the street. It’s a warning to think about what you’re seeing, what you’re hearing, what you’re doing, and thinking critically about what to accept and reject . . . Because in the absence of this kind of critical thinking, it’s easy to see how the phenomena that we’re just now labeling fake news, alternative facts [can come about]. These [problems] are showing up, and they’re reinforced in social media.

What are the criteria that we should apply to devices that are animated by software, and which we rely upon without intervention? And this is the point where autonomous software becomes a concern, because we turn over functionality to a piece of code. And dramatic examples of that are self-driving cars . . . Basically you’re relying on software doing the right things, and if it doesn’t do the right thing, you have very little to say about it.

I feel like we’re moving into a kind of fragile future right now that we should be much more thoughtful about improving, that is to say making more robust.

 

 

Imagine a house that stops working when the internet connection goes away. That’s not acceptable.

 

 

 

 

AI will make forging anything entirely too easy — from wired.com by Greg Allen

Excerpt:

Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited.

But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies.

We have grown comfortable with a future in which analytics, big data, and machine learning help us to monitor reality and discern the truth. Far less attention has been paid to how these technologies can also help us to lie. Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. In the future, realistic-looking and -sounding fakes will constantly confront people. Awash in audio, video, images, and documents, many real but some fake, people will struggle to know whom and what to trust.

 

 

Also referenced in the above article:

 

 

 

 

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.
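One of the simplest tools in that “make deep learning understandable” toolbox is input-gradient saliency: asking which input features most influenced the network’s output. The model and input below are stand-ins, and saliency is only a partial window into a network, but it shows the flavor of the research.

```python
# Input-gradient saliency: which input features most influenced the output?
# The network and the input example are stand-ins for a real system.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(1, 10, requires_grad=True)   # one input example
model(x).sum().backward()                    # gradient of output w.r.t. input

saliency = x.grad.abs().squeeze()            # large value = influential feature
print("most influential feature index:", int(saliency.argmax()))
```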

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

 

 

 

Tech giants grapple with the ethical concerns raised by the AI boom — from technologyreview.com by Tom Simonite
As machines take over more decisions from humans, new questions about fairness, ethics, and morality arise.

Excerpt:

With great power comes great responsibility—and artificial-intelligence technology is getting much more powerful. Companies in the vanguard of developing and deploying machine learning and AI are now starting to talk openly about ethical challenges raised by their increasingly smart creations.

“We’re here at an inflection point for AI,” said Eric Horvitz, managing director of Microsoft Research, at MIT Technology Review’s EmTech conference this week. “We have an ethical imperative to harness AI to protect and preserve over time.”

Horvitz spoke alongside researchers from IBM and Google pondering similar issues. One shared concern was that recent advances are leading companies to put software in positions with very direct control over humans—for example in health care.

 

 

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

 

 

 


Recorded Saturday, February 25, 2017, and published on March 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

 

 

 

 

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen. The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

 

 


(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, note-taking, and even translation into other languages.

 

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let participants access features by voice. It makes sure that Watson doesn’t just listen, but is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in transcript.
  • Easy-to-join meetings: With the swipe of a badge, the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
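The translation feature in the list above follows a familiar pipeline pattern: transcribe the speech, translate the text, and display the result to each participant. Below is a hedged sketch of that shape; the endpoint URLs and JSON fields are hypothetical placeholders, not IBM’s actual Watson APIs.

```python
# Hypothetical sketch of a transcribe -> translate -> display pipeline.
# The base URL and JSON fields are placeholders, NOT IBM's real Watson API.
import requests

BASE = "https://example.com/watson"          # placeholder endpoint (assumption)

def transcribe(audio: bytes) -> str:
    r = requests.post(f"{BASE}/speech-to-text", data=audio)
    return r.json()["transcript"]            # hypothetical response field

def translate(text: str, target_lang: str) -> str:
    r = requests.post(f"{BASE}/translate", json={"text": text, "target": target_lang})
    return r.json()["translation"]           # hypothetical response field

def on_meeting_audio(chunk: bytes, languages: list[str]) -> dict[str, str]:
    """Return the live transcript rendered in each attendee's language."""
    text = transcribe(chunk)
    return {lang: translate(text, lang) for lang in languages}
```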

 


From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or working with others in the same physical space — you have one incredibly powerful platform! 

#NLP | #AI | #CognitiveComputing | #SmartClassrooms
#LearningSpaces | #Collaboration | #Meetings

 

 


 

 

 


 

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.

 

 

Amazon deepens university ties in artificial intelligence race — by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster — a tactic it is now extending to the classroom.
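For context on what “creating a skill” involves: a minimal Alexa skill is just a small request handler registered with the Alexa Skills Kit. Here is a sketch using the ask-sdk-core Python package; the intent name and response text are invented for illustration.

```python
# Minimal Alexa skill handler using the ask-sdk-core package. The intent
# name ("LibraryHoursIntent") and the reply are invented for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class LibraryHoursHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("LibraryHoursIntent")(handler_input)

    def handle(self, handler_input):
        return handler_input.response_builder.speak(
            "The library is open until midnight tonight."
        ).response

sb = SkillBuilder()
sb.add_request_handler(LibraryHoursHandler())
handler = sb.lambda_handler()   # entry point when deployed on AWS Lambda
```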

 

 

The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

 

 


Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation, and are investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea and Japan collectively account for nearly half of the world’s shipments of articulated robots.

     

 

Artificial Intelligence – Research Areas

 

 

 

 

 

 

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

 

 

 

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

 

 

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 

 

 