[Report: Future Proof Yourself — Microsoft & The Future Laboratory, August 2016]

 

Future proof yourselves — from Microsoft & The Future Laboratory

Excerpt (emphasis DSC):

Executive Summary
Explore the world of work in 2025 in a revealing, evidence-based report by future consultants The Future Laboratory and Microsoft, which identifies and investigates ten exciting, inspiring and astounding jobs for the graduates of tomorrow – jobs that don’t exist yet.

Introduction
Tomorrow’s university graduates will be taking a journey into the professional unknown guided by a single, mind-blowing statistic: 65% of today’s students will be doing jobs that don’t even exist yet.

Technological change, economic turbulence and societal transformation are disrupting old career certainties, and it is increasingly difficult to judge which degrees and qualifications will be a passport to a well-paid and fulfilling job in the decades ahead.

A new wave of automation, with the advent of true artificial intelligence, robots and driverless cars, threatens the future of traditional jobs, from truck drivers to lawyers and bankers.

But, by 2025, this same technological revolution will open up inspiring and exciting new career opportunities in sectors that are only in their infancy today.

The trick for graduates is to start developing the necessary skills today to future-proof their careers.

This report by future consultants The Future Laboratory attempts to show them how to do just that in a research collaboration with Microsoft, whose Surface technology deploys the precision and versatility of pen and touch to power creative industries ranging from graphic design and photography to architecture and engineering.

In this study, we use extensive desk research and in-depth interviews with technologists, academics, industry commentators and analysts to unveil 10 new creative job categories that will be recruiting tomorrow’s university students.

These future jobs demonstrate a whole new world of potential applications for the technology of today, as we design astonishing virtual habitats and cure deadly diseases from the comfort of our own sofas. It is a world that will need a new approach to training and career planning.

Welcome to tomorrow’s jobs…

 

 

65% of today’s students will be doing jobs that don’t even exist yet.

 

 

One of the jobs mentioned was the Ethical Technology Advocate — check out this video clip:

[Video: Ethical Technology Advocate — Microsoft, August 2016]

 

“Over the next decade, the long-awaited era of robots will dawn and become part of everyday life. It will be important to set out the moral and ethical rules under which they operate…”

 

 

 

 

IBM made a ‘crash course’ for the White House, and it’ll teach you all the AI basics — from futurism.com by Ramon Perez

Summary:

With the current AI revolution comes a flock of skeptics. Alarmed by what AI could become in the near future, the White House released a Notice of Request for Information (RFI) on the subject. In response, IBM created what amounts to an AI 101, giving a good sense of the current state, future, and risks of AI.

 

 

Also see:

 

[Federal government Request for Information on artificial intelligence, June 2016]

 

 

 

Gartner reveals the top 3 emerging technologies from 2016 — from information-age.com by Nicholas Ismail
Technology is advancing at such a rapid rate that businesses are almost being forced to embrace emerging technologies in order to stay competitive

Excerpt:

Emerging technologies are fast becoming the tools with the highest priority for organisations facing rapidly accelerating digital business innovation.

Gartner’s Hype Cycle for Emerging Technologies, 2016 has selected three distinct technology trends – out of 2,000 – that organisations should track and begin to implement in order to stay competitive.

Their selection was based on what technologies will have the most impact and lead to the most competitive advantage, while establishing when these big technologies are going to mature (early stage or saturating).

Gartner’s research director Mike Walker said the hype cycle specifically focuses on the set of technologies that are showing promise in delivering a high degree of competitive advantage over the next five to ten years.

Information Age spoke to Mike Walker to gain a further insight into these three technologies, and their future business applications.

 

 

Smart machine technologies will be the most disruptive class of technologies over the next 10 years, including smart robots, autonomous cars and smart workspaces

 

 

 


From DSC:
The articles below demonstrate why the need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!



 


What Ethics Should Guide the Use of Robots in Policing? — from nytimes.com

 

 

11 Police Robots Patrolling Around the World — from wired.com

 

 

Police use of robot to kill Dallas shooting suspect is new, but not without precursors — from techcrunch.com

 

 

What skills will human workers need when robots take over? A new algorithm would let the machines decide — from qz.com

 

 

The impact on jobs | Automation and anxiety | Will smarter machines cause mass unemployment? — from economist.com

 

 

 

 

VRTO Spearheads Code of Ethics on Human Augmentation — from vrfocus.com
A code of ethics is being developed for both VR and AR industries.

 

 

 

Google and Microsoft Want Every Company to Scrutinize You with AI — from technologyreview.com by Tom Simonite
The tech giants are eager to rent out their AI breakthroughs to other companies.

 

 

U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities — from pewinternet.org by Cary Funk, Brian Kennedy and Elizabeth Podrebarac Sciupac
Americans are more worried than enthusiastic about using gene editing, brain chip implants and synthetic blood to change human capabilities

 

 

Human Enhancement — from pewinternet.org by David Masci
The Scientific and Ethical Dimensions of Striving for Perfection

 

 

Robolliance focuses on autonomous robotics for security and surveillance — from robohub.org by Kassie Perlongo

 

 

Company Unveils Plans to Grow War Drones from Chemicals — from interestingengineering.com

 

 

The Army’s Self-Driving Trucks Hit the Highway to Prepare for Battle — from wired.com

 

 

Russian robots will soon replace human soldiers — from interestingengineering.com

 

 

Unmanned combat robots beginning to appear — from therobotreport.com

 

 

Law-abiding robots? What should the legal status of robots be? — from robohub.org by Anders Sandberg

Excerpt:

News media are reporting that the EU is considering turning robots into electronic persons with rights, and apparently industry spokespeople are concerned that Brussels’ overzealousness could hinder innovation.

The report is far more sedate. It is a draft report, not a bill, with a mixed bag of recommendations to the Commission on Civil Law Rules on Robotics in the European Parliament. It will be years before anything is decided.

Nevertheless, it is interesting reading when considering how society should adapt to increasingly capable autonomous machines: what should the legal and moral status of robots be? How do we distribute responsibility?

A remarkable opening
The report begins its general principles with an eyebrow-raising paragraph:

whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;

It is remarkable because first it alludes to self-aware robots, presumably moral agents – a pretty extreme and currently distant possibility – then brings up Isaac Asimov’s famous but fictional laws of robotics and makes a simultaneously insightful and wrong-headed claim.

 

 

Robots are getting a sense of self-doubt — from popsci.com by Dave Gershgorn
Introspection is the key to growth

Excerpt:

That murmur is self-doubt, and its presence helps keep us alive. But robots don’t have this instinct—just look at the DARPA Robotics Challenge. But for robots and drones to exist in the real world, they need to realize their limits. We can’t have a robot flailing around in the darkness, or trying to bust through walls. In a new paper, researchers at Carnegie Mellon are working on giving robots introspection, or a sense of self-doubt. By predicting the likelihood of their own failure through artificial intelligence, robots could become a lot more thoughtful, and safer as well.
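The idea of robotic introspection described above can be sketched as a simple confidence check: before acting, the system estimates its own probability of failure for each option and abstains when even the best option is too risky. This is a minimal illustrative sketch, not the CMU researchers' actual method; the actions, probabilities, and threshold below are invented for illustration.

```python
# Illustrative sketch of machine "self-doubt": an agent estimates its own
# probability of failure for each candidate action and abstains (asks for
# help) when no action is confident enough.

def choose_action(actions, failure_prob, max_risk=0.2):
    """Return the safest action, or None if every action is too risky.

    actions      -- list of candidate action names
    failure_prob -- dict mapping action -> estimated probability of failure
    max_risk     -- abstain if even the best action exceeds this risk
    """
    best = min(actions, key=lambda a: failure_prob[a])
    if failure_prob[best] > max_risk:
        return None  # abstain: the robot "doubts" all of its options
    return best

estimates = {"open_door": 0.05, "climb_stairs": 0.35, "bust_wall": 0.99}
print(choose_action(list(estimates), estimates))                # open_door
print(choose_action(["climb_stairs", "bust_wall"], estimates))  # None
```

The abstention case is the interesting one: a robot that returns None instead of flailing around in the darkness or trying to bust through walls is exactly the safer behavior the paper is after.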

 

 

Scientists Create Successful Biohybrid Being Using 3-D Printing and Genetic Engineering — from inc.com by Lisa Calhoun
Scientists genetically engineered and 3-D-printed a biohybrid being, opening the door further for lifelike robots and artificial intelligence

Excerpt:

If you met this lab-created critter over your beach vacation, you’d swear you saw a baby ray. In fact, the tiny, flexible swimmer is the product of a team of diverse scientists. They have built the most successful artificial animal yet. This disruptive technology opens the door much wider for lifelike robots and artificial intelligence.

From DSC:
I don’t think I’d use the term disruptive here — though that may turn out to be the case.  The word disruptive doesn’t come close to carrying/relaying the weight and seriousness of this kind of activity; nor does it point out where this kind of thing could lead to.

 

 

Pokemon Go’s digital popularity is also warping real life — from finance.yahoo.com by Ryan Nakashima and David Hamilton

Excerpt (emphasis DSC):

Todd Richmond, a director at the Institute for Creative Technologies at the University of Southern California, says a big debate is brewing over who controls digital assets associated with real world property.

“This is the problem with technology adoption — we don’t have time to slowly dip our toe in the water,” he says. “Tenants have had no say, no input, and now they’re part of it.”

 

From DSC:
I greatly appreciate what Pokémon Go has been able to achieve, and although I haven’t played it, I think it’s great (great for AR, great for people’s health, great for the future of play, etc.)! So there are many positives to it. But the highlighted portion above is not something we want to find ourselves saying about artificial intelligence, cognitive computing, some types of genetic engineering, corporations tracking/using your personal medical information or data, the development of biased algorithms, etc.

 

 

Right now, artificial intelligence is the only thing that matters: Look around you — from forbes.com by Enrique Dans

Excerpts:

If there’s one thing the world’s most valuable companies agree on, it’s that their future success hinges on artificial intelligence.

In short, CEO Sundar Pichai wants to put artificial intelligence everywhere, and Google is marshaling its army of programmers into the task of remaking itself as a machine learning company from top to bottom.

Microsoft won’t be left behind this time. In a great interview a few days ago, its CEO, Satya Nadella, said he intends to overtake Google in the machine learning race, arguing that the company’s future depends on it, and outlining a vision in which human and machine intelligence work together to solve humanity’s problems. In other words, real value is created when robots work for people, not when they replace them.

And Facebook? The vision of its founder, Mark Zuckerberg, of the company’s future, is one in which artificial intelligence is all around us, carrying out or helping to carry out just about any task you can think of…

 

The links I have included in this column have been carefully chosen as recommended reading to support my firm conviction that machine learning and artificial intelligence are the keys to just about every aspect of life in the very near future: every sector, every business.

 

 

 

10 jobs that A.I. and chatbots are poised to eventually replace — from venturebeat.com by Felicia Schneiderhan

Excerpt:

If you’re a web designer, you’ve been warned.

Now there is an A.I. that can do your job. Customers can direct exactly how their new website should look. Fancy something more colorful? You got it. Less quirky and more professional? Done. This A.I. is still in a limited beta but it is coming. It’s called The Grid and it came out of nowhere. It makes you feel like you are interacting with a human counterpart. And it works.

Artificial intelligence has arrived. Time to sharpen up those resumes.

 

 

Augmented Humans: Next Great Frontier, or Battleground? — from nextgov.com by John Breeden

Excerpt:

It seems like, in general, technology always races ahead of the moral implications of using it. This seems to be true of everything from atomic power to sequencing genomes. Scientists often create something because they can, because there is a perceived need for it, or even by accident as a result of research. Only then does the public catch up and start to form an opinion on the issue.

Which brings us to the science of augmenting humans with technology, a process that has so far escaped the public scrutiny and opposition found with other radical sciences. Scientists are not taking any chances, with several yearly conferences already in place as a forum for scientists, futurists and others to discuss the process of human augmentation and the moral implications of the new science.

That said, it seems like those who would normally oppose something like this have remained largely silent.

 

 

Google Created Its Own Laws of Robotics — from fastcodesign.com by John Brownlee
Building robots that don’t harm humans is an incredibly complex challenge. Here are the rules guiding design at Google.

 

 

Google identifies five problems with artificial intelligence safety — from which-50.com

 

 

DARPA is giving $2 million to the person who creates an AI hacker — from futurism.com

 

 

 

[Image: Rolls-Royce, July 2016]

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part VI in a series of such postings that illustrate how quickly things are moving (Part I, Part II, Part III, Part IV, Part V) and that ask:

  • How do we collectively start talking about the future that we want?
  • How do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved in these discussions? Shouldn’t each one of us participate in some way, shape, or form?

 

 


 

Artificial Intelligence’s White Guy Problem — from nytimes.com by Kate Crawford

Excerpt:

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

 

 

Facebook is using artificial intelligence to categorize everything you write — from futurism.com

Excerpt:

Facebook has just revealed DeepText, a deep learning AI that will analyze everything you post or type and bring you closer to relevant content or Facebook services.

 

 

March of the machines — from economist.com
What history tells us about the future of artificial intelligence—and how society should respond

Excerpt:

EXPERTS warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.
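The "trained from data" idea in the excerpt above can be illustrated with a toy example, far simpler than the deep networks the article describes but built on the same principle: a model's parameters are adjusted whenever its prediction disagrees with a labeled example. The task (learning the logical AND function with a single perceptron) is invented for illustration.

```python
# Toy illustration of learning from data: a single artificial neuron
# (a perceptron) learns the logical AND function from labeled examples.
# "Deep" networks stack many such units in layers, but the principle is
# the same: nudge the weights whenever prediction and data disagree.

def train_perceptron(samples, epochs=10, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - predicted          # 0 when correct
            w0 += lr * error * x0               # adjust toward the data
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(AND_DATA)
predict = lambda x0, x1: 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
print([predict(x0, x1) for (x0, x1), _ in AND_DATA])  # [0, 0, 0, 1]
```

Nothing here was told the rule for AND; the weights that encode it emerged from the examples, which is the sense in which such systems "learn".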

 

As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.

 

 


 

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures — from nytimes.com by Mitch Smith

Excerpt:

CHICAGO — When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.
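Compas itself is proprietary and its internals are secret, but the general class of tool described above, an actuarial risk score, can be sketched in a deliberately simplified form. Every feature, weight, and cutoff below is invented for illustration and has nothing to do with the real product; indeed, the fact that defendants cannot inspect the real weights is exactly what the backlash is about.

```python
# Hypothetical illustration of an actuarial risk score of the general
# kind discussed above. The features, weights, and cutoffs are invented;
# the real Compas model is a trade secret.

def risk_score(prior_offenses, age_at_first_offense, failed_appearances):
    """Weighted sum of survey/record features -> raw risk score."""
    score = 2 * prior_offenses + 3 * failed_appearances
    if age_at_first_offense < 21:
        score += 4
    return score

def risk_band(score):
    """Map a raw score to the kind of label a judge might see."""
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(risk_band(risk_score(prior_offenses=4, age_at_first_offense=19,
                           failed_appearances=1)))  # high
print(risk_band(risk_score(prior_offenses=0, age_at_first_offense=30,
                           failed_appearances=0)))  # low
```

Even in this toy form, the policy questions are visible: who chose the weights, what data justified them, and can the person being scored contest any of it?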

 

 

Google Tackles Challenge of How to Build an Honest Robot — from bloomberg.com

Excerpt:

Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.

The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.

But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.

 

 

Policy paper | Data Science Ethical Framework — from gov.uk
From: Cabinet Office, Government Digital Service and The Rt Hon Matt Hancock MP
First published: 19 May 2016
Part of: Government transparency and accountability

This framework is intended to give civil servants guidance on conducting data science projects, and the confidence to innovate with data.

Detail: Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking. We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with. The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog.

 

 

 

[What’s Next for AI — June 2016]

Excerpt (emphasis DSC):

We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.

The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws.

 

 

DARPA to Build “Virtual Data Scientist” Assistants Through A.I. — from inverse.com by William Hoffman
A.I. will make up for the lack of data scientists.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), which aims to help non-experts bridge what it calls the “data-science expertise gap” by allowing artificial assistants to help people with machine learning. DARPA calls it a “virtual data scientist” assistant.

This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.

 

 

Robot that chooses to inflict pain sparks debate about AI systems — from interestingengineering.com by Maverick Baker

Excerpt:

A robot built by roboticist Alexander Reben of the University of California, Berkeley has the ability to decide, using AI, whether or not to inflict pain.

The robot’s design is incredibly simple, serving only one purpose: to decide whether or not to inflict pain. Its creation, published in a scientific journal, aims to spark a debate on whether artificially intelligent robots can get out of hand if given the opportunity — reminiscent of the Terminator.

 

 

The NSA wants to spy on the Internet of Things. Everything from thermostats to pacemakers could be mined for intelligence data. — from engadget.com by Andrew Dalton

Excerpt:

We already know the National Security Agency is all up in our data, but the agency is reportedly looking into how it can gather even more foreign intelligence information from internet-connected devices ranging from thermostats to pacemakers. Speaking at a military technology conference in Washington D.C. on Friday, NSA deputy director Richard Ledgett said the agency is “looking at it sort of theoretically from a research point of view right now.” The Intercept reports Ledgett was quick to point out that there are easier ways to keep track of terrorists and spies than to tap into any medical devices they might have, but did confirm that it was an area of interest.

 

 

The latest tool in the NSA’s toolbox? The Internet of Things — from digitaltrends.com by Lulu Chang

Excerpt:

You may love being able to set your thermostat from your car miles before you reach your house, but be warned — the NSA probably loves it too. On Friday, the National Security Agency — you know, the federal organization known for wiretapping and listening in on U.S. citizens’ conversations — told an audience at Washington’s Newseum that it’s looking into using the Internet of Things and other connected devices to keep tabs on individuals.

 


Addendum on 6/29/16:

 

Addendums on 6/30/16

 

Addendum on 7/1/16

  • Humans are willing to trust chatbots with some of their most sensitive information — from businessinsider.com by Sam Shead
    Excerpt:
    A study has found that people are inclined to trust chatbots with sensitive information and that they are open to receiving advice from these AI services. The “Humanity in the Machine” report —published by media agency Mindshare UK on Thursday — urges brands to engage with customers through chatbots, which can be defined as artificial intelligence programmes that conduct conversations with humans through chat interfaces.

 

 

 

 
 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regard to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the state of their fellow mankind in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow mankind right out the window…with massive repercussions to follow. After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits. Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

[Slide from page 212 of Mary Meeker’s annual Internet Trends 2016 report]

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House hosts this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, they gained confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement it or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

Many industries will likely see robots replace human workers in the coming years, and with retail being one of the biggest industries in the world, it is no wonder that robots are slowly beginning to take over humans’ jobs. A robot named Tory will perform inventory tasks throughout stores and will also be able to direct customers to the items they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and the robot will drive to the exact location. It will also conduct inventory using RFID scanners, making the retail process much more efficient. Check out the video below from the German robotics company MetraLabs, which is behind the retail robot.

 

RobotsRetail-May2016
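
As a thought experiment on what the software behind such a robot might look like, here is a minimal, hypothetical sketch of the two jobs the article attributes to Tory: a product-to-location lookup behind the touch screen, and a simple tally of RFID tag reads. The products, locations, and data shapes are illustrative assumptions on my part, not the vendor’s actual system.

```python
# A minimal, hypothetical sketch of the two jobs the article attributes to the
# Tory robot: looking up where a product lives so the robot can drive there,
# and tallying stock from RFID tag reads. The products, locations, and data
# shapes are illustrative assumptions, not the vendor's actual software.

PRODUCT_LOCATIONS = {
    "blue jeans": "aisle 4, bay 2",
    "rain jacket": "aisle 7, bay 1",
}

def find_product(query):
    """Map a product typed on the touch screen to a shelf location."""
    return PRODUCT_LOCATIONS.get(query.strip().lower(), "not in this store")

def count_inventory(rfid_reads):
    """Tally how many tags of each product the RFID scanner has seen."""
    counts = {}
    for tag in rfid_reads:
        counts[tag] = counts.get(tag, 0) + 1
    return counts

print(find_product("Blue Jeans"))                 # → aisle 4, bay 2
print(count_inventory(["sku1", "sku2", "sku1"]))  # → {'sku1': 2, 'sku2': 1}
```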

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots. I can understand using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 

 

 

 

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — from wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. AAPL, +0.02%  supplier Foxconn Technology Co. 2354, +0.95% has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess, Go, and various other games. Indeed, its AI algorithms were developed for something far greater: to “solve intelligence” by creating general-purpose AI that can be used for a host of applications and, in essence, learn on its own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.
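
The “big red button” described in the kill-switch item above can be illustrated with a toy agent loop. Note that this sketch of my own only shows the mechanical halt; the DeepMind/FHI paper tackles the harder problem of agents that learn to resist interruption, which this code does not model.

```python
# A toy illustration of the "big red button" idea from the kill-switch item
# above: a flag a human operator can set that stops the agent's action loop.
# This only models the mechanical halt, not the learning-theoretic question
# of whether an agent comes to resist being interrupted.

class InterruptibleAgent:
    def __init__(self):
        self.interrupted = False  # the "big red button" state

    def press_big_red_button(self):
        """A human operator flips the interrupt flag."""
        self.interrupted = True

    def run(self, actions):
        """Execute actions in order, stopping as soon as the button is pressed."""
        executed = []
        for action in actions:
            if self.interrupted:
                break
            executed.append(action)
        return executed

agent = InterruptibleAgent()
print(agent.run(["move", "grasp", "lift"]))  # → ['move', 'grasp', 'lift']
agent.press_big_red_button()
print(agent.run(["move"]))                   # → [] (halted immediately)
```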

 

 

 

 

 

 

IBM Watson takes on cybercrime with new cloud-based cybersecurity technology — from techrepublic.com by Conner Forrest
Eight universities have begun a year-long initiative to train IBM Watson for work in cybersecurity. Will the Jeopardy champ soon police the internet?

IBM-Watson-Cbersecurity-May2016

Excerpt:

On Tuesday, IBM announced that Watson, its cognitive computing system (and former Jeopardy champion), will be spending the next year training for a new job—fighting cybercrime.

Watson for Cyber Security is a cloud-based version of IBM’s cognitive computing tools that will be the result of a one-year-long research project that is starting in the fall. Students and faculty from eight universities will participate in the research and train Watson to better understand how to detect potential threats.

 

 

Addendum on 5/12/16:

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish


From DSC:
This posting represents Part IV in a series of such postings that illustrate how quickly things are moving (Part I, Part II, Part III) and to ask:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved?

 

The biggest mystery in AI right now is the ethics board that Google set up after buying DeepMind — from businessinsider.com by Sam Shead

Excerpt (emphasis DSC):

Google’s artificial intelligence (AI) ethics board, established when Google acquired London AI startup DeepMind in 2014, remains one of the biggest mysteries in tech, with both Google and DeepMind refusing to reveal who sits on it.

Google set up the board at DeepMind’s request after the cofounders of the £400 million research-intensive AI lab said they would only agree to the acquisition if Google promised to look into the ethics of the technology it was buying into.

A number of AI experts told Business Insider that it’s important to have an open debate about the ethics of AI given the potential impact it’s going to have on all of our lives.

 

 

 

Algorithms may save us from information overload, but are they the curators we want? — from newstatesman.com by Barbara Speed
Instagram is joining the legions of social networks which use algorithms to dictate what we see, and when we see it.

Excerpt:

We’ve entered the age of the algorithm.

In a way, it was inevitable: thanks to the rise of smartphones and social media, we’re surrounded by vast, unfiltered streams of information, dripped to us via “feeds” on sites like Facebook and Twitter. As a result, we needed something to tame all that information, because an unfiltered stream is about as useful as no information at all. So we turned to a type of algorithm which could help separate the signal from the noise: basically, a set of steps which would calculate which information should be prioritised, and which should be hidden.

It’s impossible to say that algorithms are “good” or “bad”, just as humanity isn’t overridingly either. Algorithms are designed by humans, and therefore carry forward whatever prejudice or bias they’re programmed to perform.
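
The “set of steps” the excerpt describes can be made concrete with a toy scoring function: each post gets a weighted score, and the feed is sorted by it. The signals and weights below are illustrative assumptions of mine, not any real platform’s algorithm, but they show where the “prejudice or bias” lives: in which signals are chosen and how they are weighted.

```python
# A toy sketch of the kind of feed-ranking "algorithm" the article describes.
# The signals (likes, recency) and the weights are illustrative assumptions;
# a real platform's ranking system is far more complex and not public.

def score_post(post, weights):
    """Combine a post's engagement signals into one priority score."""
    return sum(w * post.get(signal, 0) for signal, w in weights.items())

def rank_feed(posts, weights):
    """Order posts from highest to lowest score: what gets seen first."""
    return sorted(posts, key=lambda p: score_post(p, weights), reverse=True)

posts = [
    {"id": "a", "likes": 2, "recency": 0.9},
    {"id": "b", "likes": 50, "recency": 0.1},
    {"id": "c", "likes": 10, "recency": 0.8},
]
# The chosen weights *are* the editorial judgment: tweak them and the
# "newsworthy" order changes.
weights = {"likes": 0.02, "recency": 1.0}

ranked = rank_feed(posts, weights)
print([p["id"] for p in ranked])  # → ['b', 'c', 'a']
```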

 

 

 

Internet of Things to be used as spy tool by governments: US intel chief  — from arstechnica.com by David Kravets
Clapper says spy agencies “might” use IoT for surveillance, location tracking.

Excerpt:

James Clapper, the US director of national intelligence, told lawmakers Tuesday that governments across the globe are likely to employ the Internet of Things as a spy tool, which will add to global instability already being caused by infectious disease, hunger, climate change, and artificial intelligence.

Clapper addressed two different committees on Tuesday—the Senate Armed Services Committee and the Senate Select Committee on Intelligence—and for the first time suggested that the Internet of Things could be weaponized by governments. He did not name any countries or agencies in regard to the IoT, but a recent Harvard study suggested US authorities could harvest the IoT for spying purposes.

 

 

 

“GoPro” Anthropology — paying THEM to learn from US? — from Jason Ohler’s Big Ideas Series

Excerpt (emphasis DSC):

What’s the big idea?
Consumer research and individual learning assessment techniques will merge, using wearable technology that observes and records life from the wearer’s point of view. The recording technology will be invisible to the consumer and student, as well as to the public. Video feeds will be beamed to analysts, real time. Recordings will be analyzed and extrapolated by powerful big data driven analytics. For both consumers and students, research will be conducted for the same purpose: to provide highly individualized approaches to learning and sales. Mass customized learning and consumerism will take a huge step forward. So will being embedded in the surveillance culture.

Why would we submit to this? Because we are paid to? Perhaps.  But we may well pay them to watch us, to tell us about ourselves, to help us and our children learn better and faster in a high stakes testing culture, and to help us make smarter choices as consumers. Call it “keeping up with data-enhanced neighbors.” Numerous issues of privacy and security will be weighed against personal opportunity, as learners, consumers and citizens.

 

 

 

10 promising technologies assisting the future of medicine and healthcare — by Bertalan Meskó, MD, PhD

Excerpt (emphasis DSC):

Technology will not solve the problems that healthcare faces globally today. And the human touch alone is not enough any more, therefore a new balance is needed between using disruptive innovations but still keeping the human interaction between patients and caregivers. Here are 10 technologies and trends that could enable this.

I see enormous technological changes heading our way. If they hit us unprepared, which we are now, they will wash away the medical system we know and leave it a purely technology–based service without personal interaction. Such a complicated system should not be washed away. Rather, it should be consciously and purposefully redesigned piece by piece. If we are unprepared for the future, then we lose this opportunity. I think we are still in time and it is still possible.

The advances of technology do not have to mean the end of the human touch. Instead, the beginning of a new era when both are crucial.

 

 

 

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1 — from rollingstone.com by Jeff Goodell
We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 2 — from rollingstone.com by Jeff Goodell
Self-driving cars, war outsourced to robots, surgery by autonomous machines – this is only the beginning

 

 

Laser weapons ready for use today, Lockheed executives say — from defensenews.com by Aaron Mehta
The time has finally come where those weapons are capable of being fielded, according to a trio of Lockheed Martin executives who work on the development of the company’s laser arsenal.

 

 

 

Delivery Robot – Fresh Pizza With DRU From Domino’s — from wtvox.com

From DSC:
How many jobs will be displaced here? How many college students — amongst many others — are going to be impacted here, as they try to make their way through (paying for) college? But don’t assume that it’s just lower level jobs that will be done away with…for example, see the next entry re: the legal profession.

 

 

New Report Predicts Over 100,000 Legal Jobs Will Be Lost To Automation — from futurism.com
An extensive new analysis by Deloitte estimates that over 100,000 jobs will be lost to technological automation within the next two decades. Increasing technological advances have helped replace menial roles in the office and do repetitive tasks.

Excerpt:

A new analysis from Deloitte Insight states that within the next two decades, an estimated 114,000 jobs in the legal sector will have a high chance of having been replaced with automated machines and algorithms. The report predicts “profound reforms” across the legal profession with the 114,000 jobs representing over 39% of jobs in the legal sector.

These radical changes are spurred by the rapid pace of technological progress and the need to offer clients more value for their money. Automation and the increasing rise of millennials in the legal workplace also alter the nature of talent needed by law firms in the future.

 

 

 

Raffaello D’Andrea: Meet the dazzling flying machines of the future — from ted.com

Description:

When you hear the word “drone,” you probably think of something either very useful or very scary. But could they have aesthetic value? Autonomous systems expert Raffaello D’Andrea develops flying machines, and his latest projects are pushing the boundaries of autonomous flight — from a flying wing that can hover and recover from disturbance to an eight-propeller craft that’s ambivalent to orientation … to a swarm of tiny coordinated micro-quadcopters.

 

 

 

Addendum on 4/4/16:

The Scarlett Johansson Bot is the robotic future of objectifying women — from wired.com by April Glaser (From DSC: I’m not advocating this objectification of women *at all*; rather, I post this addendum here because this is the kind of thing that we need to be aware of and talking about, or the future won’t be a dream…it will be a nightmare)

Excerpt:

The question, however, is one of precedent. If a man can’t earn the attention of the woman he longs for, is it plausible for that man to build a robot that looks exactly like his love interest instead? Is there any legal recourse to prevent someone from building a ScarJo bot, or Beyonce bot, or a bot of you? Sure, people make doll and wax replicas of famous people all the time. But the difference here is that Mark 1 moves, smiles, and winks.

 

 

How top liberal arts colleges prepare students for successful lives of leadership and service — from educationdive.com by John I. Williams, Jr.

Excerpt (emphasis DSC):

This year’s World Economic Forum (WEF) in Davos, Switzerland, discussed the top ten skills that will be needed for careers in 2020:

  1. Complex problem solving
  2. Critical thinking
  3. Creativity
  4. People management
  5. Coordinating with others
  6. Emotional intelligence
  7. Judgment and decision making
  8. Service orientation
  9. Negotiation
  10. Cognitive flexibility

The list is remarkable, both for what it includes and for what it doesn’t; and for the fact that it is as timeless as it is forward-looking. For our purposes, it serves as a useful gauge for the value of the education our students receive at highly-selective liberal arts colleges.

As I reflect upon the list, I realize graduates of top liberal arts colleges will smile as they read it, reminded that their education focuses on skills that will be valuable across a lifetime.

 

Going forward, college graduates may work for nine or more organizations over the course of their careers.

 

Yet, for all this techno-wizardry, the critical skills on WEF’s list for careers in 2020 resemble closely those that have defined the leaders who have emerged from top liberal arts colleges for decades. 

 

At the same time, top liberal arts colleges have always been committed to preparing students for more than just career success, including contributions to society more broadly. These colleges have always focused not only on the development of students’ intellect but on their character as well.

 

 

 

Interactive app brings 4th-century thinker to life — from campustechnology.com by Toni Fuhrman
At Villanova University, a student-developed app version of Augustine’s Confessions brings contemporary vitality and relevance to a classic 4th-century work.

Excerpt:

Augustine of Hippo, who lived from A.D. 354 to 430, might be surprised to find his Confessions in circulation today, including a number of e-book versions. Still widely read, popular in great books programs and studied in university classes, The Confessions of St. Augustine is autobiography and confession, spiritual quest and emotional journey.

One of the most recent electronic versions of the Confessions is an interactive app developed at Villanova University (PA), the nation’s only Augustinian Catholic University. Released three months ago on Augustine’s birthday (Nov. 13), the Confessions app is required for all freshmen as part of a “foundation” course. Available for both Apple and Android devices, the app includes the 13 books of the Confessions, authoritative commentaries, photo gallery, timeline, map and text-highlighted audio, as well as search, note-taking, annotation and bookmark options.

 

“What better way to reflect on and update this struggle than for today’s students to use technology to bring the text to life through visual, audio and analytical components?”

 

 

 

Confessions-Feb2016

 

 

From DSC:
Love the idea. Love the use of teams — including students — to produce this app!

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part III in a series of such postings that illustrate how quickly things are moving (Part I and Part II) and to ask:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved?

As I mentioned in Part I, I want to again refer to Gerd Leonhard’s work as it is relevant here, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

Looking at several items below, ask yourself…is this the kind of future that we want?  There are some things mentioned below that could likely prove to be very positive and helpful. However, there are also some very troubling advancements and developments as well.

The point here is that we had better start talking and discussing the pros and cons of each one of these areas — and many more I’m not addressing here — or our dreams will turn into our nightmares and we will have missed what Edward Cornish and the World Future Society are often trying to get at.

 


 

Google’s Artificial Intelligence System Masters Game of ‘Go’ — from abcnews.go.com by Alyssa Newcomb

Excerpt:

Google just mastered one of the biggest feats in artificial intelligence since IBM’s Deep Blue beat Garry Kasparov at chess in 1997.

The search giant’s AlphaGo computer program swept the European champion of Go, a complex game with trillions of possible moves, in a five-game series, according to Demis Hassabis, head of Google’s machine learning, who announced the feat in a blog post that coincided with an article in the journal Nature.

While computers can now compete at the grand master level in chess, teaching a machine to win at Go has presented a unique challenge since the game has trillions of possible moves.

Along these lines, also see:
Mastering the game of Go with deep neural networks and tree search — from deepmind.com

 

 

 

Harvard is trying to build artificial intelligence that is as fast as the human brain — from futurism.com
Harvard University and IARPA are working together to study how AI can work as efficiently and effectively as the human brain.

Excerpt:

Harvard University has been given $28M by the Intelligence Advanced Research Projects Activity (IARPA) to study why the human brain is significantly better at learning and retaining information than artificial intelligence (AI). The investment into this study could potentially help researchers develop AI that’s faster, smarter, and more like human brains.

 

 

Digital Ethics: The role of the CIO in balancing the risks and rewards of digital innovation — from mis-asia.com by Kevin Wo; with thanks to Gerd Leonhard for this posting

What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing. More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalized customer experiences, as well as unprecedented productivity, efficiency, and innovation. However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.

According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things. Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.  

As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.

 

 

New algorithm points the way towards regrowing limbs and organs — from sciencealert.com by David Nield

Excerpt:

An international team of researchers has developed a new algorithm that could one day help scientists reprogram cells to plug any kind of gap in the human body. The computer code model, called Mogrify, is designed to make the process of creating pluripotent stem cells much quicker and more straightforward than ever before.

A pluripotent stem cell is one that has the potential to become any type of specialised cell in the body: eye tissue, or a neural cell, or cells to build a heart. In theory, that would open up the potential for doctors to regrow limbs, make organs to order, and patch up the human body in all kinds of ways that aren’t currently possible.

 

 

 

The world’s first robot-run farm will harvest 30,000 heads of lettuce daily — from techinsider.io by Leanna Garfield

Excerpt (from DSC):

The Japanese lettuce production company Spread believes the farmers of the future will be robots.

So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.

Don’t expect a bunch of humanoid robots to roam the halls, however; the robots look more like conveyor belts with arms. They’ll plant seeds, water plants, and trim lettuce heads after harvest in the Kyoto, Japan farm.

 

 

 

Drone ambulances may just be the future of emergency medical vehicles — from interestingengineering.com by Gabrielle Westfield

Excerpt:

Drones are advancing every day. They are getting larger, faster, and more efficient to control. Meanwhile, the medical field keeps facing major losses because emergency response vehicles cannot reach their destinations fast enough. Understandably so, especially in larger cities where traffic cannot move swiftly. Red flashing lights atop or not, sometimes the roads simply cannot open up. It makes total sense that the future of ambulances would be paved in the open sky rather than on unpredictable roads.


 

 

 

Phone shop will be run entirely by Pepper robots — from telegraph.co.uk

Excerpt (emphasis DSC):

Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.

The four foot-tall robots will be on hand to answer questions, provide directions and guide customers in taking out phone contracts until early April. It’s currently unknown what brands of phone Pepper will be selling.

 

 

 

Wise.io introduces first intelligent auto reply functionality for customer support organizations — from consumerelectronicsnet.com
Powered by Machine Learning, Wise Auto Response Frees Up Agent Time, Boosting Productivity, Accelerating Response Time and Improving the Customer Experience

Excerpt:

BERKELEY, CA — (Marketwired) — 01/27/16 — Wise.io, which delivers machine learning applications to help enterprises provide a better customer experience, today announced the availability of Wise Auto Response, the first intelligent auto reply functionality for customer support organizations. Using machine learning to understand the intent of an incoming ticket and determine the best available response, Wise Auto Response automatically selects and applies the appropriate reply to address the customer issue without ever involving an agent. By helping customer service teams answer common questions faster, Wise Auto Response removes a high percentage of tickets from the queue, freeing up agents’ time to focus on more complex tickets and drive higher levels of customer satisfaction.
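
The flow described in the press release can be sketched in a few lines. Wise.io’s product infers a ticket’s intent with machine learning; the toy version below substitutes simple keyword matching purely to illustrate the routing logic (classify, auto-reply, or escalate to a human agent). The intents, keywords, and canned replies are invented for illustration and are not Wise.io’s actual system.

```python
# A highly simplified sketch of the auto-reply flow described above. The real
# product uses machine learning to understand ticket intent; keyword matching
# stands in here only to show the routing: recognized intents get a canned
# reply, everything else goes to a human agent. All data below is invented.

CANNED_REPLIES = {
    "password_reset": "You can reset your password from the account settings page.",
    "shipping_status": "You can track your order via the link in your confirmation email.",
}

INTENT_KEYWORDS = {
    "password_reset": ["password", "log in", "login"],
    "shipping_status": ["shipping", "delivery", "track"],
}

def classify_intent(ticket_text):
    """Return the first intent whose keywords appear in the ticket, else None."""
    text = ticket_text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None

def auto_respond(ticket_text):
    """Send a canned reply when the intent is recognized; otherwise escalate."""
    intent = classify_intent(ticket_text)
    if intent in CANNED_REPLIES:
        return ("auto", CANNED_REPLIES[intent])
    return ("agent", None)  # no confident match: a human takes the ticket

print(auto_respond("I forgot my password"))     # handled automatically
print(auto_respond("My order arrived broken"))  # escalated to an agent
```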

 

 

Video game for treating ADHD looks to 2017 debut — from educationnews.org

Excerpt:

Akili Interactive Labs out of Boston has created a video game that they hope will help treat children diagnosed with attention-deficit hyperactivity disorder by teaching them to focus in a distracting environment.

The game, Project: EVO, is meant to be prescribed to children with ADHD as a medical treatment. And after the company raised $30.5 million in funding, investors appear to believe in it. The company plans to use the funding to run clinical trials and to seek approval from the US Food and Drug Administration so that it can launch the game in late 2017.

Players will enter a virtual world filled with colorful distractions and be required to focus on specific tasks such as choosing certain objects while avoiding others.  The game looks to train the portion of the brain designed to manage and prioritize all the information taken in at one time.

 

Addendum on 1/29/16:

 

 

 

 

From DSC:
Below are some further items that discuss the need for some frameworks, policies, institutes, research, etc. that deal with a variety of game-changing technologies that are quickly coming down the pike (if they aren’t already upon us). We need such things to help us create a positive future.

Also see Part I of this thread of thinking, entitled “The need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!” So many other items have come out since that posting that I felt I needed to add another one here.

What kind of future do we want? How are we going to insure that we get there?

As the saying goes…”Just because we can do something, doesn’t mean we should.” Or another saying comes to my mind…”What could possibly go wrong with this? It’s a done deal.”

While some of the items below should have very positive impacts on society, I do wonder how long it will take the hackers — the ones who are bent on wreaking havoc — to mess up some of these types of applications…with potentially deadly consequences? Security-related concerns must be dealt with here.


 

5 amazing and alarming things that may be done with your DNA — from washingtonpost.com by Matt McFarland

Excerpt (emphasis DSC):

Venter is leading efforts to use digital technology to analyze humans in ways we never have before, and the results will have huge implications for society. The latest findings he described are currently being written up for scientific publications. Venter didn’t want to usurp the publications, so he wouldn’t dive into extensive detail of how his team has made these breakthroughs. But what he did share offers an exciting and concerning overview of what lies ahead for humanity. There are social, legal and ethical implications to start considering. Here are five examples of how digitizing DNA will change the human experience:

 

 

These are the decisions the Pentagon wants to leave to robots — from defenseone.com by Patrick Tucker
The U.S. military believes its battlefield edge will increasingly depend on automation and artificial intelligence.

Excerpt:

Conducting cyber defensive operations, electronic warfare, and over-the-horizon targeting. “You cannot have a human operator operating at human speed fighting back at determined cyber tech,” Work said. “You are going to need have a learning machine that does that.” He did not say whether the Pentagon is pursuing the autonomous or automatic deployment of offensive cyber capabilities, a controversial idea to be sure. He also highlighted a number of ways that artificial intelligence could help identify new waveforms to improve electronic warfare.

 

 

Britain should lead way on genetically engineered babies, says Chief Scientific Adviser — from telegraph.co.uk by Sarah Knapton
Sir Mark Walport, who advises the government on scientific matters, said it could be acceptable to genetically edit human embryos

Excerpt:

Last week more than 150 scientists and campaigners called for a worldwide ban on the practice, claiming it could ‘irrevocably alter the human species’ and lead to a world where inequality and discrimination were ‘inscribed onto the human genome.’

But at a conference in London [on 12/8/15], Sir Mark Walport, who advises the government on scientific matters, said he believed there were ‘circumstances’ in which the genetic editing of human embryos could be ‘acceptable’.

 

 

Cyborg Future: Engineers Build a Chip That Is Part Biological and Part Synthetic — from futurism.com

Excerpt:

Engineers have succeeded in combining an integrated chip with an artificial lipid bilayer membrane containing ATP-powered ion pumps, paving the way for more such artificial systems that combine the biological with the mechanical down the road.

 

 

Robots expected to run half of Japan by 2035 — from engadget.com by Andrew Tarantola
Something-something ‘robot overlords’.

Excerpt:

Data analysts Nomura Research Institute (NRI), led by researcher Yumi Wakao, figure that within the next 20 years, nearly half of all jobs in Japan could be accomplished by robots. Working with Professor Michael Osborne from Oxford University, who had previously investigated the same matter in both the US and UK, the NRI team examined more than 600 jobs and found that “up to 49 percent of jobs could be replaced by computer systems,” according to Wakao.

 

 

 

Cambridge University is opening a £10 million centre to study the impact of AI on humanity — from businessinsider.com by Sam Shead

Excerpt:

Cambridge University announced on [12/3/15] that it is opening a new £10 million research centre to study the impact of artificial intelligence on humanity.

The 806-year-old university said the centre, being funded with a grant from non-profit foundation The Leverhulme Trust, will explore the opportunities and challenges facing humanity as a result of further developments in artificial intelligence.

 

Cambridge-Center-Dec2015

 

 

Tech leaders launch nonprofit to save the world from killer robots — from csmonitor.com by Jessica Mendoza
Elon Musk, Sam Altman, and other tech titans have invested $1 billion in a nonprofit that would help direct artificial intelligence technology toward positive human impact. 

 

 

 

 

2016 will be a pivotal year for social robots — from therobotreport.com by Frank Tobe
1,000 Peppers are selling each month from a big-dollar venture between SoftBank, Alibaba and Foxconn; Jibo just raised another $16 million as it prepares to deliver 7,500+ units in Mar/Apr of 2016; and Buddy, Rokid, Sota and many others are poised to deliver similar forms of social robots.

Excerpt:

These new robots, and the proliferation of mobile robot butlers, guides and kiosks, promise to recognize your voice and face and help you plan your calendar, provide reminders, take pictures of special moments, text, call and videoconference, order fast food, keep watch on your house or office, read recipes, play games, read emotions and interact accordingly, and the list goes on. They are attempting to be analogous to a sharp administrative assistant that knows your schedule, contacts and interests and engages with you about them, helping you stay informed, connected and active.

 

 

IBM opens its artificial mind to the world — from fastcompany.com by Sean Captain
IBM is letting companies plug into its Watson artificial intelligence engine to make sense of speech, text, photos, videos, and sensor data.

Excerpt:

Artificial intelligence is the big, oft-misconstrued catchphrase of the day, making headlines recently with the launch of the new OpenAI organization, backed by Elon Musk, Peter Thiel, and other tech luminaries. AI is neither a synonym for killer robots nor a technology of the future, but one that is already finding new signals in the vast noise of collected data, ranging from weather reports to social media chatter to temperature sensor readings. Today IBM has opened up new access to its AI system, called Watson, with a set of application programming interfaces (APIs) that allow other companies and organizations to feed their data into IBM’s big brain for analysis.

 

 

GE wants to give industrial machines their own social network with Predix Cloud — from fastcompany.com by Sean Captain
GE is selling a new service that promises to predict when a machine will break down…so technicians can preemptively fix it.

 

 

Foresight 2020: The future is filled with 50 billion connected devices — from ibmbigdatahub.com by Erin Monday

Excerpt:

By 2020, there will be over 50 billion connected devices generating continuous data.

This figure is staggering, but is it really a surprise? The world has come a long way from 1992, when the number of computers was roughly equivalent to the population of San Jose. Today, in 2015, there are more connected devices out there than there are human beings. Ubiquitous connectivity is very nearly a reality. Every day, we get a little closer to a time where businesses, governments and consumers are connected by a fluid stream of data and analytics. But what’s driving all this growth?

 

 

Designing robots that learn as effortlessly as babies — from singularityhub.com by Shelly Fan

Excerpt:

A wide-eyed, rosy-cheeked, babbling human baby hardly looks like the ultimate learning machine.

But under the hood, an 18-month-old can outlearn any state-of-the-art artificial intelligence algorithm.

Their secret sauce?

They watch; they imitate; and they extrapolate.

Artificial intelligence researchers have begun to take notice. This week, two separate teams dipped their toes into cognitive psychology and developed new algorithms that teach machines to learn like babies. One instructs computers to imitate; the other, to extrapolate.

 

 

Researchers have found a new way to get machines to learn faster — from fortune.com by  Hilary Brueck

Excerpt:

An international team of data scientists is proud to announce the very latest in machine learning: they’ve built a program that learns… programs. That may not sound impressive at first blush, but making a machine that can learn based on a single example is something that’s been extremely hard to do in the world of artificial intelligence. Machines don’t learn like humans—not as fast, and not as well. And even with this research, they still can’t.
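To make “learning from a single example” concrete, here is a toy sketch (my own illustration, far simpler than the program-learning research described above): it classifies a new item by comparing it to a single stored example per category.

```python
import math

# One stored example ("exemplar") per category -- the single example
# each category is learned from. Feature vectors are made up for illustration.
exemplars = {
    "circle": (1.0, 0.1),
    "square": (0.1, 1.0),
}

def classify(features):
    # Nearest-exemplar rule: pick the category whose lone stored
    # example is closest to the new item.
    return min(exemplars, key=lambda c: math.dist(exemplars[c], features))

print(classify((0.9, 0.2)))  # prints "circle"
```

A real system of the kind the researchers describe learns rich structure from its one example rather than raw distances, but the contrast with conventional machine learning — which typically needs thousands of examples per category — is the same.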

 

 

Team showcase how good Watson is at learning — from adigaskell.org

Excerpt:

Artificial intelligence has undoubtedly come a long way in the last few years, but there is still much to be done to make it intuitive to use.  IBM’s Watson has been one of the most well-known exponents during this time, but despite its initial success, there are issues to overcome with it.

A team led by Georgia Tech is attempting to do just that.  They’re looking to train Watson to get better at returning answers to specific queries.

 

 

Why The Internet of Things will drive a Knowledge Revolution. — from linkedin.com by David Evans

Excerpt:

As these machines inevitably connect to the Internet, they will ultimately connect to each other so they can share and collaborate on their own findings. In fact, in 2014 machines got their own “World Wide Web,” called RoboEarth, in which to share knowledge with one another. …
The implications of all of this are at minimum twofold:

  • The way we generate knowledge is going to change dramatically in the coming years.
  • Knowledge is about to increase at an exponential rate.

What we choose to do with this newfound knowledge is of course up to us. We are about to face some significant challenges at scales we have yet to experience.

 

 

Drone squad to be launched by Tokyo police — from bbc.com

Excerpt:

A drone squad, designed to locate and – if necessary – capture nuisance drones flown by members of the public, is to be launched by police in Tokyo.

 

 

An advance in artificial intelligence rivals human abilities — from todayonline.com by John Markoff

Excerpt:

NEW YORK — Computer researchers reported artificial-intelligence advances [on Dec 10] that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

 

 

Somewhat related:

Novo Nordisk, IBM Watson Health to create ‘virtual doctor’ — from wsj.com by Denise Roland
Software could dispense treatment advice for diabetes patients

Excerpt:

Novo Nordisk A/S is teaming up with IBM Watson Health, a division of International Business Machines Corp., to create a “virtual doctor” for diabetes patients that could dispense treatment advice such as insulin dosage.

The Danish diabetes specialist hopes to use IBM’s supercomputer platform, Watson, to analyze health data from diabetes patients to help them manage their disease.

 

 

Why Google’s new quantum computer could launch an artificial intelligence arms race — from washingtonpost.com

 

 

 

8 industries robots will completely transform by 2025 — from techinsider.io

 

 

 

Addendums on 12/17/15:

Russia and China are building highly autonomous killer robots — from businessinsider.com.au by Danielle Muoio

Excerpt:

Russia and China are creating highly autonomous weapons, more commonly referred to as killer robots, and it’s putting pressure on the Pentagon to keep up, according to US Deputy Secretary of Defense Robert Work. During a national-security forum on Monday, Work said that China and Russia are heavily investing in a roboticized army, according to a report from Defense One.

Your Algorithmic Self Meets Super-Intelligent AI — from techcrunch.com by Jarno M. Koponen

Excerpt:

At the same time, your data and personalized experiences are used to develop and train the machine learning systems that are powering the Siris, Watsons, Ms and Cortanas. Be it a speech recognition solution or a recommendation algorithm, your actions and personal data affect how these sophisticated systems learn more about you and the world around you.

The less explicit fact is that your diverse interactions — your likes, photos, locations, tags, videos, comments, route selections, recommendations and ratings — feed learning systems that could someday transform into superintelligent AIs with unpredictable consequences.

As of today, you can’t directly affect how your personal data is used in these systems.

 

Addendum on 12/20/15:

 

Addendum on 12/21/15:

  • Facewatch ‘thief recognition’ CCTV on trial in UK stores — from bbc.com
    Excerpts (emphasis DSC):
    Face-recognition camera systems should be used by police, he tells me. “The technology’s here, and we need to think about what is a proportionate response that respects people’s privacy,” he says.

    “The public need to ask themselves: do they want six million cameras painted red at head height looking at them?”

 

Addendum on 1/13/16:

 

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want.  Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen because they can happen. (That is, just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


DigitalEthics-GerdLeonhard-Oct2015

 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 

RobotRevolution-Nov2015

 

 

 

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.

 

 

 

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles, can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 

GartnerPredicts-Oct2015

 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 

 

 

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 

Related items:

 

Addendums:

 

robots-saying-no

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

From DSC:
Some very frustrated reflections after reading:

Excerpt:

Right now, boys are falling out of the kindergarten through 12th grade educational pipeline in ways that we can hardly imagine.

 

This situation continues to remind me of the oil spill in the Gulf (2010), where valuable resources spilled into the water untapped — later causing some serious issues:

From DSC:
What are we doing?!!! We’ve watched the dropout rates grow — it doesn’t seem we’ve changed our strategies nearly enough! But the point that gets lost in this is that we will all pay for these broken strategies — and for generations to come!  It’s time to seriously move towards identifying and implementing some new goals.

What should the new goals look like? Here’s my take on at least a portion of a new vision for K-12 — and collegiate — education:

  • Help students identify their God-given gifts and then help them build up their own learning ecosystems to support the development of those gifts. Hook them up with resources that will develop students’ abilities and passions.
  • Part of their learning ecosystems could be to help them enter into — and build up — communities of practice around subjects that they enjoy learning about. Those communities could be local, national, or international. (Also consider the creation of personalized learning agents, as these become more prevalent/powerful.)
  • Do everything we can to make learning enjoyable and foster a love of learning — as we need lifelong learners these days.
    (It doesn’t help society much if students are dropping out of K-12, or if people struggle to make it through graduation — only to then harbor ill feelings towards learning/education in general for years to come.  Let’s greatly reduce the presence/usage of standardized tests — they’re killing us!  They don’t seem to be producing long-term positive results. I congratulate the recent group of teachers who refused to give their students such tests, and I greatly admire them for getting rid of a losing strategy.)

  • Give students more choice, more control over what their learning looks like; let them take their own paths as much as possible. (Providing different ways to meet the same learning objective is one approach…but perhaps we need to think bigger than that. The concern/fear arises: how will we manage this? That’s where a good share of our thinking should be focused: on generating creative answers to that question.)
  • Foster curiosity and wonder
  • Provide cross-disciplinary assignments/opportunities
  • Let students work on/try to resolve real issues in their communities
  • Build up students’ appreciation of faith, hope, love, empathy, and a desire to make the world a better place. Provide ways that they can contribute.
  • Let students experiment more — encourage failure.

 

© 2024 | Daniel Christian