An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as the areas involved in making policies and laws, or in deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
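
The study appears to be Abadi and Andersen’s 2016 paper “Learning to Protect Communications with Adversarial Neural Cryptography.” As a rough, minimal sketch of that setup (not the paper’s actual convolutional architecture, and with a simplified loss), the PyTorch code below trains Alice to turn a plaintext and a shared key into a ciphertext, Bob to reconstruct the plaintext using the key, and Eve to attempt the same from the ciphertext alone; Alice and Bob win when Bob succeeds and Eve is pushed back toward random guessing.

```python
# A minimal sketch, not the paper's architecture: three small networks
# play the adversarial game described above. Requires PyTorch.
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per shared key

def mlp(in_dim, out_dim):
    # Small MLP standing in for the paper's "mix & transform" conv nets.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice, bob, eve = mlp(2 * N, N), mlp(2 * N, N), mlp(N, N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e = torch.optim.Adam(eve.parameters())
err = nn.L1Loss()  # mean per-bit error; ~1.0 corresponds to random guessing

for step in range(3000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1  # plaintexts in {-1,+1}
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1  # shared keys

    # Eve trains to recover the plaintext from the ciphertext alone.
    c = alice(torch.cat([p, k], 1)).detach()
    opt_e.zero_grad()
    err(eve(c), p).backward()
    opt_e.step()

    # Alice and Bob train so that Bob recovers p while Eve is pushed
    # back toward random guessing (a simplified version of the loss).
    c = alice(torch.cat([p, k], 1))
    bob_err = err(bob(torch.cat([c, k], 1)), p)
    eve_err = err(eve(c), p)
    opt_ab.zero_grad()
    (bob_err + (1.0 - eve_err) ** 2).backward()
    opt_ab.step()
    if step % 1000 == 0:
        print(f"step {step}: bob_err={bob_err.item():.3f} eve_err={eve_err.item():.3f}")
```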

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
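
The defense the article implies is mundane: get rid of factory-default logins. Below is a minimal sketch of that kind of self-audit; the credential pairs and the device inventory are invented for illustration (Mirai’s real dictionary reportedly held a few dozen default pairs).

```python
# A minimal sketch of a default-credential audit; the credential pairs
# and the device inventory below are illustrative, not real data.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"), ("root", "root"), ("admin", "1234"),
}

def audit(devices):
    """Return the devices still accepting a known factory-default login."""
    return [d for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

inventory = [  # hypothetical inventory exported from a home router
    {"host": "camera-01", "user": "admin", "password": "admin"},
    {"host": "thermostat", "user": "home", "password": "x9!kQ2v"},
]

for device in audit(inventory):
    print(f"{device['host']}: change its factory-default password")
```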

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with a framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of the gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to much more of the world than its scientific institutions. This new technique is now being used in public health projects to undermine the ability of certain mosquitoes to transmit diseases such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment to alert the intelligence community to the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that it has the highest proportion of elderly people of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:




 

 

Preparing for the future of Artificial Intelligence
Executive Office of the President
National Science & Technology Council
Committee on Technology
October 2016

preparingfor-futureai-usgov-oct2016

Excerpt:

As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.

The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government. The report was reviewed by the NSTC Committee on Technology, which concurred with its contents. The report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five public workshops co-hosted with universities and other associations that are referenced in this report.

In the coming years, AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.

 

 

 


 

 

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

 

 

 

From DSC:
The pace of technological development is moving extremely fast; the ethical, legal, and moral questions are trailing behind it (as is normally the case). But this exponential pace continues to bring some questions, concerns, and thoughts to my mind. For example:

  • What kind of future do we want? 
  • Just because we can, should we?
  • Who is going to be able to weigh in on the future direction of some of these developments?
  • If we follow the trajectories of some of these pathways, where will these trajectories take us? For example, if many people are out of work, how are they going to purchase the products and services that the robots are building?

These and other questions arise when you look at the articles below.

This is the 8th part of a series of postings regarding this matter.
The other postings are in the Ethics section.


 

Robot companions are coming into our homes – so how human should they be? — from theconversation.com

Excerpt:

What would your ideal robot be like? One that can change nappies and tell bedtime stories to your child? Perhaps you’d prefer a butler that can polish silver and mix the perfect cocktail? Or maybe you’d prefer a companion that just happened to be a robot? Certainly, some see robots as a hypothetical future replacement for human carers. But a question roboticists are asking is: how human should these future robot companions be?

A companion robot is one that is capable of providing useful assistance in a socially acceptable manner. This means that a robot companion’s first goal is to assist humans. Robot companions are mainly developed to help people with special needs such as older people, autistic children or the disabled. They usually aim to help in a specific environment: a house, a care home or a hospital.

 

 

 

The Next President Will Decide the Fate of Killer Robots—and the Future of War – from wired.com by Heather Roff and P.W. Singer

Excerpt:

The next president will have a range of issues on their plate, from how to deal with growing tensions with China and Russia, to an ongoing war against ISIS. But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka “killer robots.” The new president will literally have no choice: it is not just that the technology is rapidly advancing, but that there is a ticking time bomb buried in US policy on the issue.

 

 

Your new manager will be an algorithm — from stevebrownfuturist.com

Excerpt:

It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.

Algorithms for task management
Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of “temporal Tetris” with the parcels and packs them to optimize for space and for the planned delivery route: packages that are delivered first are towards the front, packages for the end of the route are placed at the back.
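
As a toy illustration of that loading rule (not UPS’s actual algorithm), the sketch below orders packages so that parcels for the final stop are loaded first and end up deepest in the truck:

```python
# A toy version of the loading rule described above (illustrative only,
# not UPS's actual algorithm).
def load_order(packages, route):
    """packages maps package id -> stop; route lists stops in delivery order."""
    stop_rank = {stop: i for i, stop in enumerate(route)}
    # Load last-delivered packages first so they sit deepest in the truck.
    return sorted(packages, key=lambda p: stop_rank[packages[p]], reverse=True)

route = ["Oak St", "Main St", "Elm Ave"]
packages = {"pkg1": "Main St", "pkg2": "Oak St", "pkg3": "Elm Ave"}
print(load_order(packages, route))  # ['pkg3', 'pkg1', 'pkg2']
```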

 

 

Beware of biases in machine learning: One CTO explains why it happens — from enterprisersproject.com by Minda Zetlin

Excerpt:

The Enterprisers Project (TEP): Machines are genderless, have no race, and are in and of themselves free of bias. How does bias creep in?

Sharp: To understand how bias creeps in you first need to understand the difference between programming in the traditional sense and machine learning. With programming in the traditional sense, a programmer analyzes a problem and comes up with an algorithm to solve it (basically an explicit sequence of rules and steps). The algorithm is then coded up, and the computer executes the programmer’s defined rules accordingly.

With machine learning, it’s a bit different. Programmers don’t solve a problem directly by analyzing it and coming up with their rules. Instead, they just give the computer access to an extensive real-world dataset related to the problem they want to solve. The computer then figures out how best to solve the problem by itself.
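
The difference is easy to see in code. Here is a toy spam filter written both ways; the rule, the training sentences, and the labels are invented for illustration:

```python
# Toy spam filter, written both ways; data and rule are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def rule_based(message):
    # Traditional programming: a human writes the decision rule explicitly.
    return "spam" if "free money" in message.lower() else "ham"

# Machine learning: a human supplies labeled examples instead, and the
# model derives its own decision rule from the data.
texts = ["free money now", "win free cash today",
         "lunch at noon?", "meeting moved to 3pm"]
labels = ["spam", "spam", "ham", "ham"]
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

msg = "claim your free cash"
print(rule_based(msg))                                # 'ham': the rule misses it
print(model.predict(vectorizer.transform([msg]))[0])  # 'spam': the learned rule generalizes
```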

 

 

Technology vs. Humanity – The coming clash between man and machine — from futuristgerd.com by Gerd Leonhard

Excerpt (emphasis DSC):

In his latest book ‘Technology vs. Humanity’, futurist Gerd Leonhard once again breaks new ground by bringing together mankind’s urge to upgrade and automate everything (including human biology itself) with our timeless quest for freedom and happiness.

Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth.

Digital transformation has migrated from the mainframe to the desktop to the laptop to the smartphone, wearables and brain-computer interfaces. Before it moves to the implant and the ingestible insert, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange.

 

 

Ethics: Taming our technologies
The Ethics of Invention: Technology and the Human Future — from nature.com by Sheila Jasanoff

Excerpt:

Technological innovation in fields from genetic engineering to cyberwarfare is accelerating at a breakneck pace, but ethical deliberation over its implications has lagged behind. Thus argues Sheila Jasanoff — who works at the nexus of science, law and policy — in The Ethics of Invention, her fresh investigation. Not only are our deliberative institutions inadequate to the task of oversight, she contends, but we fail to recognize the full ethical dimensions of technology policy. She prescribes a fundamental reboot.

Ethics in innovation has been given short shrift, Jasanoff says, owing in part to technological determinism, a semi-conscious belief that innovation is intrinsically good and that the frontiers of technology should be pushed as far as possible. This view has been bolstered by the fact that many technological advances have yielded financial profit in the short term, even if, like the ozone-depleting chlorofluorocarbons once used as refrigerants, they have proved problematic or ruinous in the longer term.

 

 

 

Robotics is coming faster than you think — from forbes.com by Kevin O’Marah

Excerpt:

This week, The Wall Street Journal featured a well-researched article on China’s push to shift its factory culture away from labor and toward robots. Reasons include a rise in labor costs, the flattening and impending decrease in worker population and falling costs of advanced robotics technology.

Left unsaid was whether this is part of a wider acceleration in the digital takeover of work worldwide. It is.

 

 

Adidas will open an automated, robot-staffed factory next year — from businessinsider.com

 

 

 

Beyond Siri, the next-generation AI assistants are smarter specialists — from fastcompany.com by Jared Newman
SRI wants to produce chatbots with deep knowledge of specific topics like banking and auto repair.

 

 

 

Machine learning
Of prediction and policy — from economist.com
Governments have much to gain from applying algorithms to public policy, but controversies loom

Excerpt:

FOR frazzled teachers struggling to decide what to watch on an evening off (DSC insert: a rare event indeed), help is at hand. An online streaming service’s software predicts what they might enjoy, based on the past choices of similar people. When those same teachers try to work out which children are most at risk of dropping out of school, they get no such aid. But, as Sendhil Mullainathan of Harvard University notes, these types of problem are alike. They require predictions based, implicitly or explicitly, on lots of data. Many areas of policy, he suggests, could do with a dose of machine learning.

Machine-learning systems excel at prediction. A common approach is to train a system by showing it a vast quantity of data on, say, students and their achievements. The software chews through the examples and learns which characteristics are most helpful in predicting whether a student will drop out. Once trained, it can study a different group and accurately pick those at risk. By helping to allocate scarce public funds more accurately, machine learning could save governments significant sums. According to Stephen Goldsmith, a professor at Harvard and a former mayor of Indianapolis, it could also transform almost every sector of public policy.
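
A minimal sketch of that kind of prediction task follows; the feature names and numbers are invented, and a real system would train on thousands of records rather than eight:

```python
# Invented student records; a real system would train on far more data.
from sklearn.linear_model import LogisticRegression

# Feature columns: [absences, GPA, suspensions]; label 1 = dropped out.
X_train = [[30, 1.8, 2], [2, 3.6, 0], [22, 2.1, 1], [5, 3.2, 0],
           [40, 1.5, 3], [1, 3.9, 0], [18, 2.4, 1], [8, 3.0, 0]]
y_train = [1, 0, 1, 0, 1, 0, 1, 0]
model = LogisticRegression().fit(X_train, y_train)

# Rank a new cohort by predicted risk so scarce support funds can be
# targeted at the students most likely to drop out.
cohort = {"student_1": [25, 2.0, 1], "student_2": [3, 3.5, 0],
          "student_3": [12, 2.8, 0]}
risk = {s: model.predict_proba([f])[0][1] for s, f in cohort.items()}
for student, p in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{student}: {p:.0%} predicted dropout risk")
```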

But the case for code is not always clear-cut. Many American judges are given “risk assessments”, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites. (Northpointe, the algorithm provider, disputes the finding.)
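
ProPublica’s finding boils down to a group-level check that anyone can run on such a system’s outputs: compare false-positive rates, i.e. people labelled high-risk who did not go on to reoffend, across groups. A sketch with invented records:

```python
# Invented records: (group, predicted_high_risk, actually_reoffended).
from collections import defaultdict

records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("A", True, False), ("B", False, False), ("B", True, True),
    ("B", False, False), ("B", False, True),
]

false_pos = defaultdict(int)       # labelled high-risk, did not reoffend
non_reoffenders = defaultdict(int)
for group, predicted_high_risk, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted_high_risk:
            false_pos[group] += 1

for group in sorted(non_reoffenders):
    rate = false_pos[group] / non_reoffenders[group]
    print(f"group {group}: false-positive rate {rate:.0%}")
# A large gap between the groups is the disparity ProPublica reported.
```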

 

 

‘Software is eating the world’: How robots, drones and artificial intelligence will change everything — from business.financialpost.com

 

 

Thermostats can now get infected with ransomware, because 2016 — from thenextweb.com by Matthew Hughes

 

 

Who will own the robots? — from technologyreview.com by David Rotman
We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?

 

 

Police Drones Multiply Across the Globe — from dronelife.com by Jason Reagan

 

 

 

LinkedIn lawsuit may signal a losing battle against ‘botnets’, say experts — from bizjournals.com by Annie Gaus

 

 

 

China’s Factories Count on Robots as Workforce Shrinks — from wsj.com by Robbie Whelan and Esther Fung
Rising wages, cultural changes push automation drive; demand for 150,000 robots projected for 2018

 

 

 

viv-ai-june2016

 

viv-ai-2-june2016

 

 

Researchers Are Growing Living Biohybrid Robots That Move Like Animals — from slate.com by Victoria Webster

 

 

 

Addendums on 9/14/16:

 

 

FutureProofYourself-MS-FutureLab-Aug2016

 

Future proof yourselves — from Microsoft & The Future Laboratory

Excerpt (emphasis DSC):

Executive Summary
Explore the world of work in 2025 in a revealing evidence-based report by future consultants The Future Laboratory and Microsoft, which identifies and investigates ten exciting, inspiring and astounding jobs for the graduates of tomorrow – jobs that don’t exist yet.

Introduction
Tomorrow’s university graduates will be taking a journey into the professional unknown guided by a single, mind-blowing statistic: 65% of today’s students will be doing jobs that don’t even exist yet.

Technological change, economic turbulence and societal transformation are disrupting old career certainties and it is increasingly difficult to judge which degrees and qualifications will be a passport to a well-paid and fulfilling job in the decades ahead.

A new wave of automation, with the advent of true artificial intelligence, robots and driverless cars, threatens the future of traditional jobs, from truck drivers to lawyers and bankers.

But, by 2025, this same technological revolution will open up inspiring and exciting new career opportunities in sectors that are only in their infancy today.

The trick for graduates is to start to develop the necessary skills today in order to ensure they future-proof their careers.

This report by future consultants The Future Laboratory attempts to show them how to do just that in a research collaboration with Microsoft, whose Surface technology deploys the precision and versatility of pen and touch to power creative industries ranging from graphic design and photography to architecture and engineering.

In this study, we use extensive desk research and in-depth interviews with technologists, academics, industry commentators and analysts to unveil 10 new creative job categories that will be recruiting tomorrow’s university students.

These future jobs demonstrate a whole new world of potential applications for the technology of today, as we design astonishing virtual habitats and cure deadly diseases from the comfort of our own sofas. It is a world that will need a new approach to training and career planning.

Welcome to tomorrow’s jobs…

 

 

65% of today’s students will be doing jobs that don’t even exist yet.

 

 

One of the jobs mentioned was the Ethical Technology Advocate — check out this video clip:

Ethical-Technology-Advocate-MS-Aug2016-

 

“Over the next decade, the long-awaited era of robots will dawn and become part of everyday life. It will be important to set out the moral and ethical rules under which they operate…”

 

 

 

 

IBM made a ‘crash course’ for the White House, and it’ll teach you all the AI basics — from futurism.com by Ramon Perez

Summary:

With the current AI revolution comes a flock of skeptics. Alarmed by what AI could become in the near future, the White House released a Notice of Request for Information (RFI) on it. In response, IBM has created what amounts to an AI 101, giving a good sense of the current state, future, and risks of AI.

 

 

Also see:

 

FedGovt-Request4Info-June2016

 

 

 

Gartner reveals the top 3 emerging technologies from 2016 — from information-age.com by Nicholas Ismail
Technology is advancing at such a rapid rate that businesses are almost being forced to embrace emerging technologies in order to stay competitive

Excerpt:

Emerging technologies are fast becoming the tools with the highest priority for organisations facing rapidly accelerating digital business innovation.

Gartner’s Hype Cycle for Emerging Technologies, 2016 has selected three distinct technology trends – out of 2,000 – that organisations should track and begin to implement in order to stay competitive.

Their selection was based on what technologies will have the most impact and lead to the most competitive advantage, while establishing when these big technologies are going to mature (early stage or saturating).

Gartner’s research director Mike Walker said the hype cycle specifically focuses on the set of technologies that are showing promise in delivering a high degree of competitive advantage over the next five to ten years.

Information Age spoke to Mike Walker to gain a further insight into these three technologies, and their future business applications.

 

 

Smart machine technologies will be the most disruptive class of technologies over the next 10 years, including smart robots, autonomous cars and smart workspaces

 

 

 


From DSC:
The articles below demonstrate why the need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!



 

Ethics-Robots-NYTimes-July2016

What Ethics Should Guide the Use of Robots in Policing? — from nytimes.com

 

 

11 Police Robots Patrolling Around the World — from wired.com

 

 

Police use of robot to kill Dallas shooting suspect is new, but not without precursors — from techcrunch.com

 

 

What skills will human workers need when robots take over? A new algorithm would let the machines decide — from qz.com

 

 

The impact on jobs | Automation and anxiety | Will smarter machines cause mass unemployment? — from economist.com

 

 

 

 

VRTO Spearheads Code of Ethics on Human Augmentation — from vrfocus.com
A code of ethics is being developed for both VR and AR industries.

 

 

 

Google and Microsoft Want Every Company to Scrutinize You with AI — from technologyreview.com by Tom Simonite
The tech giants are eager to rent out their AI breakthroughs to other companies.

 

 

U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities — from pewinternet.org by Cary Funk, Brian Kennedy and Elizabeth Podrebarac Sciupac
Americans are more worried than enthusiastic about using gene editing, brain chip implants and synthetic blood to change human capabilities

 

 

Human Enhancement — from pewinternet.org by David Masci
The Scientific and Ethical Dimensions of Striving for Perfection

 

 

Robolliance focuses on autonomous robotics for security and surveillance — from robohub.org by Kassie Perlongo

 

 

Company Unveils Plans to Grow War Drones from Chemicals — from interestingengineering.com

 

 

The Army’s Self-Driving Trucks Hit the Highway to Prepare for Battle — from wired.com

 

 

Russian robots will soon replace human soldiers — from interestingengineering.com

 

 

Unmanned combat robots beginning to appear — from therobotreport.com

 

 

Law-abiding robots? What should the legal status of robots be? — from robohub.org by Anders Sandberg

Excerpt:

News media are reporting that the EU is considering turning robots into electronic persons with rights, and apparently industry spokespeople are concerned that Brussels’ overzealousness could hinder innovation.

The report is far more sedate. It is a draft report, not a bill, with a mixed bag of recommendations to the Commission on Civil Law Rules on Robotics in the European Parliament. It will be years before anything is decided.

Nevertheless, it is interesting reading when considering how society should adapt to increasingly capable autonomous machines: what should the legal and moral status of robots be? How do we distribute responsibility?

A remarkable opening
The report begins its general principles with an eyebrow-raising paragraph:

whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;

It is remarkable because first it alludes to self-aware robots, presumably moral agents – a pretty extreme and currently distant possibility – then brings up Isaac Asimov’s famous but fictional laws of robotics and makes a simultaneously insightful and wrong-headed claim.

 

 

Robots are getting a sense of self-doubt — from popsci.com by Dave Gershgorn
Introspection is the key to growth

Excerpt:

That murmur is self-doubt, and its presence helps keep us alive. But robots don’t have this instinct—just look at the DARPA Robotics Challenge. But for robots and drones to exist in the real world, they need to realize their limits. We can’t have a robot flailing around in the darkness, or trying to bust through walls. In a new paper, researchers at Carnegie Mellon are working on giving robots introspection, or a sense of self-doubt. By predicting the likelihood of their own failure through artificial intelligence, robots could become a lot more thoughtful, and safer as well.
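
One way to picture the idea: before acting, the robot consults a learned estimate of its own failure probability and defers to a human when that estimate is too high. In the sketch below, the hand-written predictor is a hypothetical stand-in for the model the Carnegie Mellon researchers train from the robot’s own experience:

```python
# The predictor here is a hypothetical hand-written stand-in for a model
# learned from the robot's own past successes and failures.
def predicted_failure_prob(sensor_quality, obstacle_density):
    return min(1.0, 0.8 * obstacle_density + 0.5 * (1.0 - sensor_quality))

def plan_step(sensor_quality, obstacle_density, threshold=0.3):
    p_fail = predicted_failure_prob(sensor_quality, obstacle_density)
    if p_fail > threshold:
        return f"defer to a human (predicted failure {p_fail:.0%})"
    return f"proceed (predicted failure {p_fail:.0%})"

print(plan_step(sensor_quality=0.9, obstacle_density=0.1))  # proceed
print(plan_step(sensor_quality=0.4, obstacle_density=0.6))  # defer
```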

 

 

Scientists Create Successful Biohybrid Being Using 3-D Printing and Genetic Engineering — from inc.com by Lisa Calhoun
Scientists genetically engineered and 3-D-printed a biohybrid being, opening the door further for lifelike robots and artificial intelligence

Excerpt:

If you met this lab-created critter over your beach vacation, you’d swear you saw a baby ray. In fact, the tiny, flexible swimmer is the product of a team of diverse scientists. They have built the most successful artificial animal yet. This disruptive technology opens the door much wider for lifelike robots and artificial intelligence.

From DSC:
I don’t think I’d use the term disruptive here — though that may turn out to be the case. The word disruptive doesn’t come close to carrying/relaying the weight and seriousness of this kind of activity; nor does it point out where this kind of thing could lead.

 

 

Pokemon Go’s digital popularity is also warping real life — from finance.yahoo.com by Ryan Nakashima and David Hamilton

Excerpt (emphasis DSC):

Todd Richmond, a director at the Institute for Creative Technologies at the University of Southern California, says a big debate is brewing over who controls digital assets associated with real world property.

“This is the problem with technology adoption — we don’t have time to slowly dip our toe in the water,” he says. “Tenants have had no say, no input, and now they’re part of it.”

 

From DSC:
I greatly appreciate what Pokémon Go has been able to achieve and although I haven’t played it, I think it’s great (great for AR, great for people’s health, great for the future of play, etc.)! So there are many positives to it. But the highlighted portion above is not something we want to find ourselves saying about artificial intelligence, cognitive computing, some types of genetic engineering, corporations tracking/using your personal medical information or data, the development of biased algorithms, etc.

 

 

Right now, artificial intelligence is the only thing that matters: Look around you — from forbes.com by Enrique Dans

Excerpts:

If there’s one thing the world’s most valuable companies agree on, it’s that their future success hinges on artificial intelligence.

In short, CEO Sundar Pichai wants to put artificial intelligence everywhere, and Google is marshaling its army of programmers into the task of remaking itself as a machine learning company from top to bottom.

Microsoft won’t be left behind this time. In a great interview a few days ago, its CEO, Satya Nadella, says he intends to overtake Google in the machine learning race, arguing that the company’s future depends on it, and outlining a vision in which human and machine intelligence work together to solve humanity’s problems. In other words, real value is created when robots work for people, not when they replace them.

And Facebook? The vision of its founder, Mark Zuckerberg, of the company’s future, is one in which artificial intelligence is all around us, carrying out or helping to carry out just about any task you can think of…

 

The links I have included in this column have been carefully chosen as recommended reading to support my firm conviction that machine learning and artificial intelligence are the keys to just about every aspect of life in the very near future: every sector, every business.

 

 

 

10 jobs that A.I. and chatbots are poised to eventually replace — from venturebeat.com by Felicia Schneiderhan

Excerpt:

If you’re a web designer, you’ve been warned.

Now there is an A.I. that can do your job. Customers can direct exactly how their new website should look. Fancy something more colorful? You got it. Less quirky and more professional? Done. This A.I. is still in a limited beta but it is coming. It’s called The Grid and it came out of nowhere. It makes you feel like you are interacting with a human counterpart. And it works.

Artificial intelligence has arrived. Time to sharpen up those resumes.

 

 

Augmented Humans: Next Great Frontier, or Battleground? — from nextgov.com by John Breeden

Excerpt:

It seems like, in general, technology always races ahead of the moral implications of using it. This seems to be true of everything from atomic power to sequencing genomes. Scientists often create something because they can, because there is a perceived need for it, or even by accident as a result of research. Only then does the public catch up and start to form an opinion on the issue.

Which brings us to the science of augmenting humans with technology, a process that has so far escaped the public scrutiny and opposition found with other radical sciences. Scientists are not taking any chances, with several yearly conferences already in place as a forum for scientists, futurists and others to discuss the process of human augmentation and the moral implications of the new science.

That said, it seems like those who would normally oppose something like this have remained largely silent.

 

 

Google Created Its Own Laws of Robotics — from fastcodesign.com by John Brownlee
Building robots that don’t harm humans is an incredibly complex challenge. Here are the rules guiding design at Google.

 

 

Google identifies five problems with artificial intelligence safety — from which-50.com

 

 

DARPA is giving $2 million to the person who creates an AI hacker — from futurism.com

 

 

 

rollsroyce-july2016

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part VI in a series of such postings that illustrate how quickly things are moving (Part I, Part II, Part III, Part IV, Part V), and it asks:

  • How do we collectively start talking about the future that we want?
  • How do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved in these discussions? Shouldn’t each one of us participate in some way, shape, or form?

 

 

AIsWhiteGuyProblem-NYTimes-June2016

 

Artificial Intelligence’s White Guy Problem — from nytimes.com by Kate Crawford

Excerpt:

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

 

 

Facebook is using artificial intelligence to categorize everything you write — from futurism.com

Excerpt:

Facebook has just revealed DeepText, a deep learning AI that will analyze everything you post or type and bring you closer to relevant content or Facebook services.
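
DeepText itself is a deep-learning system trained at Facebook scale, but the classify-then-route idea it was demonstrated on (e.g., spotting “I need a ride” in a message so the app can surface a ride service) can be sketched with a tiny linear model; the posts and intent labels below are invented:

```python
# Invented posts and intent labels; DeepText is a far larger deep model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["i need a ride", "can someone give me a lift",
         "selling my old bike", "anyone want to buy a couch",
         "what a great game last night", "happy birthday!"]
intents = ["ride", "ride", "sell", "sell", "chatter", "chatter"]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(posts), intents)

new_post = "need a lift to the airport"
intent = classifier.predict(vectorizer.transform([new_post]))[0]
print(intent)  # expected: 'ride', so the app could surface a ride service
```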

 

 

March of the machines — from economist.com
What history tells us about the future of artificial intelligence—and how society should respond

Excerpt:

EXPERTS warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.

 

As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.

 

 

Backlash-Data-DefendantsFutures-June2016

 

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures — from nytimes.com by Mitch Smith

Excerpt:

CHICAGO — When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

 

 

Google Tackles Challenge of How to Build an Honest Robot — from bloomberg.com

Excerpt:

Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.

The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.

But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.

 

 

Policy paper | Data Science Ethical Framework — from gov.uk
From: Cabinet Office, Government Digital Service and The Rt Hon Matt Hancock MP
First published: 19 May 2016
Part of: Government transparency and accountability

This framework is intended to give civil servants guidance on conducting data science projects, and the confidence to innovate with data.

Detail: Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking. We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with. The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog.

 

 

 

WhatsNextForAI-June2016

Excerpt (emphasis DSC):

We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.

The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws.

 

 

DARPA to Build “Virtual Data Scientist” Assistants Through A.I. — from inverse.com by William Hoffman
A.I. will make up for the lack of data scientists.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), which aims to help non-experts bridge what it calls the “data-science expertise gap” by allowing artificial assistants to help people with machine learning. DARPA calls it a “virtual data scientist” assistant.

This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.
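
Part of what such a “virtual data scientist” would automate is the routine of trying several candidate models and keeping whichever cross-validates best. D3M’s actual pipeline search is far richer, but the core loop looks something like this sketch:

```python
# A sketch of automated model selection on a stock dataset; D3M's real
# pipeline search covers featurization, models, and tuning together.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "k-nearest neighbours": KNeighborsClassifier(),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"selected model: {best} ({scores[best]:.1%} cross-validated accuracy)")
```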

 

 

Robot that chooses to inflict pain sparks debate about AI systems — from interestingengineering.com by Maverick Baker

Excerpt:

A robot built by roboticist Alexander Reben of the University of California, Berkeley has the ability to decide, using AI, whether or not to inflict pain.

The robot aims to spark a debate about whether an AI system can get out of control, reminiscent of the Terminator. Its design is incredibly simple, serving only one purpose: to decide whether or not to inflict pain. The work was published in a scientific journal and is intended to spark a debate on whether artificially intelligent robots can get out of hand if given the opportunity.

 

 

The NSA wants to spy on the Internet of Things. Everything from thermostats to pacemakers could be mined for intelligence data. — from engadget.com by Andrew Dalton

Excerpt:

We already know the National Security Agency is all up in our data, but the agency is reportedly looking into how it can gather even more foreign intelligence information from internet-connected devices ranging from thermostats to pacemakers. Speaking at a military technology conference in Washington D.C. on Friday, NSA deputy director Richard Ledgett said the agency is “looking at it sort of theoretically from a research point of view right now.” The Intercept reports Ledgett was quick to point out that there are easier ways to keep track of terrorists and spies than to tap into any medical devices they might have, but did confirm that it was an area of interest.

 

 

The latest tool in the NSA’s toolbox? The Internet of Things — from digitaltrends.com by Lulu Chang

Excerpt:

You may love being able to set your thermostat from your car miles before you reach your house, but be warned — the NSA probably loves it too. On Friday, the National Security Agency — you know, the federal organization known for wiretapping and listening in on U.S. citizens’ conversations — told an audience at Washington’s Newseum that it’s looking into using the Internet of Things and other connected devices to keep tabs on individuals.

 


Addendum on 7/1/16:

  • Humans are willing to trust chatbots with some of their most sensitive information — from businessinsider.com by Sam Shead
    Excerpt:
    A study has found that people are inclined to trust chatbots with sensitive information and that they are open to receiving advice from these AI services. The “Humanity in the Machine” report — published by media agency Mindshare UK on Thursday — urges brands to engage with customers through chatbots, which can be defined as artificial intelligence programmes that conduct conversations with humans through chat interfaces.

 

 

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regards to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep their fellow human beings in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow man right out the window…with massive repercussions to follow. After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits. Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of
Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, the company gained access to confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay chatbot and Google’s self-driving Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement them, or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are many industries in which robots will likely replace workers in the coming years, and with retail being one of the biggest industries in the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores and will also be able to direct customers to whatever they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and the robot will drive to the product’s exact location. It will also conduct inventory using RFID scanners; overall, it will make the retail process much more efficient. Check out the video below from the German robotics company MetraLabs, which is behind the retail robot.
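
From DSC: At their core, the two tasks described above come down to a lookup and a tally. Here is a minimal, hypothetical sketch of both; the product names, shelf locations, and RFID tag format are invented for illustration and are not Tory’s actual software.

```python
PLANOGRAM = {
    "olive oil": (12, 3),   # product -> (aisle, shelf); invented example data
    "batteries": (4, 1),
}

def locate(product):
    """Customer types a product name; return its aisle/shelf if known."""
    return PLANOGRAM.get(product.lower().strip())

def take_inventory(rfid_reads):
    """Tally RFID tag reads per SKU as the robot drives the aisles."""
    counts = {}
    for tag in rfid_reads:
        sku = tag.split(":")[0]  # assume tags look like "SKU1:serialnumber"
        counts[sku] = counts.get(sku, 0) + 1
    return counts

print(locate("Olive Oil"))                             # -> (12, 3)
print(take_inventory(["SKU1:a", "SKU1:b", "SKU2:c"]))  # -> {'SKU1': 2, 'SKU2': 1}
```

The interesting engineering is everything around a sketch like this: navigation, tag-read accuracy, and keeping the planogram current as shelves change.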

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots. I can understand using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 

 

 

 

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks is only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system, though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, where many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at Go and various other games. Indeed, its AI algorithms were developed for something far greater: to “solve intelligence” by creating general-purpose AI that can be used for a host of applications and, in essence, learn on its own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.” (A toy sketch of what such a button means in code appears after this list.)
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.
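
From DSC: For readers wondering what a “big red button” amounts to in software, here is a minimal, hypothetical sketch of an agent loop that honors an operator’s interrupt flag. The class name, the toy action loop, and the flag are all illustrative assumptions, not code from the DeepMind/FHI paper.

```python
import random

class ToyAgent:
    """Illustrative agent that acts until an operator presses the big red button."""

    def __init__(self):
        self.interrupted = False  # the operator's kill-switch flag
        self.position = 0

    def press_big_red_button(self):
        # Operator override: request a halt regardless of the agent's own goal.
        self.interrupted = True

    def run(self, max_steps=100):
        for step in range(max_steps):
            if self.interrupted:
                print(f"Interrupted safely at step {step}.")
                break
            self.position += random.choice([-1, 1])  # take some arbitrary action
        return self.position

agent = ToyAgent()
agent.press_big_red_button()  # the operator halts the agent
agent.run()
```

The hard part, and the point of the paper, is that a learning agent may discover that interruptions cost it reward and learn to avoid or disable the button; “safe interruptibility” means designing the learning rule so that it does not.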

 

 

 

 

 

 

IBM Watson takes on cybercrime with new cloud-based cybersecurity technology — from techrepublic.com by Conner Forrest
Eight universities have begun a year-long initiative to train IBM Watson for work in cybersecurity. Will the Jeopardy champ soon police the internet?

IBM-Watson-Cbersecurity-May2016

Excerpt:

On Tuesday, IBM announced that Watson, its cognitive computing system (and former Jeopardy champion), will be spending the next year training for a new job—fighting cybercrime.

Watson for Cyber Security is a cloud-based version of IBM’s cognitive computing tools that will be the result of a one-year-long research project that is starting in the fall. Students and faculty from eight universities will participate in the research and train Watson to better understand how to detect potential threats.

 

 


We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish


From DSC:
This posting represents Part IV in a series of postings that illustrate how quickly things are moving (Part I, Part II, Part III) and that ask:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved…but who else should be involved?

 

The biggest mystery in AI right now is the ethics board that Google set up after buying DeepMind — from businessinsider.com by Sam Shead

Excerpt (emphasis DSC):

Google’s artificial intelligence (AI) ethics board, established when Google acquired London AI startup DeepMind in 2014, remains one of the biggest mysteries in tech, with both Google and DeepMind refusing to reveal who sits on it.

Google set up the board at DeepMind’s request after the cofounders of the £400 million research-intensive AI lab said they would only agree to the acquisition if Google promised to look into the ethics of the technology it was buying into.

A number of AI experts told Business Insider that it’s important to have an open debate about the ethics of AI given the potential impact it’s going to have on all of our lives.

 

 

 

Algorithms may save us from information overload, but are they the curators we want? — from newstatesman.com by Barbara Speed
Instagram is joining the legions of social networks which use algorithms to dictate what we see, and when we see it.

Excerpt:

We’ve entered the age of the algorithm.

In a way, it was inevitable: thanks to the rise of smartphones and social media, we’re surrounded by vast, unfiltered streams of information, dripped to us via “feeds” on sites like Facebook and Twitter. As a result, we needed something to tame all that information, because an unfiltered stream is about as useful as no information at all. So we turned to a type of algorithm which could help separate the signal from the noise: basically, a set of steps which would calculate which information should be prioritised, and which should be hidden.

It’s impossible to say that algorithms are “good” or “bad”, just as humanity isn’t overridingly either. Algorithms are designed by humans, and therefore carry forward whatever prejudice or bias they’re programmed to perform.
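
From DSC: To make that “set of steps” concrete, here is a minimal, hypothetical sketch of the kind of scoring an algorithmic feed might apply. The fields, weights, and formula are invented for illustration; no social network publishes its actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # 0..1: how often the viewer interacts with this author
    likes: int              # raw engagement signal
    hours_old: float        # age of the post

def score(post: Post) -> float:
    """Toy relevance score: engagement and affinity, decayed by age."""
    engagement = post.likes ** 0.5          # diminishing returns on popularity
    recency = 1.0 / (1.0 + post.hours_old)  # newer posts rank higher
    return (1.0 + 3.0 * post.author_affinity) * engagement * recency

def rank_feed(posts):
    # Decide which information is prioritised and which is effectively hidden:
    # in practice, whatever sorts below the fold is invisible.
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([Post(0.9, 10, 2.0), Post(0.1, 500, 1.0), Post(0.5, 50, 30.0)])
```

Even this toy version makes the article’s closing point visible: whoever chooses the weights decides what gets seen, so any bias in those choices is carried straight into the feed.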

 

 

 

Internet of Things to be used as spy tool by governments: US intel chief  — from arstechnica.com by David Kravets
Clapper says spy agencies “might” use IoT for surveillance, location tracking.

Excerpt:

James Clapper, the US director of national intelligence, told lawmakers Tuesday that governments across the globe are likely to employ the Internet of Things as a spy tool, which will add to global instability already being caused by infectious disease, hunger, climate change, and artificial intelligence.

Clapper addressed two different committees on Tuesday—the Senate Armed Services Committee and the Senate Select Committee on Intelligence—and for the first time suggested that the Internet of Things could be weaponized by governments. He did not name any countries or agencies in regard to the IoT, but a recent Harvard study suggested US authorities could harvest the IoT for spying purposes.

 

 

 

“GoPro” Anthropology — paying THEM to learn from US? — from Jason Ohler’s Big Ideas Series

Excerpt (emphasis DSC):

What’s the big idea?
Consumer research and individual learning assessment techniques will merge, using wearable technology that observes and records life from the wearer’s point of view. The recording technology will be invisible to the consumer and student, as well as to the public. Video feeds will be beamed to analysts in real time. Recordings will be analyzed and extrapolated by powerful big-data-driven analytics. For both consumers and students, research will be conducted for the same purpose: to provide highly individualized approaches to learning and sales. Mass customized learning and consumerism will take a huge step forward. So will being embedded in the surveillance culture.

Why would we submit to this? Because we are paid to? Perhaps.  But we may well pay them to watch us, to tell us about ourselves, to help us and our children learn better and faster in a high stakes testing culture, and to help us make smarter choices as consumers. Call it “keeping up with data-enhanced neighbors.” Numerous issues of privacy and security will be weighed against personal opportunity, as learners, consumers and citizens.

 

 

 

10 promising technologies assisting the future of medicine and healthcare — by Bertalan Meskó, MD, PhD

Excerpt (emphasis DSC):

Technology will not solve the problems that healthcare faces globally today. And the human touch alone is not enough any more; therefore, a new balance is needed between using disruptive innovations and keeping the human interaction between patients and caregivers. Here are 10 technologies and trends that could enable this.

I see enormous technological changes heading our way. If they hit us unprepared, as we are now, they will wash away the medical system we know and leave a purely technology-based service without personal interaction. Such a complicated system should not be washed away. Rather, it should be consciously and purposefully redesigned piece by piece. If we are unprepared for the future, we will lose this opportunity. I think we still have time, and it is still possible.

The advances of technology do not have to mean the end of the human touch; instead, they can mark the beginning of a new era in which both are crucial.

 

 

 

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1 — from rollingstone.com by Jeff Goodell
We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species

Inside the Artificial Intelligence Revolution: A Special Report, Pt. 2 — from rollingstone.com by Jeff Goodell
Self-driving cars, war outsourced to robots, surgery by autonomous machines – this is only the beginning

 

 

Laser weapons ready for use today, Lockheed executives say — from defensenews.com by Aaron Mehta
The time has finally come when those weapons are capable of being fielded, according to a trio of Lockheed Martin executives who work on the development of the company’s laser arsenal.

 

 

 

Delivery Robot – Fresh Pizza With DRU From Domino’s — from wtvox.com

From DSC:
How many jobs will be displaced here? How many college students — amongst many others — are going to be impacted as they try to make their way through (and pay for) college? But don’t assume that it’s just lower-level jobs that will be done away with…for example, see the next entry re: the legal profession.

 

 

New Report Predicts Over 100,000 Legal Jobs Will Be Lost To Automation — from futurism.com
An extensive new analysis by Deloitte estimates that over 100,000 legal jobs will be lost to technological automation within the next two decades. Technological advances are increasingly replacing menial office roles and repetitive tasks.

Excerpt:

A new analysis from Deloitte Insight states that within the next two decades an estimated 114,000 jobs in the legal sector have a high chance of being replaced by automated machines and algorithms. The report predicts “profound reforms” across the legal profession, with those 114,000 jobs representing over 39% of jobs in the legal sector.

These radical changes are spurred by the rapid pace of technological progress and the need to offer clients more value for their money. Automation and the increasing rise of millennials in the legal workplace also alter the nature of talent needed by law firms in the future.

 

 

 

Raffaello D’Andrea: Meet the dazzling flying machines of the future — from ted.com

Description:

When you hear the word “drone,” you probably think of something either very useful or very scary. But could they have aesthetic value? Autonomous systems expert Raffaello D’Andrea develops flying machines, and his latest projects are pushing the boundaries of autonomous flight — from a flying wing that can hover and recover from disturbance to an eight-propeller craft that’s ambivalent to orientation … to a swarm of tiny coordinated micro-quadcopters.

 

 

 

Addendum on 4/4/16:

The Scarlett Johansson Bot is the robotic future of objectifying women — from wired.com by April Glaser (From DSC: I’m not advocating this objectification of women *at all*; rather, I post this addendum here because this is the kind of thing that we need to be aware of and talking about, or the future won’t be a dream…it will be a nightmare)

Excerpt:

The question, however, is one of precedent. If a man can’t earn the attention of the woman he longs for, is it plausible for that man to build a robot that looks exactly like his love interest instead? Is there any legal recourse to prevent someone from building a ScarJo bot, or Beyonce bot, or a bot of you? Sure, people make doll and wax replicas of famous people all the time. But the difference here is that Mark 1 moves, smiles, and winks.

 

 

How top liberal arts colleges prepare students for successful lives of leadership and service — from educationdive.com by John I. Williams, Jr.

Excerpt (emphasis DSC):

This year’s World Economic Forum (WEF) in Davos, Switzerland, discussed the top ten skills that will be needed for careers in 2020:

  1. Complex problem solving
  2. Critical thinking
  3. Creativity
  4. People management
  5. Coordinating with others
  6. Emotional intelligence
  7. Judgment and decision making
  8. Service orientation
  9. Negotiation
  10. Cognitive flexibility

The list is remarkable, both for what it includes and for what it doesn’t, and for the fact that it is as timeless as it is forward-looking. For our purposes, it serves as a useful gauge of the value of the education our students receive at highly selective liberal arts colleges.

As I reflect upon the list, I realize graduates of top liberal arts colleges will smile as they read it, reminded that their education focuses on skills that will be valuable across a lifetime.

 

Going forward, college graduates may work for nine or more organizations over the course of their careers.

 

Yet, for all this techno-wizardry, the critical skills on WEF’s list for careers in 2020 closely resemble those that have defined the leaders who have emerged from top liberal arts colleges for decades.

 

At the same time, top liberal arts colleges have always been committed to preparing students for more than just career success, including contributions to society more broadly. These colleges have always focused not only on the development of students’ intellect but on their character as well.

 

 

 