The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”


Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.


Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human-level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)


Code-Dependent: Pros and Cons of the Algorithm Age — from pewinternet.org by Lee Rainie and Janna Anderson
Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms. Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.
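The excerpt’s point that even sorting a spreadsheet column invokes an algorithm can be made concrete with a few lines of code. This is only an illustrative sketch; the rows and column name below are made up:

```python
# A spreadsheet-style "sort a column" is just an algorithm:
# a finite, unambiguous sequence of steps applied to data.
rows = [
    {"name": "Ada", "score": 91},
    {"name": "Grace", "score": 97},
    {"name": "Alan", "score": 88},
]

# Sort the rows by the "score" column, highest first.
by_score = sorted(rows, key=lambda r: r["score"], reverse=True)

for r in by_score:
    print(r["name"], r["score"])
```

The same recipe underlies everything the excerpt lists, from search ranking to photo sorting; only the data and the steps differ in scale.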

Algorithms are often elegant and incredibly useful tools used to accomplish tasks. They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences. Recent news items tie to these concerns…

 

The use of algorithms is spreading as massive amounts of data are being created, captured and analyzed by businesses and governments. Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace.


A world without work — by Derek Thompson; The Atlantic — from July 2015

Excerpts:

Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.

For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.

Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.

“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed”…

“The cultural breakdown matters even more than the economic breakdown.”

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen.

What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

The paradox of work is that many people hate their jobs, but they are considerably more miserable doing nothing.

Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both”…

Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury.

Most people do need to achieve things through, yes, work to feel a lasting sense of purpose.

When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit.

What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.

“I can’t stress this enough: this isn’t just about economics; it’s psychological”…


From DSC:
Though I’m not saying Thompson is necessarily asserting this in his article, I don’t see a world without work as a dream. In fact, as the quote immediately before this paragraph alludes to, I think that most people would not like a life that is devoid of all work. I think work is where we can serve others, find purpose and meaning for our lives, seek to be instruments of making the world a better place, and attempt to design/create something that’s excellent.  We may miss the mark often (I know I do), but we keep trying.


A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

 

 


Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

 

 

Artificial Intelligence Ethics, Jobs & Trust – UK Government Sets Out AI future — from cbronline.com by Ellie Burns

Excerpt:

The UK government is driving the artificial intelligence agenda, pinpointing it as a future technology driving the fourth industrial revolution and billing its importance as on par with the steam engine.

The report on Artificial Intelligence by the Government Office for Science follows the recent House of Commons Committee report on Robotics and AI, setting out the opportunities and implications for the future of decision making. In a report which spans government deployment, ethics and the labour market, Digital Minister Matt Hancock provided a foreword which pushed AI as a technology which would benefit the economy and UK citizens.


MIT’s “Moral Machine” Lets You Decide Who Lives & Dies in Self-Driving Car Crashes — from futurism.com

In brief:

  • MIT’S 13-point exercise lets users weigh the life-and-death decisions that self-driving cars could face in the future.
  • Projects like the “Moral Machine” give engineers insight into how they should code complex decision-making capabilities into AI.

 

 

Wearable Tech Weaves Its Way Into Learning — from edsurge.com by Marguerite McNeal

Excerpt:

“Ethics often falls behind the technology,” says Voithofer of Ohio State. Personal data becomes more abstract when it’s combined with other datasets or reused for multiple purposes, he adds. Say a device collects and anonymizes data about a student’s emotional patterns. Later on that information might be combined with information about her test scores and could be reassociated with her. Some students might object to colleges making judgments about their academic performance from indirect measurements of their emotional states.

 

 

New era of ‘cut and paste’ humans close as man injected with genetically-edited blood – from telegraph.co.uk by Sarah Knapton

Excerpt:

A world where DNA can be rewritten to fix deadly diseases has moved a step closer after scientists announced they had genetically-edited the cells of a human for the first time using a groundbreaking technique.

A man in China was injected with modified immune cells which had been engineered to fight his lung cancer. Larger trials are scheduled to take place next year in the US and Beijing, which scientists say could open up a new era of genetic medicine.

The technique used is called Crispr, which works like tiny molecular scissors snipping away genetic code and replacing it with new instructions to build better cells.
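The “molecular scissors” analogy can be sketched as a toy string edit. To be clear, this is an illustration of the find-and-replace idea only, not of how Cas9 actually works biochemically, and the sequences below are made up:

```python
# Toy model of the "snip and replace" idea: locate a target
# sequence in a DNA string and swap in a corrected one.
# (Illustration only -- real CRISPR/Cas9 involves guide RNA,
# double-strand breaks, and cellular repair machinery.)

def toy_edit(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target` with `replacement`."""
    index = genome.find(target)
    if index == -1:
        return genome  # target not found; nothing to cut
    return genome[:index] + replacement + genome[index + len(target):]

genome = "ATGGCCTTAGCAGGT"
edited = toy_edit(genome, "TTAGCA", "TTGGCA")
print(edited)  # ATGGCCTTGGCAGGT
```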


Troubling Study Says Artificial Intelligence Can Predict Who Will Be Criminals Based on Facial Features — from theintercept.com by Sam Biddle


Artificial intelligence is quickly becoming as biased as we are — from thenextweb.com by Bryan Clark


A bug in the matrix: virtual reality will change our lives. But will it also harm us? — from theguardian.stfi.re
Prejudice, harassment and hate speech have crept from the real world into the digital realm. For virtual reality to succeed, it will have to tackle this from the start

Excerpt:

Can you be sexually assaulted in virtual reality? And can anything be done to prevent it? Those are a few of the most pressing ethical questions technologists, investors and we the public will face as VR grows.


Light Bulbs Flash “SOS” in Scary Internet of Things Attack — from fortune.com by Jeff John Roberts


How Big Data Transformed Applying to College — from slate.com by Cathy O’Neil
It’s made it tougher, crueler, and ever more expensive.

 

 

Not OK, Google — from techcrunch.com by Natasha Lomas

Excerpts (emphasis DSC):

The scope of Alphabet’s ambition for the Google brand is clear: It wants Google’s information organizing brain to be embedded right at the domestic center — i.e. where it’s all but impossible for consumers not to feed it with a steady stream of highly personal data. (Sure, there’s a mute button on the Google Home, but the fact you have to push a button to shut off the ear speaks volumes… )

In other words, your daily business is Google’s business.

“We’re moving from a mobile-first world to an AI-first world,” said CEO Sundar Pichai…

But what’s really not OK, Google is the seismic privacy trade-offs involved here. And the way in which Alphabet works to skate over the surface of these concerns.

 

What he does not say is far more interesting, i.e. that in order to offer its promise of “custom convenience” — with predictions about restaurants you might like to eat at, say, or suggestions for how bad the traffic might be on your commute to work — it is continuously harvesting and data-mining your personal information, preferences, predilections, peccadilloes, prejudices…  and so on and on and on. AI never stops needing data. Not where fickle humans are concerned. 

 

 

Welcome to a world without work
Automation and globalisation are combining to generate a world with a surfeit of labour and too little work

Excerpt:

A new age is dawning. Whether it is a wonderful one or a terrible one remains to be seen. Look around and the signs of dizzying technological progress are difficult to miss. Driverless cars and drones, not long ago the stuff of science fiction, are now oddities that can occasionally be spotted in the wild and which will soon be a commonplace in cities around the world.

 

From DSC:
I don’t see a world without work being good for us in the least. I think we humans need to feel that we are contributing to something. We need a purpose for living out our days here on Earth (even though they are but a vapor). We need vision…goals to work towards as we seek to use the gifts, abilities, passions, and interests that the LORD gave to us. The author of the above article would also add that work:

  • Is a source of personal identity
  • Helps give structure to our days and our lives
  • Offers the possibility of personal fulfillment that comes from being of use to others
  • Is a critical part of the glue that holds society together and smooths its operation

 

Over the last generation, work has become ever less effective at performing these roles. That, in turn, has placed pressure on government services and budgets, contributing to a more poisonous and less generous politics. Meanwhile, the march of technological progress continues, adding to the strain.

 

 

10 breakthrough technologies for 2016 — from technologyreview.com

Excerpts:

Immune Engineering
Genetically engineered immune cells are saving the lives of cancer patients. That may be just the start.

Precise Gene Editing in Plants
CRISPR offers an easy, exact way to alter genes to create traits such as disease resistance and drought tolerance.

Conversational Interfaces
Powerful speech technology from China’s leading Internet company makes it much easier to use a smartphone.

Reusable Rockets
Rockets typically are destroyed on their maiden voyage. But now they can make an upright landing and be refueled for another trip, setting the stage for a new era in spaceflight.

Robots That Teach Each Other
What if robots could figure out more things on their own and share that knowledge among themselves?

DNA App Store
An online store for information about your genes will make it cheap and easy to learn more about your health risks and predispositions.

SolarCity’s Gigafactory
A $750 million solar facility in Buffalo will produce a gigawatt of high-efficiency solar panels per year and make the technology far more attractive to homeowners.

Slack
A service built for the era of mobile phones and short text messages is changing the workplace.

Tesla Autopilot
The electric-vehicle maker sent its cars a software update that suddenly made autonomous driving a reality.

Power from the Air
Internet devices powered by Wi-Fi and other telecommunications signals will make small computers and sensors more pervasive

 

 

The 4 big ethical questions of the Fourth Industrial Revolution — from 3tags.org by the World Economic Forum

Excerpts:

We live in an age of transformative scientific powers, capable of changing the very nature of the human species and radically remaking the planet itself.

Advances in information technologies and artificial intelligence are combining with advances in the biological sciences; including genetics, reproductive technologies, neuroscience, synthetic biology; as well as advances in the physical sciences to create breathtaking synergies — now recognized as the Fourth Industrial Revolution.

Since these technologies will ultimately decide so much of our future, it is deeply irresponsible not to consider together whether and how to deploy them. Thankfully there is growing global recognition of the need for governance.

 

 

Scientists create live animals from artificial eggs in ‘remarkable’ breakthrough — from telegraph.co.uk by Sarah Knapton


Robot babies from Japan raise questions about how parents bond with AI — from singularityhub.com by Mark Robert Anderson

Excerpt:

This then leads to the ethical implications of using robots. Embracing a number of areas of research, robot ethics considers whether the use of a device within a particular field is acceptable and also whether the device itself is behaving ethically. When it comes to robot babies there are already a number of issues that are apparent. Should “parents” be allowed to choose the features of their robot, for example? How might parents be counseled when returning their robot baby? And will that baby be used again in the same form?


Amazon’s Vision of the Future Involves Cops Commanding Tiny Drone ‘Assistants’ — from gizmodo.com by Hudson Hongo


DARPA’s Autonomous Ship Is Patrolling the Seas with a Parasailing Radar — from technologyreview.com by Jamie Condliffe
Forget self-driving cars—this is the robotic technology that the military wants to use.


China’s policing robot: Cattle prod meets supercomputer — from computerworld.com by Patrick Thibodeau
China’s fastest supercomputers have some clear goals, namely development of its artificial intelligence, robotics industries and military capability, says the U.S.

 

 

Report examines China’s expansion into unmanned industrial, service, and military robotics systems


Augmented Reality Glasses Are Coming To The Battlefield — from popsci.com by Andrew Rosenblum
Marines will control a head-up display with a gun-mounted mouse

 

 

———-

Addendum on 12/2/16:

Regulation of the Internet of Things — from schneier.com by Bruce Schneier

Excerpt:

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam —­ or thermostat, or refrigerator ­— with nice features at a good price. Even after they were recruited into this botnet, they still work fine ­— you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.


An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as areas that involve making policies and laws, and/or deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.


Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
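The Alice/Bob/Eve framing behind the study can be illustrated with a hand-written toy. To be clear, this is not the scheme the networks learned (the paper’s networks evolved their own, opaque method); it is only a sketch of the security goal: Alice and Bob share a key, Eve does not.

```python
import secrets

# Toy stand-in for the Alice/Bob/Eve setup: Alice and Bob share
# a random key; Eve sees only the ciphertext. (The Google Brain
# networks learned their own scheme -- this XOR pad is just a
# hand-written illustration of the security goal.)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"meet at noon"
key = secrets.token_bytes(len(plaintext))   # shared by Alice and Bob

ciphertext = xor_bytes(plaintext, key)      # Alice encrypts
recovered = xor_bytes(ciphertext, key)      # Bob decrypts

print(recovered == plaintext)  # True: Bob can read the message
# Eve, holding only `ciphertext` and no key, learns nothing:
# with a uniformly random key, every plaintext of the same
# length is equally consistent with what she observes.
```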

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

Partnership on AI (September 2016)

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

Preparing for the future of Artificial Intelligence
Executive Office of the President
National Science & Technology Council
Committee on Technology
October 2016


Excerpt:

As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.

The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government. The report was reviewed by the NSTC Committee on Technology, which concurred with its contents. The report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five public workshops co-hosted with universities and other associations that are referenced in this report.

In the coming years, AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.

 

 

 


 

From DSC:
The pace of technological development is moving extremely fast; the ethical, legal, and moral questions are trailing behind it (as is normally the case). But this exponential pace continues to bring some questions, concerns, and thoughts to my mind. For example:

  • What kind of future do we want? 
  • Just because we can, should we?
  • Who is going to be able to weigh in on the future direction of some of these developments?
  • If we follow the trajectories of some of these pathways, where will these trajectories take us? For example, if many people are out of work, how are they going to purchase the products and services that the robots are building?

These and other questions arise when you look at the articles below.

This is the 8th part of a series of postings regarding this matter.
The other postings are in the Ethics section.


 

Robot companions are coming into our homes – so how human should they be? — from theconversation.com

Excerpt:

What would your ideal robot be like? One that can change nappies and tell bedtime stories to your child? Perhaps you’d prefer a butler that can polish silver and mix the perfect cocktail? Or maybe you’d prefer a companion that just happened to be a robot? Certainly, some see robots as a hypothetical future replacement for human carers. But a question roboticists are asking is: how human should these future robot companions be?

A companion robot is one that is capable of providing useful assistance in a socially acceptable manner. This means that a robot companion’s first goal is to assist humans. Robot companions are mainly developed to help people with special needs such as older people, autistic children or the disabled. They usually aim to help in a specific environment: a house, a care home or a hospital.

 

 

 

The Next President Will Decide the Fate of Killer Robots—and the Future of War – from wired.com by Heather Roff and P.W. Singer

Excerpt:

The next president will have a range of issues on their plate, from how to deal with growing tensions with China and Russia, to an ongoing war against ISIS. But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka “killer robots.” The new president will literally have no choice: it is not just that the technology is rapidly advancing, but that there is a ticking time bomb buried in US policy on the issue.

 

 

Your new manager will be an algorithm — from stevebrownfuturist.com

Excerpt:

It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.

Algorithms for task management
Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of “temporal Tetris” with the parcels and packs them to optimize for space and for the planned delivery route: packages that are delivered first are towards the front, packages for the end of the route are placed at the back.
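The route-aware loading logic described above can be sketched in a few lines. This is a toy illustration, not UPS’s actual system; the stops, parcels, and data layout are invented for the example:

```python
# Toy sketch of route-aware truck loading: parcels for the first delivery
# stop should end up nearest the door, so they are loaded last (loading
# fills the truck from the back wall forward).

def load_order(parcels, route):
    """Return parcels in loading order: last-stop parcels load first."""
    stop_rank = {stop: i for i, stop in enumerate(route)}
    return sorted(parcels, key=lambda p: stop_rank[p["stop"]], reverse=True)

route = ["Elm St", "Oak Ave", "Main St"]          # planned delivery order
parcels = [{"id": 1, "stop": "Main St"},
           {"id": 2, "stop": "Elm St"},
           {"id": 3, "stop": "Oak Ave"}]

for p in load_order(parcels, route):
    print(p["id"], p["stop"])
# Parcel 1 (last stop) loads first, toward the back wall;
# parcel 2 (first stop) loads last, toward the front.
```

A real system would also solve the spatial packing ("Tetris") part; this sketch only captures the delivery-order constraint.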

 

 

Beware of biases in machine learning: One CTO explains why it happens — from enterprisersproject.com by Minda Zetlin

Excerpt:

The Enterprisers Project (TEP): Machines are genderless, have no race, and are in and of themselves free of bias. How does bias creep in?

Sharp: To understand how bias creeps in, you first need to understand the difference between programming in the traditional sense and machine learning. With programming in the traditional sense, a programmer analyzes a problem and comes up with an algorithm to solve it (basically an explicit sequence of rules and steps). The algorithm is then coded up, and the computer executes the programmer’s defined rules accordingly.

With machine learning, it’s a bit different. Programmers don’t solve a problem directly by analyzing it and coming up with their rules. Instead, they just give the computer access to an extensive real-world dataset related to the problem they want to solve. The computer then figures out how best to solve the problem by itself.
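The contrast Sharp draws can be made concrete with a toy example. Nothing below comes from the interview; the data, the hand-written rule, and the one-feature “learner” are invented purely to illustrate the two approaches:

```python
# Traditional programming: the programmer writes the rule explicitly.
def tall_enough_rule(height_cm):
    return height_cm >= 150  # threshold chosen by the programmer

# Machine learning (minimal sketch): the rule is inferred from labeled
# examples instead of being written by hand.
def learn_threshold(examples):
    """examples: list of (height_cm, label) pairs; returns a classifier."""
    positives = [h for h, label in examples if label]
    negatives = [h for h, label in examples if not label]
    # Place the decision boundary midway between the two groups --
    # a one-feature "model" learned entirely from the data.
    boundary = (min(positives) + max(negatives)) / 2
    return lambda h: h >= boundary

data = [(140, False), (145, False), (155, True), (160, True)]
model = learn_threshold(data)
print(model(150))  # True: the learned boundary is (155 + 145) / 2 = 150
```

The bias problem Sharp describes enters through `data`: if the examples are skewed, the learned boundary is skewed with them, even though no programmer ever wrote a biased rule.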

 

 

Technology vs. Humanity – The coming clash between man and machine — from futuristgerd.com by Gerd Leonhard

Excerpt (emphasis DSC):

In his latest book ‘Technology vs. Humanity’, futurist Gerd Leonhard once again breaks new ground by bringing together mankind’s urge to upgrade and automate everything (including human biology itself) with our timeless quest for freedom and happiness.

Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth.

Digital transformation has migrated from the mainframe to the desktop to the laptop to the smartphone, wearables and brain-computer interfaces. Before it moves to the implant and the ingestible insert, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange.

 

 

Ethics: Taming our technologies
The Ethics of Invention: Technology and the Human Future — from nature.com by Sheila Jasanoff

Excerpt:

Technological innovation in fields from genetic engineering to cyberwarfare is accelerating at a breakneck pace, but ethical deliberation over its implications has lagged behind. Thus argues Sheila Jasanoff — who works at the nexus of science, law and policy — in The Ethics of Invention, her fresh investigation. Not only are our deliberative institutions inadequate to the task of oversight, she contends, but we fail to recognize the full ethical dimensions of technology policy. She prescribes a fundamental reboot.

Ethics in innovation has been given short shrift, Jasanoff says, owing in part to technological determinism, a semi-conscious belief that innovation is intrinsically good and that the frontiers of technology should be pushed as far as possible. This view has been bolstered by the fact that many technological advances have yielded financial profit in the short term, even if, like the ozone-depleting chlorofluorocarbons once used as refrigerants, they have proved problematic or ruinous in the longer term.

 

 

 

Robotics is coming faster than you think — from forbes.com by Kevin O’Marah

Excerpt:

This week, The Wall Street Journal featured a well-researched article on China’s push to shift its factory culture away from labor and toward robots. Reasons include a rise in labor costs, the flattening and impending decrease in worker population and falling costs of advanced robotics technology.

Left unsaid was whether this is part of a wider acceleration in the digital takeover of work worldwide. It is.

 

 

Adidas will open an automated, robot-staffed factory next year — from businessinsider.com

 

 

 

Beyond Siri, the next-generation AI assistants are smarter specialists — from fastcompany.com by Jared Newman
SRI wants to produce chatbots with deep knowledge of specific topics like banking and auto repair.

 

 

 

Machine learning
Of prediction and policy — from economist.com
Governments have much to gain from applying algorithms to public policy, but controversies loom

Excerpt:

For frazzled teachers struggling to decide what to watch on an evening off (DSC insert: a rare event indeed), help is at hand. An online streaming service’s software predicts what they might enjoy, based on the past choices of similar people. When those same teachers try to work out which children are most at risk of dropping out of school, they get no such aid. But, as Sendhil Mullainathan of Harvard University notes, these types of problem are alike. They require predictions based, implicitly or explicitly, on lots of data. Many areas of policy, he suggests, could do with a dose of machine learning.

Machine-learning systems excel at prediction. A common approach is to train a system by showing it a vast quantity of data on, say, students and their achievements. The software chews through the examples and learns which characteristics are most helpful in predicting whether a student will drop out. Once trained, it can study a different group and accurately pick those at risk. By helping to allocate scarce public funds more accurately, machine learning could save governments significant sums. According to Stephen Goldsmith, a professor at Harvard and a former mayor of Indianapolis, it could also transform almost every sector of public policy.

But the case for code is not always clear-cut. Many American judges are given “risk assessments”, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites. (Northpointe, the algorithm provider, disputes the finding.)
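The train-then-predict workflow the article describes can be sketched with a toy classifier. The feature names, the student data, and the nearest-neighbour rule below are all invented for illustration; real systems use far richer data and models, and carry exactly the bias risks the ProPublica investigation highlights:

```python
# Toy sketch: learn from past students, then flag a new student as
# at risk of dropping out. Features: (absences, GPA). Data is invented.

def knn_predict(train, point, k=3):
    """train: list of ((absences, gpa), dropped_out) pairs.
    Majority vote among the k nearest past students."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = sorted(train, key=lambda ex: dist(ex[0], point))[:k]
    votes = sum(1 for _, dropped in nearest if dropped)
    return votes > k // 2  # True = flagged as at risk

past_students = [((30, 1.8), True), ((25, 2.0), True), ((28, 1.5), True),
                 ((3, 3.6), False), ((5, 3.2), False), ((2, 3.9), False)]

print(knn_predict(past_students, (27, 1.9)))  # flagged as at risk
print(knn_predict(past_students, (4, 3.5)))   # not flagged
```

The policy controversy lives in `past_students`: whatever patterns (or historical inequities) the training data contains, the prediction reproduces.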

 

 

‘Software is eating the world’: How robots, drones and artificial intelligence will change everything — from business.financialpost.com

 

 

Thermostats can now get infected with ransomware, because 2016 — from thenextweb.com by Matthew Hughes

 

 

Who will own the robots? — from technologyreview.com by David Rotman
We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?

 

 

Police Drones Multiply Across the Globe — from dronelife.com by Jason Reagan

 

 

 

LinkedIn lawsuit may signal a losing battle against ‘botnets’, say experts — from bizjournals.com by Annie Gaus

 

 

 

China’s Factories Count on Robots as Workforce Shrinks — from wsj.com by Robbie Whelan and Esther Fung
Rising wages, cultural changes push automation drive; demand for 150,000 robots projected for 2018

 

 

 


 

 

Researchers Are Growing Living Biohybrid Robots That Move Like Animals — from slate.com by Victoria Webster

 

 

 

Addendums on 9/14/16:

 

 
© 2016 Learning Ecosystems