1 The heavens declare the glory of God;
the skies proclaim the work of his hands.
2 Day after day they pour forth speech;
night after night they reveal knowledge.
3 They have no speech, they use no words;
no sound is heard from them.
4 Yet their voice goes out into all the earth,
their words to the ends of the world.
In the heavens God has pitched a tent for the sun.
Given the sheer pace and acceleration of technological advances in recent years, business leaders can be forgiven for feeling dazed and perhaps a little frustrated. When we talked to CEOs as part of our annual Global CEO Survey, 61% of them told us they were concerned about the speed of technological change in their industries. Sure, more and more C-suite executives are genuinely tech-savvy – increasingly effective champions for their companies’ IT vision – and more and more of them know that digital disruption can be friend as well as enemy. But it’s fair to say that most struggle to find the time and energy necessary to keep up with the technologies driving transformation across every industry and in every part of the world.
Not one catalyst, but several
History is littered with companies that have waited out the Next New Thing in the belief that it’s a technology trend that won’t amount to much, or that won’t affect their industries for decades. Yet disruption happens. It’s safe to say that the history of humankind is a history of disruption – a stream of innovations that have tipped the balance in favour of the innovators. In that sense, technological breakthroughs are the original megatrend. What’s unique in the 21st century, though, is the ubiquity of technology, together with its accessibility, reach, depth, and impact.
Business leaders worldwide acknowledge these changes, and have a clear sense of their significance. CEOs don’t single out any particular catalyst that leads them to that conclusion. But we maintain that technological advancements are appearing, rapidly and simultaneously, in fields as disparate as healthcare and industrial manufacturing, because of the following concurrent factors…
From DSC: For those of us working in K-20 as well as in the corporate training/L&D space, how are we doing in getting people trained and ready to deal with these developments?
From DSC: The pace of technological development is moving extremely fast; the ethical, legal, and moral questions are trailing behind it (as is normally the case). But this exponential pace continues to bring some questions, concerns, and thoughts to my mind. For example:
What kind of future do we want?
Just because we can, should we?
Who is going to be able to weigh in on the future direction of some of these developments?
If we follow the trajectories of some of these pathways, where will these trajectories take us? For example, if many people are out of work, how are they going to purchase the products and services that the robots are building?
These and other questions arise when you look at the articles below.
This is the 8th part of a series of postings regarding this matter.
The other postings are in the Ethics section.
What would your ideal robot be like? One that can change nappies and tell bedtime stories to your child? Perhaps you’d prefer a butler that can polish silver and mix the perfect cocktail? Or maybe you’d prefer a companion that just happened to be a robot? Certainly, some see robots as a hypothetical future replacement for human carers. But a question roboticists are asking is: how human should these future robot companions be?
A companion robot is one that is capable of providing useful assistance in a socially acceptable manner. This means that a robot companion’s first goal is to assist humans. Robot companions are mainly developed to help people with special needs such as older people, autistic children or the disabled. They usually aim to help in a specific environment: a house, a care home or a hospital.
The next president will have a range of issues on their plate, from how to deal with growing tensions with China and Russia to an ongoing war against ISIS. But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka “killer robots.” The new president will have no choice: it is not just that the technology is rapidly advancing, but that there is a ticking time bomb buried in US policy on the issue.
It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.
Algorithms for task management
Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of “temporal Tetris” with the parcels and packs them to optimize for space and for the planned delivery route–packages that are delivered first are towards the front, packages for the end of the route are placed at the back.
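The loading logic described above can be sketched in a few lines. This is a toy illustration of the route-ordered idea, not UPS's actual system: packages for late stops are loaded first (so they end up deepest in the truck), and packages for early stops are loaded last (so they sit at the front). The function and data below are invented for the example.

```python
# Toy sketch of route-ordered truck loading (not UPS's real algorithm):
# load in reverse delivery order so first-delivered packages end up at
# the front of the truck and last-delivered packages at the back.

def load_order(packages, route):
    """Return package ids in the order they should be loaded.

    packages: dict mapping package id -> delivery stop
    route: list of stops in planned delivery order
    """
    stop_rank = {stop: i for i, stop in enumerate(route)}
    # Packages loaded first end up deepest, so sort late-route stops first.
    return sorted(packages, key=lambda p: stop_rank[packages[p]], reverse=True)

route = ["Elm St", "Oak Ave", "Main St"]
packages = {"A": "Main St", "B": "Elm St", "C": "Oak Ave"}
print(load_order(packages, route))  # ['A', 'C', 'B']
```

Package A (last stop, Main St) is loaded first and ends up at the back; package B (first stop, Elm St) is loaded last and sits at the front, ready for the first delivery.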
The Enterprisers Project (TEP): Machines are genderless, have no race, and are in and of themselves free of bias. How does bias creep in?
Sharp: To understand how bias creeps in you first need to understand the difference between programming in the traditional sense and machine learning. With programming in the traditional sense, a programmer analyses a problem and comes up with an algorithm to solve it (basically an explicit sequence of rules and steps). The algorithm is then coded up, and the computer executes the programmer’s defined rules accordingly.
With machine learning, it’s a bit different. Programmers don’t solve a problem directly by analyzing it and coming up with their rules. Instead, they just give the computer access to an extensive real-world dataset related to the problem they want to solve. The computer then figures out how best to solve the problem by itself.
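Sharp's distinction can be made concrete with a deliberately tiny contrast, a hypothetical spam filter written both ways. The rule-based version encodes the programmer's explicit rule; the "learned" version derives its rule from labeled examples. The data and functions here are invented for illustration, not drawn from the interview.

```python
# Traditional programming: the programmer writes the rule explicitly.
def is_spam_rules(message):
    return "free money" in message.lower()

# Machine learning (in miniature): the rule is derived from labeled data.
def train_spam_words(examples):
    """Collect words that appear in spam examples but never in legitimate ones."""
    spam_words, ham_words = set(), set()
    for text, label in examples:
        (spam_words if label == "spam" else ham_words).update(text.lower().split())
    return spam_words - ham_words

examples = [
    ("win free money now", "spam"),
    ("free money inside", "spam"),
    ("lunch money for tomorrow", "ham"),
]
learned = train_spam_words(examples)

def is_spam_learned(message, learned=learned):
    return any(word in learned for word in message.lower().split())

print(is_spam_learned("claim your free prize now"))  # True
```

The miniature also shows exactly where bias creeps in: the learned rule is only as good as the examples. If the training set over-represents one kind of message, the derived rule inherits that skew, with no programmer ever having written a biased line of code.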
In his latest book ‘Technology vs. Humanity’, futurist Gerd Leonhard once again breaks new ground by bringing together mankind’s urge to upgrade and automate everything (including human biology itself) with our timeless quest for freedom and happiness.
Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth.
Digital transformation has migrated from the mainframe to the desktop to the laptop to the smartphone, wearables and brain-computer interfaces. Before it moves to the implant and the ingestible insert, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange.
Technological innovation in fields from genetic engineering to cyberwarfare is accelerating at a breakneck pace, but ethical deliberation over its implications has lagged behind. Thus argues Sheila Jasanoff — who works at the nexus of science, law and policy — in The Ethics of Invention, her fresh investigation. Not only are our deliberative institutions inadequate to the task of oversight, she contends, but we fail to recognize the full ethical dimensions of technology policy. She prescribes a fundamental reboot.
Ethics in innovation has been given short shrift, Jasanoff says, owing in part to technological determinism, a semi-conscious belief that innovation is intrinsically good and that the frontiers of technology should be pushed as far as possible. This view has been bolstered by the fact that many technological advances have yielded financial profit in the short term, even if, like the ozone-depleting chlorofluorocarbons once used as refrigerants, they have proved problematic or ruinous in the longer term.
Machine learning: Of prediction and policy — from economist.com
Governments have much to gain from applying algorithms to public policy, but controversies loom
Excerpt:
FOR frazzled teachers struggling to decide what to watch on an evening off (DC insert: a rare event indeed), help is at hand. An online streaming service’s software predicts what they might enjoy, based on the past choices of similar people. When those same teachers try to work out which children are most at risk of dropping out of school, they get no such aid. But, as Sendhil Mullainathan of Harvard University notes, these types of problem are alike. They require predictions based, implicitly or explicitly, on lots of data. Many areas of policy, he suggests, could do with a dose of machine learning.
Machine-learning systems excel at prediction. A common approach is to train a system by showing it a vast quantity of data on, say, students and their achievements. The software chews through the examples and learns which characteristics are most helpful in predicting whether a student will drop out. Once trained, it can study a different group and accurately pick those at risk. By helping to allocate scarce public funds more accurately, machine learning could save governments significant sums. According to Stephen Goldsmith, a professor at Harvard and a former mayor of Indianapolis, it could also transform almost every sector of public policy.
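The train-then-predict pattern described above can be sketched at toy scale. This is a minimal illustration with invented data and a single feature (absences); real dropout-prediction systems use many features and far more robust models. The idea is the same: the software chews through labeled examples, learns a cutoff, then applies it to new students.

```python
# Toy sketch of training on historical records, then flagging new students.
# All data is invented; a single-feature threshold stands in for a real model.

def best_threshold(records):
    """Find the absence count that best separates dropouts from graduates."""
    thresholds = sorted({r["absences"] for r in records})
    def accuracy(t):
        return sum((r["absences"] >= t) == r["dropped_out"] for r in records)
    return max(thresholds, key=accuracy)

train = [
    {"absences": 2,  "dropped_out": False},
    {"absences": 5,  "dropped_out": False},
    {"absences": 18, "dropped_out": True},
    {"absences": 25, "dropped_out": True},
]
t = best_threshold(train)  # cutoff learned from the examples, not hand-coded
at_risk = [s for s in [{"absences": 3}, {"absences": 21}] if s["absences"] >= t]
print(t, at_risk)  # 18 [{'absences': 21}]
```

Note that the cutoff of 18 was never written by a programmer; it fell out of the training data, which is both the power of the approach and, as the next section shows, its weakness.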
…
But the case for code is not always clear-cut. Many American judges are given “risk assessments”, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites. (Northpointe, the algorithm provider, disputes the finding.)
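The kind of audit ProPublica ran can be sketched in miniature: compare false positive rates (people labelled high-risk who did not in fact reoffend) across groups. The records below are invented for illustration; the point is that the disparity is measurable even when the algorithm itself is a black box.

```python
# Minimal sketch of a fairness audit: per-group false positive rates.
# Records are invented; only predictions and outcomes are needed.

def false_positive_rate(records):
    """Share of non-reoffenders who were nonetheless flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
    return len(flagged) / len(non_reoffenders)

def audit_by_group(records):
    groups = {r["group"] for r in records}
    return {g: false_positive_rate([r for r in records if r["group"] == g])
            for g in groups}

records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]
print(audit_by_group(records))
```

In this invented sample, group B's false positive rate (2/3) is substantially higher than group A's (1/2), the same shape of disparity ProPublica reported at far larger scale.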
Who will own the robots? — from technologyreview.com by David Rotman
We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?
Few disciplines in today’s world play such a significant role in how society operates and what we can do to protect our future. Few fields of study can play such a profound role in protecting people’s lives on a daily basis, whether you realize it or not. And few can bring together so many disparate ideas, from sciences to social sciences to humanities to the arts, like the study of the Earth can.
…
Here are some of the ways that taking a course in geology will impact your life for the rest of it.
… Climate: So far I’ve talked about all the fun parts of geology. However, if you’re looking for work that is important to you, your family and society across the planet, geology is the place to be. First off, geology is ground zero for understanding climate change across the history of Earth. We’ve been studying the variation in the planet’s ecosystems for two centuries now (heck, paleontology helped start the discipline) and can look back billions of years to see how the climate has varied. This gives us the evidence to show how much our current climate is likely in a state of distress. Geology is also how we can understand what the impact of climate change will be on our planet, both in the short and long term.
The Perfect Storm of Market Inhibitors
There are five major convergent inhibitors driving the global revenues for self-paced eLearning downward:
Intense commoditization
The eLearning product lifecycle is in the final stage and suppliers are diversifying their product catalogs beyond eLearning
The collapse of the global LMS market
Profound degree of product substitution
The leapfrog effect in mobile-only countries
None of these inhibitors are reversible. Combined, they are driving the global eLearning market into steep declines in revenue. Any one of these inhibitors would dampen the demand for eLearning, but the presence of all five creates very unfavorable market conditions for suppliers.
Distributed ledger technology (blockchain) has the potential to drive simplicity and efficiency by establishing new financial services infrastructure and processes
Distributed ledger technology will form the foundation of next generation financial services infrastructure in conjunction with other existing and emerging technologies
Similar to technological advances in the past, new financial services infrastructure will transform and question traditional orthodoxies in today’s business models
The most impactful distributed ledger technology applications will require deep collaboration between incumbents, innovators, and regulators, adding complexity and delaying implementation
The report is centered on use cases, considering how distributed ledger technology could benefit each scenario. How will blockchain transform the future of financial services?
Ernst & Young, a leading consulting firm, one of the “Big Four” audit firms and the third-largest professional services firm in the world, has made some predictions about the future of blockchain technology and its significance in various industry sectors in a recent report.
The attention of multiple financial companies has been focused on the blockchain lately. This unique technology is well suited to the increasing requirements for secure bookkeeping and automation in various industries.
The EY report predicts that blockchain will reach critical mass in financial services in 3-5 years, with other industries following quickly. “One reason the blockchain reaction is racing toward critical mass faster than previous disruptive technologies is that it is arriving in the midst of the digital transformation already sweeping through most sectors of the global economy. Consequently, despite the obstacles still to be overcome, businesspeople and governments are preconditioned to recognize blockchain’s potential. Tech companies have already established much of the digital infrastructure required to realize blockchain business visions.”
From DSC: Applying this technology towards the world of learning…
I wonder how blockchain might impact credentialing for lifelong learning, and will it be integrated into services available via tvOS-based applications? This type of cloud-based offering/service could likely be a piece of our future learning ecosystems. Innovative, forward-thinking institutions should put this on their radar now, and start working on such efforts.
1 Sing to the Lord a new song;
sing to the Lord, all the earth.
2 Sing to the Lord, praise his name;
proclaim his salvation day after day.
3 Declare his glory among the nations,
his marvelous deeds among all peoples.
4 For great is the Lord and most worthy of praise;
he is to be feared above all gods.
5 For all the gods of the nations are idols,
but the Lord made the heavens.
6 Splendor and majesty are before him;
strength and glory are in his sanctuary.
7 Ascribe to the Lord, all you families of nations,
ascribe to the Lord glory and strength.
8 Ascribe to the Lord the glory due his name;
bring an offering and come into his courts.
9 Worship the Lord in the splendor of his holiness;
tremble before him, all the earth.
10 Say among the nations, “The Lord reigns.”
The world is firmly established, it cannot be moved;
he will judge the peoples with equity.
11 Let the heavens rejoice, let the earth be glad;
let the sea resound, and all that is in it.
12 Let the fields be jubilant, and everything in them;
let all the trees of the forest sing for joy.
13 Let all creation rejoice before the Lord, for he comes,
he comes to judge the earth.
He will judge the world in righteousness
and the peoples in his faithfulness.
From DSC:
The articles below demonstrate why the need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!
U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities — from pewinternet.org by Cary Funk, Brian Kennedy and Elizabeth Podrebarac Sciupac
Americans are more worried than enthusiastic about using gene editing, brain chip implants and synthetic blood to change human capabilities
Human Enhancement — from pewinternet.org by David Masci
The Scientific and Ethical Dimensions of Striving for Perfection
The report is far more sedate. It is a draft report, not a bill, with a mixed bag of recommendations to the Commission on Civil Law Rules on Robotics in the European Parliament. It will be years before anything is decided.
Nevertheless, it is interesting reading when considering how society should adapt to increasingly capable autonomous machines: what should the legal and moral status of robots be? How do we distribute responsibility?
A remarkable opening
The report begins its general principles with an eyebrow-raising paragraph:
whereas, until such time, if ever, that robots become or are made self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws cannot be converted into machine code;
It is remarkable because first it alludes to self-aware robots, presumably moral agents – a pretty extreme and currently distant possibility – then brings up Isaac Asimov’s famous but fictional laws of robotics and makes a simultaneously insightful and wrong-headed claim.
That murmur is self-doubt, and its presence helps keep us alive. But robots don’t have this instinct—just look at the DARPA Robotics Challenge. But for robots and drones to exist in the real world, they need to realize their limits. We can’t have a robot flailing around in the darkness, or trying to bust through walls. In a new paper, researchers at Carnegie Mellon are working on giving robots introspection, or a sense of self-doubt. By predicting the likelihood of their own failure through artificial intelligence, robots could become a lot more thoughtful, and safer as well.
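The core idea, act only when predicted confidence clears a safety bar, can be sketched very simply. This is a hedged illustration of the concept, not the CMU system itself; the function, threshold value, and action names are all invented for the example.

```python
# Toy sketch of machine "self-doubt": the robot estimates its probability
# of success before acting and declines to act below a safety threshold.
# (Illustration only; not the CMU researchers' actual method.)

def decide(action, predicted_success, threshold=0.8):
    """Execute only when the model's predicted success probability is high enough."""
    if predicted_success >= threshold:
        return f"execute {action}"
    return f"abort {action}: confidence {predicted_success:.2f} below {threshold}"

print(decide("open door", 0.93))        # execute open door
print(decide("cross rubble field", 0.41))
```

The hard research problem, of course, is not the threshold check but producing a trustworthy `predicted_success` estimate in the first place, which is what the CMU work is after.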
If you met this lab-created critter over your beach vacation, you’d swear you saw a baby ray. In fact, the tiny, flexible swimmer is the product of a team of diverse scientists. They have built the most successful artificial animal yet. This disruptive technology opens the door much wider for lifelike robots and artificial intelligence.
From DSC: I don’t think I’d use the term disruptive here — though that may turn out to be the case. The word disruptive doesn’t come close to carrying/relaying the weight and seriousness of this kind of activity; nor does it point out where this kind of thing could lead.
Todd Richmond, a director at the Institute for Creative Technologies at the University of Southern California, says a big debate is brewing over who controls digital assets associated with real world property.
“This is the problem with technology adoption — we don’t have time to slowly dip our toe in the water,” he says. “Tenants have had no say, no input, and now they’re part of it.”
From DSC: I greatly appreciate what Pokémon Go has been able to achieve and although I haven’t played it, I think it’s great (great for AR, great for people’s health, great for the future of play, etc.)! So there are many positives to it. But the highlighted portion above is not something we want to have to say occurred with artificial intelligence, cognitive computing, some types of genetic engineering, corporations tracking/using your personal medical information or data, the development of biased algorithms, etc.
The links I have included in this column have been carefully chosen as recommended reading to support my firm conviction that machine learning and artificial intelligence are the keys to just about every aspect of life in the very near future: every sector, every business.
Now there is an A.I. that can do your job. Customers can direct exactly how their new website should look. Fancy something more colorful? You got it. Less quirky and more professional? Done. This A.I. is still in a limited beta but it is coming. It’s called The Grid and it came out of nowhere. It makes you feel like you are interacting with a human counterpart. And it works.
Artificial intelligence has arrived. Time to sharpen up those resumes.
It seems like, in general, technology always races ahead of the moral implications of using it. This seems to be true of everything from atomic power to sequencing genomes. Scientists often create something because they can, because there is a perceived need for it, or even by accident as a result of research. Only then does the public catch up and start to form an opinion on the issue.
…
Which brings us to the science of augmenting humans with technology, a process that has so far escaped the public scrutiny and opposition found with other radical sciences. Scientists are not taking any chances, with several yearly conferences already in place as a forum for scientists, futurists and others to discuss the process of human augmentation and the moral implications of the new science.
That said, it seems like those who would normally oppose something like this have remained largely silent.
Google Created Its Own Laws of Robotics — from fastcodesign.com by John Brownlee
Building robots that don’t harm humans is an incredibly complex challenge. Here are the rules guiding design at Google.
Oh, that my ways were steadfast
in obeying your decrees!
Then I would not be put to shame
when I consider all your commands.
I will praise you with an upright heart
as I learn your righteous laws.
I will obey your decrees;
do not utterly forsake me.
How can a young person stay on the path of purity?
By living according to your word.
I seek you with all my heart;
do not let me stray from your commands.
I have hidden your word in my heart
that I might not sin against you.
Just as the world’s precious artworks and monuments need a touch-up to look their best, the home we’ve built to host the world’s cultural treasures online needs a lick of paint every now and then. We’re ready to pull off the dust sheets and introduce the new Google Arts & Culture website and app, by the Google Cultural Institute. The app lets you explore anything from cats in art since 200 BCE to the color red in Abstract Expressionism, and everything in between. Our new tools will help you discover works and artifacts, allowing you to immerse yourself in cultural experiences across art, history and wonders of the world—from more than a thousand museums across 70 countries…
From DSC:
I read the article mentioned below. It made me wonder how 3 of the 4 main highlights that Fred mentioned (that are coming to Siri with tvOS 10) might impact education/training/learning-related applications and offerings made possible via tvOS & Apple TV:
Live broadcasts
Topic-based searches
The ability to search YouTube via Siri
The article prompted me to wonder:
Will educators and trainers be able to offer live lectures and training (globally) that can be recorded and later searched via Siri?
What if second screen devices could help learners collaborate and participate in active learning while watching what’s being presented on the main display/”TV?”
What if learning taken this way could be recorded on one’s web-based profile, a profile that is based upon blockchain-based technologies and maintained via appropriate/proven organizations of learning? (A profile that’s optionally made available to services from Microsoft/LinkedIn.com/Lynda.com and/or to a service based upon IBM’s Watson, and/or to some other online-based marketplace/exchange for matching open jobs to potential employees.)
Or what if you could earn a badge or prove a competency via this manner?
Hmmm…things could get very interesting…and very powerful.
More choice. More control. Over one’s entire lifetime.
The forthcoming update to Apple TV continues to bring fresh surprises for owners of Apple’s set top box. Many improvements are coming to tvOS 10, including single-sign-on support and an upgrade to Siri’s capabilities. Siri has already opened new doors thanks to the bundled Siri Remote, which simplifies many functions on the Apple TV interface. Four main highlights are coming to Siri with tvOS 10, which is expected to launch this fall.
CBS today announced the launch of an all-new Apple TV app that will center around the network’s always-on, 24-hour “CBSN” streaming network and has been designed exclusively for tvOS. In addition to the live stream of CBSN, the app curates news stories and video playlists for each user based on previously watched videos.
The new app will also take advantage of the 4th generation Apple TV’s deep Siri integration, allowing users to tell Apple’s personal assistant that they want to “Watch CBS News” to immediately start a full-screen broadcast of CBSN. While the stream is playing, users can interact with other parts of the app to browse related videos, bookmark some to watch later, and begin subscribing to specific playlists and topics.
Enter the blockchain, the first native digital medium for peer to peer value exchange. Its protocol establishes the rules — in the form of globally distributed computations and heavy duty encryption — that ensure the integrity of the data traded among billions of devices without going through a trusted third party. Trust is hard-coded into the platform. That’s why we call it the Trust Protocol. It acts as a ledger of accounts, a database, a notary, a sentry, and clearing house, all by consensus.
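The "ledger whose integrity is enforced by computation" idea can be sketched in miniature: each block stores a hash of its contents plus the previous block's hash, so altering any old record breaks every hash that follows. This is only the hash-chain skeleton; real blockchains add the distributed consensus, signatures, and mining that the excerpt alludes to.

```python
# Minimal hash-chained ledger sketch (the skeleton of a blockchain;
# no consensus, signatures, or mining, just tamper-evidence).

import hashlib
import json

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))                    # True
chain[0]["record"]["amount"] = 500      # tamper with an old record...
print(verify(chain))                    # False: the chain detects it
```

This is what "trust is hard-coded into the platform" means mechanically: no third party vouches for the history; the hashes do.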
My own personal fascination with blockchain technology lies in its potential for makerspaces and its role in the Maker Movement. The blockchain by nature is decentralized (peer-to-peer), distributed and open-source…the blueprint for makerspaces. Makerspaces both in and out of schools are about decentralizing and widening access. This includes not only access to the spaces themselves, but also to equipment and resources. I have written before about the potential of Open Educational Resources (OER) in a makerspace. Blockchain technology could further open up access and use of resources, making our educational system that much more open and flexible.
Within schools, giving students credit for the skills they gain in a makerspace is always a challenge. The blockchain offers a real possibility for managing and processing these types of credentials. Outside of school, no standard exists for certification or credentialing in a makerspace. You might be certified to use a tool in one makerspace, but walk into another and not be able to use that same tool there. Blockchain technology can help streamline and create a new standard for these types of certifications.
Blockchain is undoubtedly one of the hottest technologies around at the moment. While it has gained most of its notoriety as the technology underpinning Bitcoin, it also has a number of other possible applications.
For instance, John Holden and Greg Irving from the University of Cambridge use blockchain technology as a means of making clinical trial documents immutable. The pair wanted to tackle the thorny issue of ensuring that research data is untampered with, so that external people can have confidence in the results from the trials.
They highlighted the use of blockchain in cardiovascular diabetes and ethanol research via a report published on the F1000 website.
Some remarkable things are happening in the bitcoin/blockchain space. It’s hard to believe that just seven years ago, this technology was first revealed on an email list followed by just a handful of cypherpunks, who should be remembered by history as brilliant and dedicated innovators of a revolutionary technology. Today what they did has grabbed the attention of the world’s largest financial institutions and central banks.
And why? It’s clear this innovation is not going away. It’s simply a better method for exchanging value globally. It is a technology that will influence—if not define—the future of payments and money. It is already disrupting the status quo in more ways than even the best experts can track, especially in emerging markets.
Four of us blockchainers actually visited Janet Yellen [Chair of the Board of Governors of the Federal Reserve System] in her office.
One of the loudest evangelists is IBM, which has been touting the potential of blockchain—a technology that can allow companies to create quick, tamper-proof ledgers—to transform everything from finance to trading to insurance.
On Tuesday, IBM announced the formal launch of a so-called “Bluemix Garage” in New York, where developers can experiment with financial-tech software and explore new forms of blockchain innovation.
It’s a fine idea and one that could serve IBM’s long-term strategic interests. Namely, if developers flock to IBM’s platform, the company will be well-positioned to grab a big share of the “blockchain-as-a-service” market—a still nascent industry dedicated to helping firms navigate the world of ledgers, smart contracts, and all that other good stuff.
Excerpt:
Distributed Ledger Technology (DLT), or ‘blockchain’, has started to receive increasing media attention and investment from several sectors including governments, financial services and the creative industries. The potential application in relation to music is of particular interest, as it appears to offer solutions to problems artists have highlighted for decades – around transparency, the sharing of value and the relationships with intermediaries that sit between the artist and fan, the central and most important relationship in music.
If blockchain technology can help the commercial and contractual relationships in music keep pace with technology, and improve the communication between artists and fans, then it could be truly revolutionary.
According to the report, there are four main areas where blockchain could transform the music industries:
A single, networked database for music copyright information, rather than the many, not-quite-complete databases maintained at present;
fast, frictionless royalty payments, whereas payments can currently take years;
transparency through the value chain, allowing musicians and their managers to see exactly how much money they are owed, as opposed to a culture of non-disclosure agreements and “black boxes”; and
access to alternative sources of capital, with smart contracts – contracts implemented via software – potentially transforming crowdfunding and leading to the establishment of ‘artist accelerators’ on the model of tech start-ups.
In her recent blog, KnowledgeWorks Senior Director of Strategic Foresight Katherine Prince lists six challenges that K-12 education faces. I’m not going to go all panacea on you, but let me illustrate how blockchain technology could address at least one of those issues and gesture to how it could play an important — if not central — part in addressing all six.
Which of today’s emerging technologies have a chance at solving a big problem and opening up new opportunities? Here are our picks. The 10 on this list all had an impressive milestone in the past year or are on the verge of one. These are technologies you need to know about right now.
We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.
–Edward Cornish
From DSC: This posting represents Part VI in a series of such postings (Part I, Part II, Part III, Part IV, Part V) that illustrate how quickly things are moving and ask:
How do we collectively start talking about the future that we want?
How do we go about creating our dreams, not our nightmares?
Most certainly, governments will be involved….but who else should be involved in these discussions? Shouldn’t each one of us participate in some way, shape, or form?
But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.
…
If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.
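Accountability of the kind called for above can start with very simple audits. One common check is to compare favourable-outcome rates across demographic groups (a "demographic parity" test). A minimal sketch follows; the data, group labels, and threshold for concern are all invented for illustration:

```python
# Minimal demographic-parity audit: compare the rate of favourable
# decisions across groups. The decisions and group labels below are
# invented for illustration only.

def positive_rate(outcomes):
    """Fraction of decisions in a group that were favourable (1s)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions_by_group):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = [positive_rate(o) for o in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = favourable decision (e.g. ad shown, loan approved), 0 = unfavourable
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% favourable
}

gap = parity_gap(decisions)
print(f"parity gap: {gap:.3f}")  # a large gap flags the system for review
```

A gap this size does not prove discrimination on its own, but it is exactly the kind of measurable signal that due-process commitments from governments and the tech community could require systems to report.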
Facebook has just revealed DeepText, a deep learning AI that will analyze everything you post or type and bring you closer to relevant content or Facebook services.
March of the machines — from economist.com
What history tells us about the future of artificial intelligence—and how society should respond
Excerpt:
Experts warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.
After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.
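The "given enough data, networks can be trained" idea in the excerpt can be seen at its smallest scale with a single artificial neuron learning a rule from labelled examples. The sketch below is not deep learning proper, just the train-on-data loop that underlies it; the task (logical AND), learning rate, and epoch count are arbitrary choices for illustration:

```python
# A single artificial neuron learning the logical AND function from
# labelled examples -- the smallest instance of the "train on data"
# loop that deep neural networks scale up. Toy illustration only.

def step(x):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each mislabelled example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
print("learned AND:", w, b)
```

A deep network replaces this single neuron with millions of them in layers, and the step rule with gradient descent, but the shape of the process, repeatedly adjusting weights to fit examples, is the same.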
As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.
CHICAGO — When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.
The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.
…
Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.
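Compas itself is proprietary and its internals are secret, so no one outside Northpointe can show its actual formula. Purely to make the debate concrete, here is a generic sketch of how a survey-based risk score could work: weighted answers summed into a score, then bucketed into a supervision level. Every question, weight, and threshold below is invented:

```python
# Generic illustration of a survey-based risk score: weighted answers
# are summed and the total is bucketed into a risk band. This is NOT
# the Compas algorithm (which is secret); all items, weights, and
# thresholds here are invented for illustration.

WEIGHTS = {                  # hypothetical survey items and weights
    "prior_offenses": 2.0,
    "age_under_25": 1.5,
    "unemployed": 1.0,
}

def risk_score(answers):
    """Weighted sum of survey answers (counts or 0/1 flags)."""
    return sum(WEIGHTS[item] * answers.get(item, 0) for item in WEIGHTS)

def risk_band(score):
    # Hypothetical cutoffs mapping a score to a supervision level.
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

defendant = {"prior_offenses": 2, "age_under_25": 1, "unemployed": 0}
score = risk_score(defendant)
print(score, risk_band(score))  # 5.5 high
```

Even in this toy version, the due-process issue is visible: the weights and cutoffs decide the outcome, and a defendant who cannot see them cannot contest them.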
Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.
The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.
But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.
Policy paper | Data Science Ethical Framework — from gov.uk
From: Cabinet Office, Government Digital Service and The Rt Hon Matt Hancock MP
First published: 19 May 2016
Part of: Government transparency and accountability
This framework is intended to give civil servants guidance on conducting data science projects, and the confidence to innovate with data.
Detail: Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking. We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with. The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog.
Excerpt (emphasis DSC):
We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.
…
The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws.
The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), which aims to help non-experts bridge what it calls the “data-science expertise gap” by allowing artificial assistants to help people with machine learning. DARPA calls it a “virtual data scientist” assistant.
This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.
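Stripped to its core, the "virtual data scientist" idea is a search over candidate models, each fitted on training data and scored on held-out data, with the best kept. DARPA's D3M code is not shown here; the candidate models, data, and selection rule below are invented to sketch that loop:

```python
# The core loop behind automated model selection: fit several candidate
# models, score each on held-out data, keep the best. The candidates
# and data below are invented for illustration.

def fit_mean(xs, ys):
    # Baseline: always predict the training mean.
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    # Least-squares line through (xs, ys).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    slope = num / den
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def mse(model, xs, ys):
    # Mean squared error of a fitted model on held-out data.
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(candidates, train, test):
    """Fit every candidate on the training set; return the name and model
    with the lowest error on the held-out test set."""
    fitted = {name: fit(*train) for name, fit in candidates.items()}
    return min(fitted.items(), key=lambda kv: mse(kv[1], *test))

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])   # roughly y = 2x
test = ([5, 6], [10.1, 11.8])
name, model = auto_select({"mean": fit_mean, "line": fit_line}, train, test)
print("selected:", name)  # the linear model wins on this data
```

Real systems search far larger model spaces and tune hyperparameters too, but automating exactly this fit-score-select loop is what would let non-experts get usable models without a scarce data scientist in the loop.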
A robot built by roboticist Alexander Reben of the University of California, Berkeley has the ability to decide, using AI, whether or not to inflict pain.
The design is deliberately simple, serving only one purpose: deciding whether or not to inflict pain. Reben built the robot, and published the work in a scientific journal, to spark debate over whether artificially intelligent robots could get out of hand if given the opportunity, in a scenario reminiscent of The Terminator.
We already know the National Security Agency is all up in our data, but the agency is reportedly looking into how it can gather even more foreign intelligence information from internet-connected devices ranging from thermostats to pacemakers. Speaking at a military technology conference in Washington D.C. on Friday, NSA deputy director Richard Ledgett said the agency is “looking at it sort of theoretically from a research point of view right now.” The Intercept reports Ledgett was quick to point out that there are easier ways to keep track of terrorists and spies than to tap into any medical devices they might have, but did confirm that it was an area of interest.
You may love being able to set your thermostat from your car miles before you reach your house, but be warned — the NSA probably loves it too. On Friday, the National Security Agency — you know, the federal organization known for wiretapping and listening in on U.S. citizens’ conversations — told an audience at Washington’s Newseum that it’s looking into using the Internet of Things and other connected devices to keep tabs on individuals.
Humans are willing to trust chatbots with some of their most sensitive information — from businessinsider.com by Sam Shead
Excerpt:
A study has found that people are inclined to trust chatbots with sensitive information and that they are open to receiving advice from these AI services. The “Humanity in the Machine” report —published by media agency Mindshare UK on Thursday — urges brands to engage with customers through chatbots, which can be defined as artificial intelligence programmes that conduct conversations with humans through chat interfaces.
Williamstown, Kentucky, 30 miles south of Cincinnati, is home to Ark Encounter, a theme park whose main attraction is a reproduction, literally of biblical proportions, of the very famous Noah’s ark.
At 510 feet long (or 300 cubits, strictly following the biblical description of the legendary ship), the ark is already the biggest timber-framed structure in the United States. Its total surface area is 120,000 square feet; that makes it just over twice as large as the White House, and longer than three space shuttles laid end to end — so, yes, you can bet Noah indeed had room for a pair of each animal.