From DSC:
This posting is meant to surface the need for debates and discussions, new policy decisions, and serious reflection on what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen because they can happen. (Just because something can be done doesn’t mean it should be done.)
Gerd Leonhard’s work is relevant here. In the resource immediately below, Gerd asserts:
I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.
I am also including some other relevant items here that attest to the accelerating pace at which we are now moving.
Redefining the relationship of man and machine: here is my narrated chapter from the book The Future of Business (video, audio, and PDF) — from futuristgerd.com by Gerd Leonhard
Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts
Excerpt (emphasis DSC):
A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.
As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.
In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.
“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”
First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness
Excerpt:
Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.
Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.
Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.
Excerpt:
Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review. Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.
These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.
Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org
Excerpt:
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.
The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.
Along these same lines, also see:
- Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
Excerpt:
With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”
Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.
Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org
Excerpt (emphasis DSC):
Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?
“In the future, moral philosophy will be a key industry sector,” says Russell.
Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.
Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley
Excerpt:
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.
An excerpt from:
THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…
HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio
Excerpt (emphasis DSC):
Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.
Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.
The department transcends traditional roles when data enters the picture.
Many ethical questions posed through technology easily come and go because they seem out of this world.
18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado
Excerpt:
Where will these technologies take us next? Well, to know that, we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.
…
“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games. That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”
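The quote above describes a system that learns "completely from scratch" purely from screen input and score feedback. As a rough illustration of the underlying idea — Q-learning, which DeepMind combined with a deep neural network over raw pixels — here is a toy tabular version on a trivial four-state "walk right to win" game. This is a deliberate simplification for illustration only, not DeepMind's method; every name and number in it is an assumption of this sketch.

```python
# Toy tabular Q-learning sketch (NOT DeepMind's actual system, which used a
# deep neural network over raw screen pixels). The agent starts knowing
# nothing and learns only from reward feedback.

import random
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9        # learning rate, discount factor
ACTIONS = ["left", "right"]
Q = defaultdict(float)         # (state, action) -> estimated value

def step(state, action):
    """Deterministic toy environment: reaching state 3 yields reward 1."""
    nxt = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == 3 else 0.0)

random.seed(0)
for _ in range(2000):                        # episodes
    s = 0
    for _ in range(10):                      # steps per episode
        a = random.choice(ACTIONS)           # explore randomly (off-policy)
        s2, r = step(s, a)
        # Q-learning update: nudge the estimate toward
        # reward + discounted value of the best next action
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print(Q[(0, "right")], Q[(0, "left")])  # "right" ends up valued higher
```

The point of the sketch is the learning loop: no rules about the game are coded in, yet the value estimates come to favor the action that leads to reward — the same principle, scaled up enormously, behind learning 30 video games from pixels.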
Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger
Excerpts:
As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives. Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low. Algorithms are defining the future of business and even our everyday lives.
…
Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming. The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”
Related items:
- Toyota, Tesla or Google: who’s spending the most on artificial intelligence in the car industry? — from techworld.com by Margi Murphy
Which businesses are investing in artificial intelligence? Read how artificial intelligence will change the car industry, and other sectors, as we know it.
- Toyota Invests $1 Billion In Artificial Intelligence In US — from wtvox.com / Associated Press
- Top 10 Emerging Technologies That Could Transform Our Future — from wtvox.com by E Aston
- What Technology Will Look Like In Five Years — from techcrunch.com by Diomedes Kastanis
- Welcome To Brain Science’s Next Frontier: Virtual Reality — from fastcompany.com by Tina Amirtha
Amy Robinson, executive director at the startup EyeWire, is making neuroscience into a playground for the hot tech du jour.
Addendums:
- ABI Research Shows Augmented Reality on the Rise with Total Market Worth to Reach $100 Billion by 2020 — from abiresearch.com
- ‘Self-Driving Cars Will Dominate the Roads by 2030,’ Says Internet of Things Visionary — from coinspeaker.com
- Robots are learning to say “no” to human orders — from quartz.com by Kit Eaton
Excerpt:
It may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But researchers in Massachusetts are trying something that many a science fiction movie has already anticipated: They’re teaching robots to say “no” to some instructions. For robots wielding potentially dangerous-to-humans tools on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But we’re building more-clever robots every day and we’re giving them the power to decide what to do all by themselves. This leads to a tricky issue: How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
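The excerpt describes the core programming question: how does a robot reason through an order and refuse it when it is dangerous or beyond its abilities? A minimal sketch of that decision logic might look like the following. This is not the Tufts lab's actual code; every class, field, and message here is a hypothetical illustration of checking an order against a few conditions before obeying.

```python
# Hypothetical sketch of a robot vetting an order before acting on it.
# All names and conditions are illustrative, not the Tufts HRI Lab's code.

from dataclasses import dataclass

@dataclass
class Order:
    action: str
    endangers_human: bool = False   # would carrying it out hurt a person?
    endangers_robot: bool = False   # would it damage the robot itself?
    within_capability: bool = True  # does the robot know how to do it?

def evaluate_order(order: Order) -> tuple[bool, str]:
    """Return (accept, reason): obey only if every condition passes."""
    if not order.within_capability:
        return False, "I don't know how to do that."
    if order.endangers_human:
        return False, "That could hurt someone, so I won't."
    if order.endangers_robot:
        return False, "That would put me in danger, so I won't."
    return True, f"OK, doing: {order.action}"

accepted, reason = evaluate_order(Order("walk forward", endangers_robot=True))
print(accepted, reason)
```

The interesting research problem, of course, is filling in those boolean flags: the robot must perceive and infer for itself whether an instruction is dangerous, which is far harder than the final accept/reject check sketched here.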
Addendum on 12/14/15:
- Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.