{"id":52368,"date":"2015-11-19T16:20:30","date_gmt":"2015-11-19T21:20:30","guid":{"rendered":"http:\/\/danielschristian.com\/learning-ecosystems\/?p=52368"},"modified":"2016-03-25T13:28:22","modified_gmt":"2016-03-25T17:28:22","slug":"the-need-for-ethics-morals-policies-serious-reflection-about-what-kind-of-future-we-want-has-never-been-greater","status":"publish","type":"post","link":"http:\/\/danielschristian.com\/learning-ecosystems\/2015\/11\/19\/the-need-for-ethics-morals-policies-serious-reflection-about-what-kind-of-future-we-want-has-never-been-greater\/","title":{"rendered":"The need for ethics, morals, policies, &#038; serious reflection about what kind of future we want has never been greater!"},"content":{"rendered":"<hr \/>\n<p style=\"text-align: left;\"><em><span style=\"color: #800000;\">From DSC:<\/span><\/em><br \/>\n<span style=\"color: #800000;\">This posting is meant to surface the need for debates\/discussions, new policy decisions, and for taking the time to seriously reflect upon <em><strong>what type of future we want.<\/strong><\/em>\u00a0 Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then to be actively creating that future &#8212; instead of just letting things happen because they <em>can<\/em> happen. (i.e., just because something <em>can<\/em> be done doesn&#8217;t mean it <em>should<\/em> be done.)<\/span><\/p>\n<p style=\"text-align: left;\"><span style=\"color: #800000;\"><a href=\"http:\/\/www.futuristgerd.com\" target=\"_blank\">Gerd Leonhard&#8217;s work<\/a> is relevant here.\u00a0 In the resource immediately below, Gerd asserts:<\/span><\/p>\n<blockquote><p><span style=\"color: #008000;\">I believe we urgently need to start debating and crafting <strong>a global Digital Ethics Treaty<\/strong>. 
This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.<\/span><\/p><\/blockquote>\n<p><span style=\"color: #800000;\">I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we&#8217;re moving now.<\/span><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p style=\"text-align: left;\"><strong><a href=\"http:\/\/www.futuristgerd.com\/2015\/10\/31\/redefining-the-relationship-of-man-and-machine-here-is-my-chapter-from-the-brilliant-the-future-of-business-book\/\" target=\"_blank\">Redefining the relationship of man and machine: here is my narrated chapter from the \u2018The Future of Business\u2019 book (video, audio and pdf)<\/a><\/strong> &#8212; from futuristgerd.com by Gerd Leonhard<\/p>\n<p style=\"text-align: right;\"><span style=\"color: #ffffff;\">.<\/span><\/p>\n<p style=\"text-align: left;\"><a href=\"http:\/\/www.futuristgerd.com\/2015\/10\/31\/redefining-the-relationship-of-man-and-machine-here-is-my-chapter-from-the-brilliant-the-future-of-business-book\/\"><img decoding=\"async\" class=\"aligncenter size-full wp-image-52283\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/DigitalEthics-GerdLeonhard-Oct2015.jpg\" alt=\"DigitalEthics-GerdLeonhard-Oct2015\" width=\"100%\" height=\"100%\" border=\"0\" srcset=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/DigitalEthics-GerdLeonhard-Oct2015.jpg 600w, http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/DigitalEthics-GerdLeonhard-Oct2015-150x85.jpg 150w\" sizes=\"(max-width: 600px) 100vw, 600px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.theguardian.com\/technology\/2015\/nov\/05\/robot-revolution-rise-machines-could-displace-third-of-uk-jobs\" target=\"_blank\"><strong>Robot revolution: rise of &#8216;thinking&#8217; 
machines could exacerbate inequality<\/strong><\/a> &#8212; from theguardian.com by Heather Stewart<br \/>\n<em>Global economy will be transformed over next 20 years at risk of growing inequality, say analysts<\/em><\/p>\n<p><em>Excerpt<span style=\"color: #800000;\"> (emphasis DSC):<\/span><\/em><\/p>\n<p style=\"padding-left: 30px;\">A \u201crobot revolution\u201d will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.<\/p>\n<p style=\"padding-left: 30px;\">As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to \u201cthink\u201d, performing analytical tasks once seen as requiring human judgment.<\/p>\n<p style=\"padding-left: 30px;\">In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.<\/p>\n<p style=\"padding-left: 30px;\"><strong><span style=\"color: #800000;\">\u201cWe are facing a paradigm shift which will change the way we live and work,\u201d the authors say. \u201cThe pace of disruptive technological innovation has gone from linear to parabolic in recent years. 
Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.\u201d<\/span><\/strong><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.theguardian.com\/technology\/2015\/nov\/05\/robot-revolution-rise-machines-could-displace-third-of-uk-jobs\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-52524\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/RobotRevolution-Nov2015.jpg\" alt=\"RobotRevolution-Nov2015\" width=\"475\" height=\"520\" srcset=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/RobotRevolution-Nov2015.jpg 475w, http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/RobotRevolution-Nov2015-137x150.jpg 137w\" sizes=\"auto, (max-width: 475px) 100vw, 475px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.telegraph.co.uk\/news\/science\/science-news\/11991905\/First-genetically-modified-humans-could-exist-within-two-years.html\" target=\"_blank\"><strong>First genetically modified humans could exist within two years<\/strong><\/a> &#8212; from telegraph.co.uk by Sarah Knapton<br \/>\n<em>Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness<\/em><\/p>\n<p><em>Excerpt:<\/em><\/p>\n<div class=\"firstPar\">\n<p style=\"padding-left: 30px;\">Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.<\/p>\n<\/div>\n<div class=\"secondPar\">\n<p style=\"padding-left: 30px;\">Editas Medicine, which is based in the US, said it plans to become the first lab in the world to \u2018genetically edit\u2019 the DNA of patients suffering from a genetic condition \u2013 in this case the blinding disorder \u2018leber congenital 
amaurosis\u2019.<\/p>\n<\/div>\n<p><img decoding=\"async\" class=\"aligncenter\" src=\"http:\/\/i.telegraph.co.uk\/multimedia\/archive\/02330\/dna_2330322b.jpg\" alt=\"\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong><a href=\"http:\/\/www.gartner.com\/smarterwithgartner\/gartner-predicts-our-digital-future\/\" target=\"_blank\">Gartner predicts our digital future<\/a> <\/strong>&#8212; from gartner.com by Heather Levy<br \/>\n<em>Gartner\u2019s Top 10 Predictions herald what it means to be human in a digital world.<\/em><\/p>\n<p><em>Excerpt:<\/em><\/p>\n<p style=\"padding-left: 30px;\">Here\u2019s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a \u201crobo-boss\u201d based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review. \u00a0Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.<\/p>\n<p style=\"padding-left: 30px;\">These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. 
In his session at Gartner Symposium\/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner\u2019s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.gartner.com\/smarterwithgartner\/gartner-predicts-our-digital-future\/\" target=\"_blank\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-52502\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/GartnerPredicts-Oct2015.jpg\" alt=\"GartnerPredicts-Oct2015\" width=\"835\" height=\"391\" srcset=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/GartnerPredicts-Oct2015.jpg 835w, http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/GartnerPredicts-Oct2015-150x70.jpg 150w\" sizes=\"auto, (max-width: 835px) 100vw, 835px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong><a href=\"http:\/\/phys.org\/news\/2015-11-univ-washington-faculty-legal-social.html\" target=\"_blank\">Univ. of Washington faculty study legal, social complexities of augmented reality<\/a><\/strong> &#8212; from phys.org<\/p>\n<p><em>Excerpt:<\/em><\/p>\n<div>\n<p style=\"padding-left: 30px;\">But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction\u2014as well as potential discrimination\u2014are bound to follow.<\/p>\n<p style=\"padding-left: 30px;\">The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science &amp; Engineering Department and other campus units to think through issues of technology policy. 
&#8220;Augmented Reality: A Technology and Policy Primer&#8221; is the lab&#8217;s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.<\/p>\n<\/div>\n<p><span style=\"color: #800000;\"><em>Along these same lines, also see:<\/em><\/span><\/p>\n<ul>\n<li><a href=\"http:\/\/www.rdmag.com\/news\/2015\/11\/augmented-reality-figuring-out-where-law-fits\" target=\"_blank\"><strong>Augmented Reality: Figuring Out Where the Law Fits<\/strong> <\/a>&#8212; from rdmag.com by Greg Watry<br \/>\n<em>Excerpt:<\/em><br \/>\nWith AR comes potential issues the authors divide into two categories. \u201cThe first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,\u201d the researchers write. The second issue is display, which \u201craises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.\u201d<br \/>\nCurrent privacy law in the U.S. allows video and audio recording in areas that \u201cdo not attract an objectively reasonable expectation of privacy,\u201d says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. 
However, as AR increasingly becomes more mobile, \u201cit has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,\u201d says Newell.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong><a href=\"http:\/\/ww2.kqed.org\/news\/2015\/10\/27\/stuart-russell-on-a-i-and-how-moral-philosophy-will-be-big-business\" target=\"_blank\">Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech<\/a><\/strong> &#8212; from kqed.org by <span class=\"byline\"><span class=\"author\">Queena Kim<\/span><\/span><\/p>\n<p><em>Excerpt<span style=\"color: #800000;\"> (emphasis DSC):<\/span><\/em><\/p>\n<p style=\"padding-left: 30px;\">Our first Big Think comes from <a href=\"https:\/\/www.eecs.berkeley.edu\/Faculty\/Homepages\/russell.html\" target=\"_blank\">Stuart Russell<\/a>. He\u2019s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?<\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #800000;\"><strong>\u201cIn the future, moral philosophy will be a key industry sector,\u201d says Russell.<\/strong><\/span><\/p>\n<p style=\"padding-left: 30px;\">Translation? In the future, the nature of human values and the process by which we make moral decisions will be\u00a0<i>big<\/i> business in tech.<\/p>\n<p>&nbsp;<\/p>\n<p class=\"entry-title\"><strong><a href=\"http:\/\/www.washington.edu\/news\/2015\/11\/03\/life-enhanced-uw-professors-study-legal-social-complexities-of-an-augmented-reality-future\/\" target=\"_blank\">Life, enhanced: UW professors study legal, social complexities of an augmented reality future<\/a><\/strong> &#8212; from washington.edu by Peter Kelley<\/p>\n<p><em>Excerpt:<\/em><\/p>\n<p style=\"padding-left: 30px;\">But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. 
Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction \u2014 as well as potential discrimination \u2014 are bound to follow.<\/p>\n<p><span style=\"color: #800000;\"><em>An excerpt from:<\/em><\/span><\/p>\n<p><a href=\"http:\/\/techpolicylab.org\/wp-content\/uploads\/2015\/10\/Augmented_Reality_Primer.pdf\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-52404\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/UW-AR-TechPolicyPrimer-Nov2015.jpg\" alt=\"UW-AR-TechPolicyPrimer-Nov2015\" width=\"397\" height=\"534\" srcset=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/UW-AR-TechPolicyPrimer-Nov2015.jpg 397w, http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/UW-AR-TechPolicyPrimer-Nov2015-112x150.jpg 112w\" sizes=\"auto, (max-width: 397px) 100vw, 397px\" \/><\/a><\/p>\n<p style=\"padding-left: 30px;\"><strong>THREE: CHALLENGES FOR LAW AND POLICY<\/strong><br \/>\nAR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include&#8230;<\/p>\n<p>&nbsp;<\/p>\n<p class=\"entry-title\"><a href=\"http:\/\/www.hrmorning.com\/hr-tech-is-getting-weird-and-heres-why\/\" target=\"_blank\"><strong>HR tech is getting weird, and here\u2019s why<\/strong><\/a> &#8212; from hrmorning.com by guest poster Julia Scavicchio<\/p>\n<p><em>Excerpt <span style=\"color: #800000;\">(emphasis DSC):<\/span><\/em><\/p>\n<p style=\"padding-left: 30px;\"><span style=\"color: #800000;\">Technology has progressed to the point where it\u2019s possible for HR to learn almost everything there is to know about employees \u2014 from what they\u2019re doing moment-to-moment at work to what they\u2019re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.<\/span><span id=\"more-38920\"><\/span><\/p>\n<p style=\"padding-left: 30px;\">Why on Earth does HR need all this data? 
The answer is simple \u2014 HR is not on Earth, it\u2019s in the cloud.<\/p>\n<p style=\"padding-left: 30px;\">The department transcends traditional roles when data enters the picture.<\/p>\n<p style=\"padding-left: 30px;\">Many ethical questions posed through technology easily come and go because they seem out of this world.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/www.businessinsider.com\/artificial-intelligence-researchers-most-impressive-robots-2015-11\" target=\"_blank\"><strong>18 AI researchers reveal the most impressive thing they&#8217;ve ever seen<\/strong><\/a> &#8212; from businessinsider.com by Guia Marie Del Prado<\/p>\n<p><em>Excerpt:<\/em><\/p>\n<p style=\"padding-left: 30px;\">Where will these technologies take us next? Well to know that we should determine what&#8217;s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.<br \/>\n&#8230;<\/p>\n<p style=\"padding-left: 30px;\">&#8220;The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.\u00a0 That&#8217;s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you&#8217;d be terrified.&#8221;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/formtek.com\/blog\/algorithmic-economy-powering-the-machine-to-machine-age-economic-revolution\/\" target=\"_blank\"><strong>Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution<\/strong><\/a> &#8212; from formtek.com by Dick Weisinger<\/p>\n<p><em>Excerpts:<\/em><\/p>\n<p style=\"padding-left: 30px;\">As technology advances, we are becoming increasingly dependent on algorithms for everything in our 
lives.\u00a0 Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.\u00a0 Algorithms are defining the future of business and even our everyday lives.<br \/>\n&#8230;<br \/>\nSondergaard said that \u201cin 2020, consumers won\u2019t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.\u00a0 The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.\u201d<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><em><span style=\"color: #800000;\">Related items:<\/span><\/em><\/p>\n<ul>\n<li><a href=\"http:\/\/www.techworld.com\/big-data\/how-ai-is-fuelling-car-industry-3629072\/\" target=\"_blank\"><strong>Toyota, Tesla or Google: who&#8217;s spending the most on artificial intelligence in the car industry?<\/strong><\/a> &#8212; from techworld.com by Margi Murphy<br \/>\n<em>Which businesses are investing in artificial intelligence? 
Read how artificial intelligence will change the car industry, and other sectors, as we know it.<\/em><\/li>\n<li><a href=\"https:\/\/wtvox.com\/robotics\/toyota-invests-1-billion-in-artificial-intelligence-in-us\/\" target=\"_blank\"><strong>Toyota Invests $1 Billion In Artificial Intelligence In US<\/strong><\/a> &#8212; from wtvox.com \/ Associated Press<\/li>\n<li><a href=\"https:\/\/wtvox.com\/emerging-tech\/top-10-emerging-technologies\/\" target=\"_blank\"><strong>Top 10 Emerging Technologies That Could Transform Our Future<\/strong><\/a> &#8212; from wtvox.com by E Aston<\/li>\n<li><a href=\"http:\/\/techcrunch.com\/2015\/11\/15\/what-technology-will-look-like-in-five-years\/\" target=\"_blank\"><strong>What Technology Will Look Like In Five Years<\/strong> <\/a>&#8212; from techcrunch.com by Diomedes Kastanis<\/li>\n<li><a href=\"http:\/\/www.fastcompany.com\/3052125\/innovation-agents\/welcome-to-brain-sciences-next-frontier-virtual-reality\" target=\"_blank\"><strong>Welcome To Brain Science&#8217;s Next Frontier: Virtual Reality<\/strong><\/a> &#8212; from fastcompany.com by Tina Amirtha<br \/>\n<em>Amy Robinson, executive director at the startup EyeWire, is making neuroscience into a playground for the hot tech du jour.<\/em><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\"><em>Addendums:<\/em><\/span><\/p>\n<ul>\n<li><strong><a href=\"https:\/\/www.abiresearch.com\/press\/abi-research-shows-augmented-reality-rise-total-ma\/\" target=\"_blank\">ABI Research Shows Augmented Reality on the Rise with Total Market Worth to Reach $100 Billion by 2020<\/a><\/strong>\u00a0&#8212; from abiresearch.com<\/li>\n<li><a href=\"http:\/\/www.coinspeaker.com\/2015\/11\/14\/self-driving-cars-will-dominate-the-roads-by-2030-says-internet-of-things-visionary\/\" target=\"_blank\"><strong>\u2018Self-Driving Cars Will Dominate the Roads by 2030,\u2019 Says Internet of Things Visionary<\/strong><\/a> &#8212; from coinspeaker.com by <span 
class=\"article-author\">Eugenia Romanenko<\/span><\/li>\n<li><a href=\"http:\/\/qz.com\/559432\/robots-are-learning-to-say-no-to-human-orders-and-your-life-may-depend-on-it\/\" target=\"_blank\"><strong>Robots are learning to say \u201cno\u201d to human orders<\/strong><\/a> &#8212; from quartz.com by Kit Eaton<br \/>\n<em>Excerpt:<\/em><br \/>\nIt may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But <a href=\"http:\/\/spectrum.ieee.org\/automaton\/robotics\/artificial-intelligence\/researchers-teaching-robots-how-to-best-reject-orders-from-humans\">researchers in Massachusetts are trying something<\/a> that many a science fiction movie has already anticipated: They\u2019re teaching robots to say \u201cno\u201d to some instructions. For robots wielding potentially dangerous-to-humans tools on a car production line, it\u2019s pretty clear that the robot should always precisely follow its programming. But we\u2019re building more-clever robots every day and we\u2019re giving them the power to decide what to do all by themselves. This leads to a tricky issue: How exactly do you program a robot to think through its orders and overrule them if it decides they\u2019re wrong or dangerous to either a human or itself? 
This is what researchers at Tufts University\u2019s Human-Robot Interaction Lab are tackling, and they\u2019ve come up with at least one strategy for intelligently rejecting human orders.<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><a href=\"http:\/\/qz.com\/559432\/robots-are-learning-to-say-no-to-human-orders-and-your-life-may-depend-on-it\/\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-52598\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/robots-saying-no.jpg\" alt=\"robots-saying-no\" width=\"437\" height=\"370\" srcset=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/robots-saying-no.jpg 437w, http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2015\/11\/robots-saying-no-150x127.jpg 150w\" sizes=\"auto, (max-width: 437px) 100vw, 437px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"color: #800000;\"><em>Addendum on 12\/14\/15:<\/em><\/span><\/p>\n<ul>\n<li><strong><a href=\"http:\/\/qz.com\/564269\/algorithms-rule-our-lives-so-who-should-rule-them\/\" target=\"_blank\">Algorithms rule our lives, so who should rule them?<\/a><\/strong>\u00a0&#8212; from qz.com by Dries Buytaert<br \/>\n<em>As technology advances and more everyday objects are driven almost entirely by software, it\u2019s become clear that we need a better way to catch cheating software and keep people safe.<\/em><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>From DSC: This posting is meant to surface the need for debates\/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future that we want.\u00a0 Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then to be actively 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[113,329,435,356,314,224,318,387,473,86,498,45,72,403,287,37,35,95,500,391,419,458,74,285,511,353,509,44,201,330,204,101,486,248,437,480,293,20,212,460,454,206,38,321,253,299],"tags":[],"class_list":["post-52368","post","type-post","status-publish","format-standard","hentry","category-21st-century","category-24x7x365-access","category-analytics","category-artificial-intelligence-agents-llms-and-related","category-asia","category-augmented-reality","category-australia","category-business","category-canada","category-change","category-communities-of-practice","category-computer-science","category-daniel-s-christian","category-ethics","category-europe","category-future","category-game-changing-environment","category-global-globalization","category-hearts-matters-of-the-heart","category-human-computer-interaction-hci","category-ideas-teaching","category-india","category-leadership","category-legislation-legislatures","category-machine-to-machine-communications","category-moralsvalues","category-near-field-communication-nfc","category-pace-of-change","category-policy","category-political-science","category-programming","category-psychology","category-real-world-assignments","category-relationships","category-robotics","category-society","category-sociology","category-strategy","category-surviving","category-technology-general","category-the-downsides-of-technology","category-trends","category-uk","category-united-states","category-virtual-reality-worlds-learning","category-workplace"],"_links":{"self":[{"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/52368","targetHints":{"allow":["GET"]}
}],"collection":[{"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/comments?post=52368"}],"version-history":[{"count":40,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/52368\/revisions"}],"predecessor-version":[{"id":52680,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/52368\/revisions\/52680"}],"wp:attachment":[{"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/media?parent=52368"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/categories?post=52368"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/tags?post=52368"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}