From DSC:
This type of technology could be good, or it could be bad…or, like many technologies, it could be both, depending upon how it’s used. The resources below mention some positive applications, but also some troubling ones.


 

Lyrebird claims it can recreate any voice using just one minute of sample audio — from theverge.com by James Vincent
The results aren’t 100 percent convincing, but it’s a sign of things to come

Excerpt:

Artificial intelligence is making human speech as malleable and replicable as pixels. Today, a Canadian AI startup named Lyrebird unveiled its first product: a set of algorithms the company claims can clone anyone’s voice by listening to just a single minute of sample audio.

 

 

 

 

 

Also see:

 

Imitating people’s speech patterns precisely could bring trouble — from economist.com by
You took the words right out of my mouth

Excerpt:

UTTER 160 or so French or English phrases into a phone app developed by CandyVoice, a new Parisian company, and the app’s software will reassemble tiny slices of those sounds to enunciate, in a plausible simulacrum of your own dulcet tones, whatever typed words it is subsequently fed. In effect, the app has cloned your voice. The result still sounds a little synthetic but CandyVoice’s boss, Jean-Luc Crébouw, reckons advances in the firm’s algorithms will render it increasingly natural. Similar software for English and four widely spoken Indian languages, developed under the name of Festvox, by Carnegie Mellon University’s Language Technologies Institute, is also available. And Baidu, a Chinese internet giant, says it has software that needs only 50 sentences to simulate a person’s voice.
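To make the “reassemble tiny slices of those sounds” idea concrete, here is a minimal Python sketch of concatenative synthesis. The per-character unit bank, crossfade length, and white-noise “audio” are invented placeholders for illustration only; this is not CandyVoice’s or Festvox’s actual pipeline.

```python
# Minimal sketch of the concatenative idea described above: slice enrollment
# audio into small units, then reassemble those units to speak new text.
# The unit inventory and crossfade length are hypothetical simplifications.
import numpy as np

SAMPLE_RATE = 16_000
CROSSFADE = 160  # 10 ms overlap to smooth joins between slices


def crossfade_concat(units: list) -> np.ndarray:
    """Join audio slices with a short linear crossfade at each boundary."""
    out = units[0].astype(np.float32)
    ramp = np.linspace(0.0, 1.0, CROSSFADE, dtype=np.float32)
    for u in units[1:]:
        u = u.astype(np.float32)
        head, tail = u[:CROSSFADE], out[-CROSSFADE:]
        out = np.concatenate([out[:-CROSSFADE], tail * (1 - ramp) + head * ramp, u[CROSSFADE:]])
    return out


def synthesize(text: str, unit_bank: dict) -> np.ndarray:
    """Very naive 'synthesis': one stored slice per character we have seen."""
    units = [unit_bank[ch] for ch in text.lower() if ch in unit_bank]
    return crossfade_concat(units)


if __name__ == "__main__":
    # Pretend enrollment gave us one 80 ms slice per letter (white noise here).
    rng = np.random.default_rng(0)
    bank = {ch: rng.standard_normal(int(0.08 * SAMPLE_RATE)) for ch in "helowrd"}
    audio = synthesize("hello world", bank)
    print(f"{audio.size / SAMPLE_RATE:.2f} s of synthesized audio")
```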

Until recently, voice cloning—or voice banking, as it was then known—was a bespoke industry which served those at risk of losing the power of speech to cancer or surgery.

More troubling, any voice—including that of a stranger—can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama, Birmingham, led by Nitesh Saxena, were able to use Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer.
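And here is a hedged sketch of why such clones can defeat voice biometrics: many systems reduce a recording to a fixed-length “voiceprint” vector and accept a caller whose similarity to the enrolled vector clears a threshold. The embed() function below is a crude stand-in (a normalized spectrum), not a real speaker model, and the threshold is invented.

```python
# Sketch of threshold-based voice verification: a convincing clone lands close
# enough to the enrolled voiceprint to clear the acceptance threshold.
import numpy as np


def embed(audio: np.ndarray) -> np.ndarray:
    """Placeholder voiceprint: a crude, normalized spectral summary."""
    spectrum = np.abs(np.fft.rfft(audio, n=1024))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)


def accepts(enrolled: np.ndarray, attempt: np.ndarray, threshold: float = 0.85) -> bool:
    score = float(np.dot(embed(enrolled), embed(attempt)))
    return score >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    genuine = rng.standard_normal(16_000)
    clone = genuine + 0.1 * rng.standard_normal(16_000)   # close mimic of the voice
    stranger = rng.standard_normal(16_000)                 # unrelated voice
    print("clone accepted:   ", accepts(genuine, clone))
    print("stranger accepted:", accepts(genuine, stranger))
```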

 

 

Per Candyvoice.com:

An expert in digital voice processing, CandyVoice offers software to facilitate and improve vocal communication between people and communicating objects, with applications in:

Health
Customize your augmentative and alternative communication devices by integrating your users’ personal voice models into them

Robots & Communicating objects
Improve communication with robots through voice conversion, customized TTS, and noise filtering

Video games
Enhance the gaming experience by integrating real-time voice conversion for characters’ voices and customized TTS

 

 

Also related:

 

 

From DSC:
Given this type of technology, what’s to keep someone from cloning a person’s voice, putting together whatever they want that person to say, and then making it appear that Alexa had recorded that person saying it?

 

 

 

 

From DSC:
The recent pieces below made me once again reflect on the massive changes that are quickly approaching — and in some cases are already here — for a variety of nations throughout the world.

They caused me to reflect on:

  • What the potential ramifications for higher education might be of the changes that are just starting to take place in the workplace due to artificial intelligence (i.e., the increasing use of algorithms, machine learning, deep learning, etc.), automation, and robotics
  • The need for people to reinvent themselves quickly throughout their careers (if we can still call them careers)
  • How should we, as a nation, prepare for these massive changes so that there isn’t civil unrest due to soaring inequality and unemployment?

As found in the April 9th, 2017 edition of our local newspaper here:

When even our local newspaper is picking up on this trend, you know it is real and has some significance to it.

 

Then, as I was listening to the radio a day or two after seeing the above article, I heard another related piece on NPR: the network is having a journalist travel across the country, trying to identify “robot-safe” jobs. Here’s the feature on this from MarketPlace.org

 

 

What changes do institutions of traditional higher education
immediately need to begin planning for? Initiating?

What changes should be planned for, and initiated,
in the way(s) that we accredit new programs?

 

 

Keywords/ideas that come to my mind:

  • Change — to society, to people, to higher ed, to the workplace
  • Pace of technological change — no longer linear, but exponential
  • Career development
  • Staying relevant — as institutions, as individuals in the workplace
  • Reinventing ourselves over time — and having to do so quickly
  • Adapting, being nimble, willing to innovate — as institutions, as individuals
  • Game-changing environment
  • Lifelong learning — higher ed needs to put more emphasis on microlearning, heutagogy, and delivering constant/up-to-date streams of content and learning experiences. This could happen via the addition/use of smaller learning hubs, some of them makeshift hubs operating out of locations that these institutions don’t even own…like your local Starbucks.
  • If we don’t get this right, there could be major civil unrest as inequality and unemployment soar
  • Traditional institutions of higher education have not been nearly as responsive to change as they have needed to be; this opens the door to alternatives. There’s a limited (and closing) window of time left to become more nimble and responsive before these alternatives majorly disrupt the current world of higher education.

 

 

 



Addendum from the corporate world (emphasis DSC):



 

From The Impact 2017 Conference:

The Role of HR in the Future of Work – A Town Hall

  • Josh Bersin, Principal and Founder, Bersin by Deloitte, Deloitte Consulting LLP
  • Nicola Vogel, Global Senior HR Director, Danfoss
  • Frank Møllerop, Chief Executive Officer, Questback
  • David Mallon, Head of Research, Bersin by Deloitte, Deloitte Consulting LLP

Massive changes spurred by new technologies such as artificial intelligence, mobile platforms, sensors and social collaboration have revolutionized the way we live, work and communicate – and the pace is only accelerating. Robots and cognitive technologies are making steady advances, particularly in jobs and tasks that follow set, standardized rules and logic. This reinforces a critical challenge for business and HR leaders—namely, the need to design, source, and manage the future of work.

In this Town Hall, we will discuss the role HR can play in leading the digital transformation that is shaping the future of work in organizations worldwide. We will explore the changes we see taking place in three areas:

  • Digital workforce: How can organizations drive new management practices, a culture of innovation and sharing, and a set of talent practices that facilitate a new network-based organization?
  • Digital workplace: How can organizations design a working environment that enables productivity; uses modern communication tools (such as Slack, Workplace by Facebook, Microsoft Teams, and many others); and promotes engagement, wellness, and a sense of purpose?
  • Digital HR: How can organizations change the HR function itself to operate in a digital way, use digital tools and apps to deliver solutions, and continuously experiment and innovate?
 

The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.

Excerpt:

Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.

“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini  says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”

 

 

 

Asilomar AI Principles

These principles were developed in conjunction with the 2017 Asilomar conference (videos here), through the process described here.

 

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Research Issues

 

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

 

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

 

19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

 

 

Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?

  • Talks:
    • Demis Hassabis (DeepMind)
    • Ray Kurzweil (Google) (video)
    • Yann LeCun (Facebook/NYU) (pdf) (video)
  • Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
  • Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
    • Talks:
      • Shane Legg (DeepMind)
      • Nick Bostrom (Oxford) (pdf) (video)
      • Jaan Tallinn (CSER/FLI) (pdf) (video)
    • Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
    • Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
  • Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
    • Talks:
      • Matt Scherer (pdf) (video)
      • Heather Roff-Perkins (Oxford)
    • Panel with Martin Rees (CSER/Cambridge), Heather Roff-Perkins, Jason Matheny (IARPA), Steve Goose (HRW), Irakli Beridze (UNICRI), Rao Kambhampati (AAAI, ASU), Anthony Romero (ACLU): Policy & Governance (video)
    • Panel with Kate Crawford (Microsoft/MIT), Matt Scherer, Ryan Calo (U. Washington), Kent Walker (Google), Sam Altman (OpenAI): AI & Law (video)
    • Panel with Kay Firth-Butterfield (IEEE, Austin-AI), Wendell Wallach (Yale), Francesca Rossi (IBM/Padova), Huw Price (Cambridge, CFI), Margaret Boden (Sussex): AI & Ethics (video)

 

 

 

Infected Vending Machines And Light Bulbs DDoS A University — from forbes.com by Lee Mathews; with a shout out to eduwire for this resource

Excerpt:

IoT devices have become a favorite weapon of cybercriminals. Their generally substandard security — and the sheer numbers of connected devices — make them an enticing target. We’ve seen what a massive IoT botnet is capable of doing, but even a relatively small one can cause a significant amount of trouble.

A few thousand infected IoT devices can cut a university off from the Internet, according to an incident that the Verizon RISK (Research, Investigations, Solutions and Knowledge) team was asked to assist with. All the attacker had to do was re-program the devices so they would periodically try to connect to seafood-related websites.

How can that simple act grind Internet access to a halt across an entire university network? By training around 5,000 devices to send DNS queries simultaneously…
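As a rough illustration of one defensive response (not the actual tooling the Verizon RISK team or the university used), a campus resolver can rate-limit DNS queries per client so that a sudden burst from a compromised vending machine gets flagged or dropped instead of amplified. The limits below are arbitrary assumptions.

```python
# Generic sliding-window rate limiter for DNS queries, per client IP.
import time
from collections import defaultdict, deque
from typing import Optional


class DnsRateLimiter:
    def __init__(self, max_queries: int = 50, window_s: float = 10.0):
        self.max_queries = max_queries
        self.window_s = window_s
        self.history = defaultdict(deque)   # client IP -> timestamps of recent queries

    def allow(self, client_ip: str, now: Optional[float] = None) -> bool:
        """Return False once a client exceeds max_queries within the window."""
        now = time.monotonic() if now is None else now
        q = self.history[client_ip]
        while q and now - q[0] > self.window_s:
            q.popleft()          # drop queries that fell outside the window
        if len(q) >= self.max_queries:
            return False         # flag or drop: likely part of a botnet burst
        q.append(now)
        return True


if __name__ == "__main__":
    limiter = DnsRateLimiter(max_queries=5, window_s=1.0)
    results = [limiter.allow("10.0.0.42", now=0.01 * i) for i in range(8)]
    print(results)  # first 5 allowed, the rest rejected within the same second
```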

 

 

Hackers Use New Tactic at Austrian Hotel: Locking the Doors — from nytimes.com by Dan Bilefsky

Excerpt:

The ransom demand arrived one recent morning by email, after about a dozen guests were locked out of their rooms at the lakeside Alpine hotel in Austria.

The electronic key system at the picturesque Romantik Seehotel Jaegerwirt had been infiltrated, and the hotel was locked out of its own computer system, leaving guests stranded in the lobby, causing confusion and panic.

“Good morning?” the email began, according to the hotel’s managing director, Christoph Brandstaetter. It went on to demand a ransom of two Bitcoins, or about $1,800, and warned that the cost would double if the hotel did not comply with the demand by the end of the day, Jan. 22.

Mr. Brandstaetter said the email included details of a “Bitcoin wallet” — the account in which to deposit the money — and ended with the words, “Have a nice day!”

 

“Ransomware is becoming a pandemic,” said Tony Neate, a former British police officer who investigated cybercrime for 15 years. “With the internet, anything can be switched on and off, from computers to cameras to baby monitors.”

 

To guard against future attacks, however, he said the Romantik Seehotel Jaegerwirt was considering replacing its electronic keys with old-fashioned door locks and real keys of the type used when his great-grandfather founded the hotel. “The securest way not to get hacked,” he said, “is to be offline and to use keys.”

 

 

 

Regulation of the Internet of Things — from schneier.com by Bruce Schneier

Excerpt (emphasis DSC):

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam —­ or thermostat, or refrigerator ­— with nice features at a good price. Even after they were recruited into this botnet, they still work fine ­— you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

 

 

From DSC:
We have to do something about these security-related issues — now!  If not, you can kiss the Internet of Things goodbye — or at least I sure hope so. Don’t get me wrong. I’d like to see the Internet of Things come to fruition in many areas. However, if governments and law enforcement agencies aren’t going to get involved to fix the problems, I don’t want to see the Internet of Things take off.  The consequences of not getting this right are too huge — with costly ramifications.  As Bruce mentions in his article, it will likely take government regulation before this type of issue goes away.

 

 

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Bruce Schneier

 

 

 



Addendum on 2/15/17:

I was glad to learn of the following news today:

  • NXP Unveils Secure Platform Solution for the IoT — from finance.yahoo.com
    Excerpt:
    SAN FRANCISCO, Feb. 13, 2017 (GLOBE NEWSWIRE) — RSA Conference 2017 – Electronic security and trust are key concerns in the digital era, which are magnified as everything becomes connected in the Internet of Things (IoT). NXP Semiconductors N.V. (NXPI) today disclosed details of a secure platform for building trusted connected products. The QorIQ Layerscape Secure Platform, built on the NXP trust architecture technology, enables developers of IoT equipment to easily build secure and trusted systems. The platform provides a complete set of hardware, software and process capabilities to embed security and trust into every aspect of a product’s life cycle.

    Recent security breaches show that even mundane devices like web-cameras or set-top boxes can be used to both attack the Internet infrastructure and/or spy on their owners. IoT solutions cannot be secured against such misuse unless they are built on technology that addresses all aspects of a secure and trusted product lifecycle. In offering the Layerscape Secure Platform, NXP leverages decades of experience supplying secure embedded systems for military, aerospace, and industrial markets.

 

 

Code-Dependent: Pros and Cons of the Algorithm Age — from pewinternet.org by Lee Rainie and Janna Anderson
Algorithms are aimed at optimizing everything. They can save lives, make things easier and conquer chaos. Still, experts worry they can also put too much control in the hands of corporations and governments, perpetuate bias, create filter bubbles, cut choices, creativity and serendipity, and could result in greater unemployment

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms. Smartphone apps are nothing but algorithms. Computer and video games are algorithmic storytelling. Online dating and book-recommendation and travel websites would not function without algorithms. GPS mapping systems get people from point A to point B via algorithms. Artificial intelligence (AI) is naught but algorithms. The material people see on social media is brought to them by algorithms. In fact, everything people see and do on the web is a product of algorithms. Every time someone sorts a column in a spreadsheet, algorithms are at play, and most financial transactions today are accomplished by algorithms. Algorithms help gadgets respond to voice commands, recognize faces, sort photos and build and drive cars. Hacking, cyberattacks and cryptographic code-breaking exploit algorithms. Self-learning and self-programming algorithms are now emerging, so it is possible that in the future algorithms will write many if not most algorithms.
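To make that definition concrete: sorting a spreadsheet column really is just an algorithm, i.e., a fixed sequence of steps. A minimal sketch, with made-up sales figures:

```python
# A spreadsheet "sort column" action is nothing more than steps like these.
def insertion_sort(column: list) -> list:
    """Sort a spreadsheet-style column of numbers, step by explicit step."""
    values = list(column)                 # work on a copy
    for i in range(1, len(values)):
        current = values[i]
        j = i - 1
        while j >= 0 and values[j] > current:
            values[j + 1] = values[j]     # shift larger entries to the right
            j -= 1
        values[j + 1] = current           # insert into its sorted position
    return values


if __name__ == "__main__":
    q1_sales = [1200.0, 340.5, 980.0, 77.25]
    print(insertion_sort(q1_sales))       # [77.25, 340.5, 980.0, 1200.0]
```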

Algorithms are often elegant and incredibly useful tools used to accomplish tasks. They are mostly invisible aids, augmenting human lives in increasingly incredible ways. However, sometimes the application of algorithms created with good intentions leads to unintended consequences. Recent news items tie to these concerns…

 

The use of algorithms is spreading as massive amounts of data are being created, captured and analyzed by businesses and governments. Some are calling this the Age of Algorithms and predicting that the future of algorithms is tied to machine learning and deep learning that will get better and better at an ever-faster pace.

 

 

 

 

 

 

A massive AI partnership is tapping civil rights and economic experts to keep AI safe — from qz.com by Dave Gershgorn

Excerpt:

When the Partnership on Artificial Intelligence to Benefit People and Society was announced in September, it was with the stated goal of educating the public on artificial intelligence, studying AI’s potential impact on the world, and establishing industry best practices. Now, how those goals will actually be achieved is becoming clearer.

This week, the Partnership brought on new members that include representatives from the American Civil Liberties Union, the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

The organizations themselves are not officially affiliated yet—that process is still underway—but the Partnership’s board selected these candidates based on their expertise in civil rights, economics, and open research, according to interim co-chair Eric Horvitz, who is also director of Microsoft Research. The Partnership also added Apple as a “founding member,” putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board.

 

 


Also relevant/see:

Building Public Policy To Address Artificial Intelligence’s Impact — from blogs.wsj.com by Irving Wladawsky-Berger

Excerpt:

Artificial intelligence may be at a tipping point, but it’s not immune to backlash from users in the event of system mistakes or a failure to meet heightened expectations. As AI becomes increasingly used for more critical tasks, care needs to be taken by proponents to avoid unfulfilled promises as well as efforts that appear to discriminate against certain segments of society.

Two years ago, Stanford University launched the One Hundred Year Study of AI to address “how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.” One of its key missions is to convene a Study Panel of experts every five years to assess the then current state of the field, as well as to explore both the technical advances and societal challenges over the next 10 to 15 years.

The first such Study Panel recently published Artificial Intelligence and Life in 2030, a report that examined the likely impact of AI on a typical North American city by the year 2030.

 

 

Virtual reality is actually here — from computerworld.in by Bart Perkins

Excerpts:

In parallel with gaming, VR is expanding into many other areas, including these:

  • Healthcare
    Surgical Theater is working with UCLA, New York University, the Mayo Clinic and other major medical centers to use VR to help surgeons prepare for difficult operations. Virtual 3D models are constructed from MRIs, CAT scans and/or ultrasounds.
  • Mental health
    Meditation promotes mental health by reducing stress and anxiety.
  • Education
    Unimersiv is focusing on historical sites, creating a series of VR tours for the Colosseum, Acropolis, Parthenon, Stonehenge, Titanic, etc. These tours allow each site to be explored as it existed when it was built. Additional locations’ virtual sites and attractions will undoubtedly be added in the near future. The British Museum offered a Virtual Reality Weekend in August 2015. Visitors were able to explore a Bronze Age roundhouse with a flickering fire and changing levels of light while they “handled” Bronze Age relics. The American Museum of Natural History allows students anywhere in the world to take virtual tours of selected museum exhibits, and other museums will soon follow.
  • Training
    Virtual reality is an excellent tool when the task is dangerous or the equipment involved is expensive.
  • Crime reconstruction
  • Architecture
  • Collaboration
    Virtual reality, augmented reality and mixed reality will form the basis for the next set of collaboration tools.

 

 

 

VR and education: Why we shouldn’t wait to reap the benefits – from medium.com by Josh Maldonad

Excerpts:

However, we see very little experienced-based learning in all levels of education today. Traditional learning consists of little more than oration through lectures and textbooks (and their digital equivalents). Experience-based learning is often very difficult to facilitate in the classroom. Whether it be a field trip in elementary school, or simulation exercises in med school, it can be tedious, costly and time consuming.

Where VR is really winning in education is in subject matter retention. The first of several surveys that we’ve done was based on a VR field trip through the circulatory system with high-school age children. We saw an increase of nearly 80% in subject matter retention from a group that used VR, compared against a control group that was provided the same subject matter via text and image. (I’ll expand on the details of this experiment, and some research initiatives we’re working on in another blog post).

http://uploadvr.com/chinese-vr-education-study/

Example apps in healthcare:

  • Emergency response and Triage Decision making
  • Nursing fundamentals, safety and communication procedures
  • Anesthesiology: patient monitoring and dosage delivery

 

 

Residential design and virtual reality: a better way to build a home? — from connectedlife.style

Excerpt:

The old phrase of ‘needing to see it to believe it’ is a powerful mantra across all aspects of residential design. Architecture, interior design and property development are all highly visual trades that require buy-in from both those working on the project and the client. As such, making sure everyone is sold on a coherent vision is vital to ensure that everything goes smoothly and no one is left dissatisfied when the project is completed.

 

 

 

Google Translate: Updated
For those travelers out there, you might want to know about Google Translate’s ability to read the text in an image (a sign, a label, etc.) in one language and provide you with a translation.

Also see:

 

From this page, here are some of the visual translation products:

 

 

Now HoloLens lets you check your mail in a wall-sized mixed reality version of Outlook — from pcworld.com by Ian Paul
Now you can check your email or make a calendar appointment without removing Microsoft’s augmented reality headset.


You now can pin multiple 2D apps in virtual space,
and Microsoft’s HoloLens will remember where they are.

 

 

VR in Education: What’s Already Happening in the Classroom — from arvrmagazine.com by Susanne Krause
“Engagement was off the charts”  | Connecting to the world and creating new ones using virtual reality

Excerpt:

It’s a way for educators to bring their students to places that would be out of reach otherwise. Google Expeditions, the VR mode of Google Street View and Nearpod’s virtual field trips are among the most popular experiences teachers explore with their students. “Some of our students have never really left the bubbles of their own town”, says Jaime Donally, creator of the #ARVRinEDU chat on Twitter. “Virtual reality is a relatively inexpensive way to show them the world.”

 

 

How augmented reality is transforming building management — from ibm.com
IBM People for Smarter Cities presents “Dublin lab – Cognitive Buildings”

In the video below, a facilities manager uses a mobile device to scan a QR code on a wall, behind which is a critical piece of HVAC equipment. With one scan, we can view data on the asset’s performance and health, as well as location data for the asset. This data is pulled by the IoT Platform from the asset itself, from TRIRIGA, and from any other useful sources.
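A rough sketch of that flow, under assumptions of my own: the QR payload resolves to an asset ID, and the app merges live telemetry with the facilities record. The endpoints, field names, and merge logic below are hypothetical placeholders, not IBM’s Watson IoT or TRIRIGA APIs.

```python
# Hypothetical QR-scan-to-asset-view flow for a building-management app.
from dataclasses import dataclass


@dataclass
class AssetView:
    asset_id: str
    location: str
    health: str
    latest_telemetry: dict


def decode_qr(payload: str) -> str:
    """Assume the QR code simply encodes 'asset:<id>'."""
    prefix, _, asset_id = payload.partition(":")
    if prefix != "asset" or not asset_id:
        raise ValueError(f"unrecognised QR payload: {payload!r}")
    return asset_id


def build_asset_view(payload: str, iot_platform, facilities_db) -> AssetView:
    """Merge live IoT readings with facilities records for one scanned asset."""
    asset_id = decode_qr(payload)
    telemetry = iot_platform.latest(asset_id)        # e.g. supply temp, fan status
    record = facilities_db.lookup(asset_id)          # e.g. floor, room, history
    return AssetView(
        asset_id=asset_id,
        location=record["location"],
        health="alert" if telemetry.get("fault") else "ok",
        latest_telemetry=telemetry,
    )


if __name__ == "__main__":
    class FakeIoT:                                    # stand-in for the IoT platform
        def latest(self, asset_id):
            return {"supply_temp_c": 11.5, "fan": "on", "fault": False}

    class FakeTririga:                                # stand-in for the facilities system
        def lookup(self, asset_id):
            return {"location": "Dublin lab, plant room 2"}

    print(build_asset_view("asset:ahu-17", FakeIoT(), FakeTririga()))
```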

 

 

 

 

 

 

Excerpt:

But the best experiences, VR acolytes agree, are yet to come. Resh Sidhu leads VR development for Framestore, the high-end visual effects house that won an Oscar for the movie Gravity, and has since expanded into creating VR content. With hardware finally delivering on its promise, she believes it is now up to creatives to explore the possibilities.

 

 

HTC Brings VR Center to Paris; Vive Exhibit at Nobel Museum — from vrscout.com by Jonathan Nafarrete

Excerpt:

There’s so much more to VR than just gaming. Which is probably why HTC has been exploring entirely new ways to bring VR to art, education and culture — starting with museums around the world.

HTC recently collaborated with TIME-LIFE on “Remembering Pearl Harbor,” a VR experience commemorating the 75th anniversary of the attack with exhibitions at the Intrepid Sea, Air and Space Museum in New York City and the Newseum in Washington D.C. Last month, Vive also collaborated with the Royal Academy of Arts in London on the world’s first 3-D printed VR art exhibit.

Now HTC Vive has revealed the launch of a new VR center at La Geode, part of Paris’ Science and Industry Museum, as well as a partnership with the Nobel Museum for a first-of-its-kind VR exhibit showcasing the contributions of Nobel laureates.

 

 

 

Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

 

But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

 

 

Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors), ” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”
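To make Gartner’s rule-based vs. learning distinction concrete, here is a toy comparison: a hand-coded spam rule next to a tiny perceptron that learns its own decision boundary from labeled examples. The features, labels, and thresholds are synthetic assumptions of mine, not anything from the Gartner report.

```python
# Hand-written rule vs. a decision boundary learned from data.
import numpy as np


def rule_based(message_len: float, exclamation_count: float) -> int:
    """Hand-coded rule: flag short, exclamation-heavy messages as spam."""
    return int(message_len < 40 and exclamation_count >= 3)


def train_perceptron(X: np.ndarray, y: np.ndarray, epochs: int = 20) -> np.ndarray:
    """Learn weights from labeled examples instead of hard-coding the threshold."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add a bias column
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = int(xi @ w > 0)
            w += (yi - pred) * xi                     # classic perceptron update
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    spam = np.column_stack([rng.normal(25, 5, 50), rng.normal(4, 1, 50)])
    ham = np.column_stack([rng.normal(120, 30, 50), rng.normal(0.5, 0.5, 50)])
    X = np.vstack([spam, ham])
    y = np.array([1] * 50 + [0] * 50)
    w = train_perceptron(X, y)
    test = np.array([30, 5, 1.0])                     # short, shouty message + bias term
    print("rule says:", rule_based(30, 5), "| learned model says:", int(test @ w > 0))
```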

 


 

 

 

 


 

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.

 

Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

 

Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
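A minimal sketch of that whole-sentence idea (a toy encoder-decoder, not Google’s production system): the encoder reads the entire source sentence into a context vector before the decoder emits any target words. Vocabulary sizes, dimensions, and the random inputs are arbitrary placeholders.

```python
# Toy sequence-to-sequence skeleton illustrating whole-sentence translation.
import torch
import torch.nn as nn


class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab: int, tgt_vocab: int, dim: int = 64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> torch.Tensor:
        # The encoder consumes the whole source sentence before any output is made.
        _, context = self.encoder(self.src_emb(src_ids))
        # The decoder starts from that sentence-level context and emits word by word.
        dec_states, _ = self.decoder(self.tgt_emb(tgt_ids), context)
        return self.out(dec_states)           # logits over the target vocabulary


if __name__ == "__main__":
    model = TinySeq2Seq(src_vocab=100, tgt_vocab=120)
    src = torch.randint(0, 100, (1, 7))       # one 7-word source sentence
    tgt = torch.randint(0, 120, (1, 9))       # shifted target words (teacher forcing)
    logits = model(src, tgt)
    print(logits.shape)                        # torch.Size([1, 9, 120])
```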

 

 

‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”

 

 

What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.
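Read as code, the loop above is just a pipeline of steps over a shared context. The sketch below maps each step to a function using the hospital-visit example from the article; the rules and messages are illustrative assumptions of mine, not Wang’s actual system.

```python
# The seven-step "AI loop" modeled as a pipeline of functions over one context.
from typing import Callable

Context = dict
Step = Callable[[Context], str]


def perception(ctx: Context) -> str:
    return f"Patient {ctx['patient']} has arrived at {ctx['department']}."

def notification(ctx: Context) -> str:
    return "Your 10:30 pre-op consultation is confirmed."

def suggestion(ctx: Context) -> str:
    return "Parking garage B is closest to your entrance."

def automation(ctx: Context) -> str:
    return "Check-in forms were pre-filled from your last visit."

def prediction(ctx: Context) -> str:
    return f"Expected wait time is {ctx['expected_wait_min']} minutes."

def prevention(ctx: Context) -> str:
    return "Allergy flag: penicillin noted on the medication order."

def situational_awareness(ctx: Context) -> str:
    return "Your surgeon is running 15 minutes late; coffee is one floor down."


AI_LOOP = [perception, notification, suggestion, automation,
           prediction, prevention, situational_awareness]

if __name__ == "__main__":
    visit = {"patient": "J. Doe", "department": "Day Surgery", "expected_wait_min": 20}
    for step in AI_LOOP:
        print(f"{step.__name__:>22}: {step(visit)}")
```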

 

 

Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach their goal of passing the test by March 2022, so the project is being brought to an end.

 

 

“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference

 

Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.
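One way to see why the bias is so hard to avoid: a model tuned for overall accuracy on skewed data quietly serves the under-represented group worse, without anyone writing a malicious rule. The numbers below are entirely synthetic and only illustrate the mechanism.

```python
# A single threshold fit for overall accuracy performs worse for the
# under-represented group B, even though no rule ever mentions the group.
import numpy as np

rng = np.random.default_rng(42)

# Score distributions differ slightly by group (e.g., a shifted proxy feature).
group_a_pos = rng.normal(7.0, 1.0, 900)   # 900 positive examples from group A
group_a_neg = rng.normal(4.0, 1.0, 900)
group_b_pos = rng.normal(6.0, 1.0, 50)    # only 50 positive examples from group B
group_b_neg = rng.normal(3.0, 1.0, 50)

scores = np.concatenate([group_a_pos, group_a_neg, group_b_pos, group_b_neg])
labels = np.concatenate([np.ones(900), np.zeros(900), np.ones(50), np.zeros(50)])

# "Training": pick the single cutoff that maximises overall accuracy.
cutoffs = np.linspace(scores.min(), scores.max(), 500)
accuracy = [((scores >= c).astype(int) == labels).mean() for c in cutoffs]
best = cutoffs[int(np.argmax(accuracy))]


def group_accuracy(pos, neg, cut):
    """Accuracy for one group under the shared cutoff."""
    return np.concatenate([pos >= cut, neg < cut]).mean()


print(f"chosen cutoff: {best:.2f}")
print(f"group A accuracy: {group_accuracy(group_a_pos, group_a_neg, best):.2%}")
print(f"group B accuracy: {group_accuracy(group_b_pos, group_b_neg, best):.2%}")
```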

 

 

 

The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that, more on my favorite place below.

 

 

 

How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. A five-year-old online clothing retailer, its success in this area reveals how AI and people can work together, with each side focused on its unique strengths.

 

 

 

 

[Image link: higher ed and thinking about AI, from washingtonpost.com, October 2016]

 

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.

 

As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.

 

Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.

 

 

From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

Higher education needs to rethink its offerings…or someone else will.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

A school bus, virtual reality, & an out-of-this-world journey — from goodmenproject.com
“Field Trip To Mars” is the barrier-shattering outcome of an ambitious mission to give a busload of people the same, Virtual Reality experience – going to Mars.

Excerpt:

Inspiration was Lockheed‘s goal when it asked its creative resources, led by McCann, to create the world’s first mobile group Virtual Reality experience. As one creator notes, VR now is essentially a private, isolating experience. But wouldn’t it be cool to give a busload of people the same, simultaneous VR experience? And then – just to make it really challenging – put the whole thing on wheels?

“Field Trip To Mars” is the barrier-shattering outcome of this ambitious mission.

 

From DSC:
This is incredible! Very well done. The visual experience even tracks the bus’s speed and its turns.

 

 

 


 

 

Ed Dept. Launches $680,000 Augmented and Virtual Reality Challenge — from thejournal.com by David Nagel

Excerpt:

The United States Department of Education (ED) has formally kicked off a new competition designed to encourage the development of virtual and augmented reality concepts for education.

Dubbed the EdSim Challenge, the competition is aimed squarely at developing students’ career and technical skills — it’s funded through the Carl D. Perkins Career and Technical Education Act of 2006 — and calls on developers and ed tech organizations to develop concepts for “computer-generated virtual and augmented reality educational experiences that combine existing and future technologies with skill-building content and assessment. Collaboration is encouraged among the developer community to make aspects of simulations available through open source licenses and low-cost shareable components. ED is most interested in simulations that pair the engagement of commercial games with educational content that transfers academic, technical, and employability skills.”

 

 

 

Virtual reality boosts students’ results — from raconteur.net b
Virtual and augmented reality can enable teaching and training in situations which would otherwise be too hazardous, costly or even impossible in the real world

Excerpt:

More recently, though, the concept described in Aristotle’s Nicomachean Ethics has been bolstered by further scientific evidence. Last year, a University of Chicago study found that students who physically experience scientific concepts, such as the angular momentum acting on a bicycle wheel spinning on an axle that they’re holding, understand them more deeply and also achieve significantly improved scores in tests.

 

 

 

 

 

 

 

Virtual and augmented reality are shaking up sectors — from raconteur.net by Sophie Charara
Both virtual and augmented reality have huge potential to leap from visual entertainment to transform the industrial and service sectors

 

 

 

 

Microsoft’s HoloLens could power tanks on a battlefield — from theverge.com by Tom Warren

Excerpt:

Microsoft might not have envisioned its HoloLens headset as a war helmet, but that’s not stopping Ukrainian company LimpidArmor from experimenting. Defence Blog reports that LimpidArmor has started testing military equipment that includes a helmet with Microsoft’s HoloLens headset integrated into it.

The helmet is designed for tank commanders to use alongside a Circular Review System (CRS) of cameras located on the sides of armored vehicles. Microsoft’s HoloLens gathers feeds from the cameras outside to display them in the headset as a full 360-degree view. The system even includes automatic target tracking, and the ability to highlight enemy and allied soldiers and positions.

 

 

 

Bring your VR to work — from itproportal.com by Timo Elliott, Josh Waddell
With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer.

Excerpt:

With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer — and that’s a blind spot that companies and CIOs can’t afford to have. It hasn’t been that long since consumer demand for the iPhone and iPad forced companies, grumbling all the way, into finding business cases for them. Gartner has said that the next five to ten years will bring “transparently immersive experiences” to the workplace. They believe this will introduce “more transparency between people, businesses, and things” and help make technology “more adaptive, contextual, and fluid.”

If digitally enhanced reality generates even half as much consumer enthusiasm as smartphones and tablets, you can expect to see a new wave of consumerisation of IT as employees who have embraced VR and AR at home insist on bringing it to the workplace. This wave of consumerisation could have an even greater impact than the last one. Rather than risk being blindsided for a second time, organisations would be well advised to take a proactive approach and be ready with potential business uses for VR and AR technologies by the time they invade the enterprise.

 

In Gartner’s latest emerging technologies hype cycle, Virtual Reality is already on the Slope of Enlightenment, with Augmented Reality following closely.

 

 

 

VR’s higher-ed adoption starts with student creation — from edsurge.com by George Lorenzo

Excerpt:

One place where students are literally immersed in VR is at Carnegie Mellon University’s Entertainment Technology Center (ETC). ETC offers a two-year Master of Entertainment Technology program (MET) launched in 1998 and cofounded by the late Randy Pausch, author of “The Last Lecture.”

MET starts with an intense boot camp called the “immersion semester,” in which students take a Building Virtual Worlds (BVW) course and a leadership course, along with courses in improvisational acting and visual storytelling. Pioneered by Pausch, BVW challenges students in small teams to create virtual reality worlds quickly over a period of two weeks, culminating in a presentation festival every December.

 

 

Apple patents augmented reality mapping system for iPhone — from appleinsider.com by Mikey Campbell
Apple on Tuesday was granted a patent detailing an augmented reality mapping system that harnesses iPhone hardware to overlay visual enhancements onto live video, lending credence to recent rumors suggesting the company plans to implement an iOS-based AR strategy in the near future.

 

 

A bug in the matrix: virtual reality will change our lives. But will it also harm us? — from theguardian.stfi.re
Prejudice, harassment and hate speech have crept from the real world into the digital realm. For virtual reality to succeed, it will have to tackle this from the start

 

 

 

The latest Disney Research innovation lets you feel the rain in virtual reality — from haptic.al by Deniz Ergurel

Excerpt:

Virtual reality is a combination of life-like images, effects and sounds that creates an imaginary world in front of our eyes.

But what if we could also imitate more complex sensations like the feeling of falling rain, a beating heart or a cat walking? What if we could distinguish, between a light sprinkle and a heavy downpour in a virtual experience?

Disney Research, a network of research laboratories supporting The Walt Disney Company, has announced the development of a 360-degree virtual reality application offering a library of feel effects and full-body sensations.

 

 

Relive unforgettable moments in history through Timelooper APP. | Virtual reality on your smartphone.

 


 

 

Literature class meets virtual reality — from blog.cospaces.io by Susanne Krause
Not every student finds it easy to let a novel come to life in their imagination. Could virtual reality help? Tiffany Capers gave it a try: She let her 7th graders build settings from Lois Lowry’s “The Giver” with CoSpaces and explore them in virtual reality. And: they loved it.

 

 

 

 

[Image: learning vocabulary in VR, November 2016]

 

 

 

James Bay students learn Cree syllabics in virtual reality — from cbc.ca by Celina Wapachee and Jaime Little
New program teaches syllabics inside immersive world, with friendly dogs and archery

 

 

 

VRMark will tell you if your PC is ready for Virtual Reality — from engadget.com by Sean Buckley
Benchmark before you buy.

 

 

Forbidden City Brings Archaeology to Life With Virtual Reality — from wsj.com

 

 

holo.study


 

 

Will virtual reality change the way I see history? — from bbc.co.uk

 

 

 

Scientists can now explore cells in virtual reality — from mashable.com by Ariel Bogle

Excerpt:

After generations of peering into a microscope to examine cells, scientists could simply stroll straight through one.

Calling his project the “stuff of science fiction,” John McGhee, director of the 3D Visualisation Aesthetics Lab at the University of New South Wales (UNSW), is letting people come face-to-face with a breast cancer cell.

 

 

 

 

Can Virtual Reality Make Us Care More? — from huffingtonpost.co.uk by Alex Handy

Excerpt:

In contrast, VR has been described as the “ultimate empathy machine.” It gives us a way to virtually put ourselves in someone else’s shoes and experience the world the way they do.

 

 

 

Stanford researchers release virtual reality simulation that transports users to ocean of the future — from news.stanford.edu by Rob Jordan
Free science education software, available to anyone with virtual reality gear, holds promise for spreading awareness and inspiring action on the pressing issue of ocean acidification.

 

 

 

 

The High-end VR Room of the Future Looks Like This — from uploadvr.com by Sarah Downey

Excerpt:

This isn’t meant to be an exhaustive list, but if I missed something major, please tell me and I’ll add it. Also, please reach out if you’re working on anything cool in this space: sarah(at)accomplice(dot)co.

Hand and finger tracking, gesture interfaces, and grip simulation:

AR and VR viewers:

Omnidirectional treadmills:

Haptic feedback bodysuits:

Brain-computer interfaces:

Neural plugins:

  • The Matrix (film)
  • Sword Art Online (TV show)
  • Neuromancer (novel)
  • Total Recall (film)
  • Avatar (film)

3D tracking, capture, and/or rendering:

Eye tracking:

 VR audio:

Scent creation:

 

 

 

From DSC:
We are hopefully creating the future that we want — i.e., the future of our dreams, not our nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for the rest of society to catch up (for example, in making policies and laws, or in deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
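To make the idea a bit more concrete, here is a minimal sketch of the adversarial training loop the study describes. It is an illustration only: it uses small fully connected networks in PyTorch with made-up hyperparameters rather than the mix of dense and convolutional layers Google Brain actually used, though the Alice/Bob/Eve naming and the penalty that pushes Eve back toward chance-level guessing follow the paper's setup.

# Minimal sketch of adversarial neural cryptography (after the Google Brain study).
# Alice encrypts a plaintext with a shared key, Bob decrypts with the same key,
# and Eve tries to decrypt without the key. Simplified stand-in, not the paper's code.
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per key (illustrative)

def mlp(in_dim, out_dim):
    # Small fully connected stand-in for the paper's architecture.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)   # (plaintext, shared key) -> ciphertext
bob   = mlp(2 * N, N)   # (ciphertext, shared key) -> recovered plaintext
eve   = mlp(N, N)       # ciphertext alone -> eavesdropper's guess

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random plaintexts and keys, encoded as -1/+1 bits.
    p = torch.randint(0, 2, (size, N)).float() * 2 - 1
    k = torch.randint(0, 2, (size, N)).float() * 2 - 1
    return p, k

for step in range(5000):
    # Train Eve: minimize her reconstruction error on intercepted ciphertext.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = (eve(c) - p).abs().mean()
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Train Alice and Bob: Bob should recover p while Eve is pushed back toward
    # chance-level error (about 1.0 per bit on the -1/+1 encoding).
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

Training alternates between the two steps; success, in the paper's terms, means Bob's reconstruction error falls while Eve's error stays near chance.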

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season, the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
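For readers unfamiliar with the mechanics, a DDoS works because each individual request looks ordinary; only the aggregate volume is abnormal. The sketch below is a hypothetical, application-level illustration of the simplest countermeasure, per-client rate limiting. It is not how the Friday attack was actually mitigated (defenses against something the size of Mirai happen further upstream, at CDNs and traffic-scrubbing services), and the class, rates, and example IP address are invented for illustration.

# Illustrative only: a tiny per-client rate limiter of the kind an application server
# might apply as a first, modest line of defense against a request flood.
import time
from collections import defaultdict

class TokenBucket:
    """Allow at most `rate` requests per second per client, with bursts up to `burst`."""
    def __init__(self, rate=5.0, burst=10.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)       # remaining allowance per client
        self.last = defaultdict(time.monotonic)        # last time each client was seen

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill tokens for the time that has passed, capped at the burst size.
        self.tokens[client_ip] = min(self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False  # over the limit: drop or delay the request

limiter = TokenBucket()
for i in range(20):
    # A burst of 20 back-to-back requests from one (hypothetical) address:
    # roughly the first 10 are allowed, the rest are throttled.
    print(i, limiter.allow("203.0.113.7"))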

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with a framework for analyzing these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of the gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects to undermine the ability of certain mosquitoes to transmit diseases such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

Preparing for the future of Artificial Intelligence
Executive Office of the President
National Science & Technology Council
Committee on Technology
October 2016

preparingfor-futureai-usgov-oct2016

Excerpt:

As a contribution toward preparing the United States for a future in which AI plays a growing role, this report surveys the current state of AI, its existing and potential applications, and the questions that are raised for society and public policy by progress in AI. The report also makes recommendations for specific further actions by Federal agencies and other actors. A companion document lays out a strategic plan for Federally-funded research and development in AI. Additionally, in the coming months, the Administration will release a follow-on report exploring in greater depth the effect of AI-driven automation on jobs and the economy.

The report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016 to foster interagency coordination, to provide technical and policy advice on topics related to AI, and to monitor the development of AI technologies across industry, the research community, and the Federal Government. The report was reviewed by the NSTC Committee on Technology, which concurred with its contents. The report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five public workshops co-hosted with universities and other associations that are referenced in this report.

In the coming years, AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.

 

 

 

Our latest way to bring your government to you — from whitehouse.gov
Why we’re open-sourcing the code for the first-ever government bot on Facebook Messenger.

 

botgif

Excerpt:

On August 26th, President Obama publicly responded to a Facebook message sent to him by a citizen—a first for any president in history. Since then, he has received over one and a half million Facebook messages, sent from people based all around the world.

While receiving messages from the public isn’t a recent phenomenon—every day, the White House receives thousands of phone calls, physical letters, and submissions through our online contact form—being able to contact the President through Facebook has never been possible before. Today [10/14/16], it’s able to happen because of the first-ever government bot on Facebook Messenger.
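The White House is open-sourcing the bot's code; the sketch below is not that code. It is a generic illustration of the receive-and-reply loop that Facebook Messenger bots of this era were typically built around: a webhook endpoint that Facebook calls with each incoming message, plus a call back to the Graph API Send endpoint to respond. The Flask app, environment-variable names, and canned reply text are assumptions for illustration only.

# Generic Messenger-bot sketch (not the White House's released code).
import os
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_TOKEN = os.environ["PAGE_TOKEN"]      # page access token from a Facebook app/page you own
VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]  # arbitrary string registered with the webhook

@app.route("/webhook", methods=["GET"])
def verify():
    # Facebook calls this once to confirm you control the endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    # Each POST can bundle several messaging events; acknowledge them all.
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            sender = event["sender"]["id"]
            text = event.get("message", {}).get("text")
            if text:
                reply(sender, "Thanks for your message. It has been logged for review.")
    return "ok", 200

def reply(recipient_id, text):
    # Send API call (Graph API version current at the time).
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

if __name__ == "__main__":
    app.run(port=5000)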

 

 

Also see:

 

 

 

Education Department Strips Authority of Largest For-Profit Accreditor — from usnews.com by Lauren Camera
The potential death blow follows intense federal scrutiny of the Accrediting Council for Independent Colleges and Schools.

Excerpt:

The Department of Education officially stripped the Accrediting Council for Independent Colleges and Schools – the largest accrediting agency of for-profit colleges and universities – of its authority Thursday, handing down the final blow in a long controversy over the council’s ability to be an effective watchdog for students and billions of taxpayer dollars.

“I am terminating the department’s recognition of ACICS as a nationally recognized accrediting agency,” Emma Vadehra, chief of staff to the education secretary, wrote in a letter to the organization. “ACICS’s track record does not inspire confidence that it can address all of the problems effectively.”

The decision comes after a federal panel voted to shut ACICS down in June amid intense criticism of the council for its loose oversight of educational institutions. ACICS was the accrediting agency for now-shuttered Corinthian Colleges and ITT Technical Institute campuses.

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind: when we’re moving this fast, we need to be looking upwards and outwards at the horizon — constantly pulse-checking the landscape. We can’t be looking down, or be so buried in our current positions/tasks, that we aren’t noticing the changes happening around us.

 

 

 

Public Input and Next Steps on the Future of Artificial Intelligence — from whitehouse.gov by Ed Felten and Terah Lyons

Summary:

Today, OSTP is releasing public comments on AI, sharing insights from events across the country, and announcing a new White House event on AI this fall.

 

 

See also:

whitehouseai-sept2016

 