Research study suggests VR can have a huge impact in the classroom — from uploadvr.com by Joe Durbin

Excerpt:

“Every child is a genius in his or her own way. VR can be the key to awakening the genius inside.”

This is the closing line of a new research study currently making its way out of China. Conducted by Beijing Bluefocus E-Commerce Co., Ltd and Beijing iBokan Wisdom Mobile Internet Technology Training Institution, the study takes a detailed look at the different ways virtual reality can make public education more effective.

 

“Compared with traditional education, VR-based education is of obvious advantage in theoretical knowledge teaching as well as practical skills training. In theoretical knowledge teaching, it boasts the ability to make abstract problems concrete, and theoretical thinking well-supported. In practical skills training, it helps sharpen students’ operational skills, provides an immersive learning experience, and enhances students’ sense of involvement in class, making learning more fun, more secure, and more active,” the study states.

 

 

VR for Education – what was and what is — from researchvr.podigee.io

Topics discussed:

  • VR for education: one time use vs everyday use
  • Ecological Validity of VR Research
  • AR definition & history
  • Tethered vs untethered
  • Intelligent Ontology-driven Games for Teaching Human Anatomy
  • Envelop VR
  • VR for Education
  • Gartner curve – then and now

 

 

 

Virtual reality industry leaders come together to create new association — from gvra.com

Excerpt:

CALIFORNIA — Acer Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment [on 12/7/16] announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for industry and foster dialogue between public and private stakeholders around the world.

The goal of the Global Virtual Reality Association is to promote responsible development and adoption of VR globally. The association’s members will develop and share best practices, conduct research, and bring the international VR community together as the technology progresses. The group will also serve as a resource for consumers, policymakers, and industry interested in VR.

VR has the potential to be the next great computing platform, improving sectors ranging from education to healthcare, and contribute significantly to the global economy. Through research, international engagement, and the development of best practices, the founding companies of the Global Virtual Reality Association will work to unlock and maximize VR’s potential and ensure those gains are shared as broadly around the world as possible.

For more information, visit www.GVRA.com.

 

 

 

Occipital shows off a $399 mixed reality headset for iPhone — from techcrunch.com by Lucas Matney

Excerpt:

Occipital announced today that it is launching a mixed reality platform built upon its depth-sensing technologies called Bridge. The headset is available for $399 and starts shipping in March; eager developers can get their hands on an Explorer Edition for $499, which starts shipping next week.

 

 

From DSC:
While I hope that early innovators in the AR/VR/MR space thrive, I do wonder what will happen if and when Apple puts out its rendition of a new form (or forms) of Human-Computer Interaction — such as integrating AR capabilities directly into its next iPhone.

 

 

Enterprise augmented reality applications ready for prime time — from internetofthingsagenda.techtarget.com by Beth Stackpole
Pokémon Go may have put AR on the map, but the technology is now being leveraged for enterprise applications in areas like marketing, maintenance and field service.

Excerpt:

Unlike virtual reality, which creates an immersive, computer-generated environment, the less familiar augmented reality, or AR, technology superimposes computer-generated images and overlays information on a user’s real-world view. This computer-generated sensory data — which could include elements such as sound, graphics, GPS data, video or 3D models — bridges the digital and physical worlds. For an enterprise, the applications are boundless, arming workers walking the warehouse or selling on the shop floor, for example, with essential information that can improve productivity, streamline customer interactions and deliver optimized maintenance in the field.

 

 

15 virtual reality trends we’re predicting for 2017 — from appreal-vr.com by Yariv Levski

Excerpt (emphasis DSC):

2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year.

By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development.

VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.

 

 

Murdoch University hosts trial of virtual reality classroom TeachLivE — from communitynews.com.au by Josh Zimmerman

Excerpt:

In an Australian first, education students will be able to hone their skills without setting foot in a classroom. Murdoch University has hosted a pilot trial of TeachLivE, a virtual reality environment for teachers in training.

 

The student avatars are able to disrupt the class in a range of ways that teachers may encounter, such as pulling out mobile phones or losing their pens during class.

 

murdoch-university-teachlive-dec017

 

 

8 Cutting Edge Virtual Reality Job Opportunities — from appreal-vr.com by Yariv Levski
Today we’re highlighting the top 8 job opportunities in VR to give you a current scope of the Virtual Reality job market.

 

 

 

Epson’s Augmented Reality Glasses Are a Revolution in Drone Tech — from dronelife.com by Miriam McNabb

Excerpt:

The Epson Moverio BT-300, to give the smart glasses their full name, are wearable technology – lightweight, comfortable see-through glasses – that allow you to see digital data, and have a first person view (FPV) experience: all while seeing the real world at the same time. The applications are almost endless.

 

 

 

Volkswagen Electric Car To Feature Augmented Reality Navigation System — from gas2.org by Steve Hanley

Excerpt:

Volkswagen’s pivot away from diesel cars to electric vehicles is still a work in progress, but some details about its coming I.D. electric car — unveiled in Paris earlier this year — are starting to come to light. Much of the news is about an innovative augmented reality heads-up display Volkswagen plans to offer in its electric vehicles. Klaus Bischoff, head of the VW brand, says the I.D. electric car will completely reinvent vehicle instrumentation systems when it is launched at the end of the decade.

 

 

These global research centers are proof that virtual reality is more than gaming — from haptic.al by Deniz Ergürel

Excerpt:

For decades, numerous research centers and academics around the world have been working on the potential of virtual reality technology. The countless research projects undertaken in these centers are an important indicator that everything from health care to real estate could be disrupted within a few years.

  • Virtual Human Interaction Lab — Stanford University
  • Virtual Reality Applications Center — Iowa State University
  • Institute for Creative Technologies — USC
  • Medical Virtual Reality — USC
  • The Imaging Media Research Center — Korea Institute of Science and Technology
  • Virtual Reality & Immersive Visualization Group — RWTH Aachen University
  • Center For Simulations & Virtual Environments Research — UCIT
  • Duke immersive Virtual Environment — Duke University
  • Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology — Barcelona University
  • Immersive Media Technology Experiences (IMTE) — Norwegian University of Technology
  • Human Interface Technology Laboratory — University of Washington

 

 

Where Virtual and Physical Worlds Converge — from disruptionhub.com

Excerpt:

Augmented Reality (AR) dwelled quietly in the shadow of VR until earlier this year, when a certain app propelled it into the mainstream. Now, AR is a household term and can hold its own with advanced virtual technologies. The AR industry is predicted to hit global revenues of $90 billion by 2020, not just matching VR but overtaking it by a large margin. Of course, a lot of this turnover will be generated by applications in the entertainment industry. VR was primarily created by gamers for gamers, but AR began as a visionary idea that would change the way that humanity interacted with the world around them. The first applications of augmented reality were actually geared towards improving human performance in the workplace… But there’s far, far more to be explored.

 

 

VR’s killer app has arrived, and it’s Google Earth — from arstechnica.com by Sam Machkovech
Squishy geometry aside, you won’t find a cooler free VR app on any device.

Excerpt:

I stood at the peak of Mount Rainier, the tallest mountain in Washington state. The sounds of wind whipped past my ears, and mountains and valleys filled a seemingly endless horizon in every direction. I’d never seen anything like it—until I grabbed the sun.

Using my HTC Vive virtual reality wand, I reached into the heavens in order to spin the Earth along its normal rotational axis, until I set the horizon on fire with a sunset. I breathed deeply at the sight, then spun our planet just a little more, until I filled the sky with a heaping helping of the Milky Way Galaxy.

Virtual reality has exposed me to some pretty incredible experiences, but I’ve grown ever so jaded in the past few years of testing consumer-grade headsets. Google Earth VR, however, has dropped my jaw anew. This, more than any other game or app for SteamVR’s “room scale” system, makes me want to call every friend and loved one I know and tell them to come over, put on a headset, and warp anywhere on Earth that they please.

 

 

VR is totally changing how architects dream up buildings — from wired.com by Sam Lubell

Excerpt:

In VR architecture, the difference between real and unreal is fluid and, to a large extent, unimportant. What is important, and potentially revolutionary, is VR’s ability to draw designers and their clients into a visceral world of dimension, scale, and feeling, removing the unfortunate schism between a built environment that exists in three dimensions and a visualization of it that has until now existed in two.

 

 

How VR can democratize Architecture — from researchvr.podigee.io

Excerpt:

Many of the VR projects in architecture are focused on the final stages of the design process, basically selling a house to a client. Thomas sees the real potential in the early stages, when the main decisions need to be made. VR is well suited to this, as it helps non-professionals understand and grasp architectural concepts very intuitively. And this is mostly what we talked about.

 

 

 

How virtual reality could revolutionize the real estate industry — from uploadvr.com by Benjamin Maltbie

 

 

 

Will VR disrupt the airline industry? Sci-Fi show meets press virtually instead of flying — from singularityhub.com by Aaron Frank

Excerpt:

A proposed benefit of virtual reality is that it could one day eliminate the need to move our fleshy bodies around the world for business meetings and work engagements. Instead, we’ll be meeting up with colleagues and associates in virtual spaces. While this would be great news for the environment and business people sick of airports, it would be troubling news for airlines.

 

 

How theaters are evolving to include VR experiences — from uploadvr.com by Michael Mascioni

 

 

 

#AI, #VR, and #IoT Are Coming to a Courthouse Near You! — from americanbar.org by Judge Herbert B. Dixon Jr.

Excerpt:

Imagine during one of your future trials that jurors in your courtroom are provided with virtual reality headsets, which allow them to view the accident site or crime scene digitally and walk around or be guided through a 3D world to examine vital details of the scene.

How can such an evidentiary presentation be accomplished? A system is being developed whereby investigators use a robot, inspired by NASA’s Curiosity Mars rover and equipped with 3D imaging and panoramic videography equipment, to record virtual reality video of the scene. The captured 360° immersive video and photographs of the scene would allow recreation of a VR experience with video and pictures of the original scene from every angle. Admissibility of this evidence would require a showing that the VR simulation fairly and accurately depicts what it represents. If a judge permits presentation of the evidence after its accuracy is established, jurors receiving the evidence could turn their heads and view various aspects of the scene by looking up, down, and around, and zooming in and out.

Unlike an animation or edited video initially created to demonstrate one party’s point of view, the purpose of this type of evidence would be to gather data and objectively preserve the scene without staging or tampering. Even further, this approach would allow investigators to revisit scenes as they existed during the initial forensic examination and give jurors a vivid rendition of the site as it existed when the events occurred.

 

 

Microsoft goes long for mixed reality — from next.reality.news

Excerpt:

The theme running throughout most of this year’s WinHEC keynote in Shenzhen, China was mixed reality. Microsoft’s Alex Kipman continues to be a great spokesperson and evangelist for the new medium, and it is apparent that Microsoft is going in deep, if not all in, on this version of the future. I, for one, as a mixed reality or bust developer, am very glad to see it.

As part of the presentation, Microsoft presented a video (see below) that shows the various forms of mixed reality. The video starts with a few virtual objects in the room with a person, transitions into the same room with a virtual person, then becomes a full virtual reality experience with Windows Holographic.

 

 

NYU Steinhardt Edtech Accelerator’s 2016 Cohort Starting Up Chatbots, Augmented Reality Tools and More — from campustechnology.com by Sri Ravipati

Excerpt:

The cohort includes:

  • Admission Table (Bangalore, India), an artificial intelligence (AI) chatbot for university admissions;
  • Alumnify (San Francisco, CA), an enterprise platform for alumni services;
  • AugThat (New York, NY), augmented reality curricula for elementary and middle school students;
  • Bering (Brooklyn, NY), a data analytics platform for research scientists;
  • EduKids Connect Systems (New York, NY), an information system for child care providers;
  • NeuroNet Learning (Gainesville, FL), a research-based early reading program designed to assist students with essential reading, handwriting skills and math;
  • TheTalkList (San Diego, CA), a language learning exchange platform;
  • Trovvit (Brooklyn, NY), a social digital portfolio tool; and
  • Versity U (Jeffersonville, IN), a nursing exam platform.
 

New journal Science Robotics is established to chronicle the rise of the robots — from techcrunch.com by Devin Coldewey

Excerpt:

Robots have been a major focus in the technology world for decades and decades, but they and basic science, and for that matter everyday life, have largely been non-overlapping magisteria. That’s changed over the last few years, as robotics and every other field have come to inform and improve each other, and robots have begun to infiltrate and affect our lives in countless ways. So the only surprise in the news that the prestigious journal group Science has established a discrete Robotics imprint is that they didn’t do it earlier.

Editor Guang-Zhong Yang and president of the National Academy of Sciences Marcia McNutt introduce the journal:

In a mere 50 years, robots have gone from being a topic of science fiction to becoming an integral part of modern society. They now are ubiquitous on factory floors, build complex deep-sea installations, explore icy worlds beyond the reach of humans, and assist in precision surgeries… With this growth, the research community that is engaged in robotics has expanded globally. To help meet the need to communicate discoveries across all domains of robotics research, we are proud to announce that Science Robotics is open for submissions.

Today brought the inaugural issue of Science Robotics, Vol.1 Issue 1, and it’s a whopper. Despite having only a handful of articles, each is deeply interesting and shows off a different aspect of the robotics research world — though by no means do these few articles hit all the major regions of the field.

 

 

See also:

 

Excerpt:

Science Robotics has been launched to cover the most important advances in the development and application of robots, with interest in hardware and software as well as social interactions and implications.

From molecular machines to large-scale systems, from outer space to deep-sea exploration, robots have become ubiquitous, and their impact on our lives and society is growing at an accelerating pace. Science Robotics has been launched to cover the most important advances in robot design, theory, and applications. Science Robotics promotes the communication of new ideas, general principles, and original developments. Its content will reflect broad and important new applications of robots (e.g., medical, industrial, land, sea, air, space, and service) across all scales (nano to macro), including the underlying principles of robotic systems covering actuation, sensor, learning, control, and navigation. In addition to original research articles, the journal also publishes invited reviews. There are also plans to cover opinions and comments on current policy, ethical, and social issues that affect the robotics community, as well as to engage with robotics educational programs by using Science Robotics content. The goal of Science Robotics is to move the field forward and cross-fertilize different research applications and domains.

 

 

Amazon Opening Store That Will Eliminate Checkout — and Lines — from bloomberg.com by Jing Cao
At Amazon’s Seattle location, items get charged to Prime accounts | New technology combines artificial intelligence and sensors

Excerpt:

Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.

The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
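The mechanics Amazon describes (sensors detecting pick-ups and put-backs, a virtual cart updated in real time, automatic charging at exit) can be sketched as a simple event-driven cart model. This is only an illustration of the data flow, not Amazon's implementation; the event names and the charge_account hook are hypothetical.

```python
from collections import Counter

class VirtualCart:
    """Toy model of a 'just walk out' cart: sensors emit pick/return events,
    the cart tallies items, and checkout happens implicitly on exit."""

    def __init__(self, shopper_id):
        self.shopper_id = shopper_id
        self.items = Counter()

    def on_pick_up(self, sku):           # shelf sensors detect an item taken...
        self.items[sku] += 1

    def on_put_back(self, sku):          # ...or returned to the shelf
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def on_exit(self, price_lookup, charge_account):
        # Charge the linked account automatically; there is no checkout line.
        total = sum(price_lookup[sku] * qty for sku, qty in self.items.items())
        charge_account(self.shopper_id, total)
        return total

# Usage sketch with made-up SKUs and prices
prices = {"milk-1l": 1.99, "granola-bar": 0.89}
cart = VirtualCart("prime-customer-42")
cart.on_pick_up("milk-1l")
cart.on_pick_up("granola-bar")
cart.on_put_back("granola-bar")
cart.on_exit(prices, charge_account=lambda who, amt: print(who, "charged", amt))
```

The interesting engineering problem, of course, is everything this sketch assumes away: reliably attributing each pick-up and put-back to the right shopper from camera and shelf-sensor data.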

 

 

 

Amazon Introduces ‘Amazon Go’ Retail Stores, No Checkout, No Lines — from investors.com

Excerpt:

Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.

Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe, deduct that amount from their Amazon accounts, and send them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.

 

 

Google DeepMind Makes AI Training Platform Publicly Available — from bloomberg.com by Jeremy Kahn
Company is increasingly embracing open-source initiatives | Move comes after rival Musk’s OpenAI made its robot gym public

Excerpt:

Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.

DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed as DeepMind Lab — on the open-source depository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.
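For readers who want to poke at the release: the repository builds with Bazel and exposes a small Python API. The sketch below follows the public examples as I recall them; the level name, observation key, and action-vector length are assumptions and may differ between versions of the code.

```python
# Assumes the deepmind_lab Python module built from the GitHub release
# (via Bazel); level and observation names may vary by version.
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    "tests/empty_room_test",        # an example level from the repo (assumed)
    ["RGB_INTERLEAVED"],            # request raw pixel observations
    config={"width": "80", "height": "60"},   # config values are strings
)

env.reset()
noop = np.zeros((7,), dtype=np.intc)  # action vector: look/strafe/move/etc. axes
for _ in range(100):
    if not env.is_running():
        env.reset()
    reward = env.step(noop, num_steps=4)          # repeat the action for 4 frames
    frame = env.observations()["RGB_INTERLEAVED"]  # H x W x 3 array of pixels
```

The point of the release is that the environment, not the agent, is what is being shared: anyone can swap in their own learning loop where the no-op action above sits.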

 

Related:
Alphabet DeepMind is inviting developers into the digital world where its AI learns to explore — from qz.com by Dave Gershgorn

 

 

 

After Retail Stumble, Beacons Shine From Banks to Sports Arenas — from bloomberg.com by Olga Kharif
Shipments of the devices expected to grow to 500 million

Excerpt (emphasis DSC):

Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.

Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.

Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed.

But in June, Cupertino, California-based Mist Systems began shipping a software-based product that simplified the process. Instead of placing 10 beacons on walls and ceilings, for example, management using Mist can install one device every 2,000 feet (610 meters), then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
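Google's app-free Nearby flow rides on beacons broadcasting the open Eddystone-URL format over Bluetooth LE (as I understand the stack). To give a sense of how little data a beacon actually transmits, here is a sketch that assembles an Eddystone-URL service-data payload for a short link. It follows my reading of the open spec; treat the byte layout as illustrative rather than as a reference encoder.

```python
# Sketch of an Eddystone-URL service-data payload, per my reading of the open spec.
# A real beacon would wrap this in a BLE advertisement for service UUID 0xFEAA.

URL_SCHEMES = {"http://www.": 0x00, "https://www.": 0x01, "http://": 0x02, "https://": 0x03}
EXPANSIONS = {".com/": 0x00, ".org/": 0x01, ".edu/": 0x02, ".net/": 0x03,
              ".info/": 0x04, ".biz/": 0x05, ".gov/": 0x06, ".com": 0x07}

def eddystone_url_frame(url: str, tx_power: int = -20) -> bytes:
    frame = bytearray([0x10, tx_power & 0xFF])   # 0x10 = URL frame type, then TX power
    for prefix, code in URL_SCHEMES.items():     # one byte encodes the URL scheme
        if url.startswith(prefix):
            frame.append(code)
            url = url[len(prefix):]
            break
    else:
        raise ValueError("URL scheme not encodable")
    while url:                                   # compress common suffixes to one byte
        for text, code in EXPANSIONS.items():
            if url.startswith(text):
                frame.append(code)
                url = url[len(text):]
                break
        else:
            frame.append(ord(url[0]))
            url = url[1:]
    if len(frame) > 20:                          # rough cap; the spec limits frame size
        raise ValueError("URL too long to fit in one frame")
    return bytes(frame)

print(eddystone_url_frame("https://example.com").hex())
```

That roughly 20-byte budget is why beacons point to a URL or an ID rather than carrying content themselves; the phone (or Google's Nearby service) does the rest.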

 

 

Google’s Hand-Fed AI Now Gives Answers, Not Just Search Results — from wired.com by Cade Metz

Excerpt:

Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.

“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”

That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque de Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.

Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot—the information you’re looking for.
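In machine-learning terms, the "extract the upshot" step is extractive question answering: score candidate spans of a retrieved passage against the question and return the best one. Google's production models are neural and far more capable; the sketch below only illustrates the shape of the problem with a crude word-overlap score, which is a stand-in rather than Google's method.

```python
# Toy extractive QA: pick the passage sentence that best matches the question.
# A deep model would replace score() with learned representations; this shows
# only the structure of the task, not Google's approach.
import re

def score(question: str, sentence: str) -> float:
    # Fraction of question words that also appear in the sentence.
    q = set(re.findall(r"\w+", question.lower()))
    s = set(re.findall(r"\w+", sentence.lower()))
    return len(q & s) / (len(q) or 1)

def extract_answer(question: str, passage: str) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    return max(sentences, key=lambda sent: score(question, sent))

passage = ("Birds vary widely in speed. "
           "The peregrine falcon is the fastest bird on Earth, "
           "with a recorded dive speed of 389 km/h.")
print(extract_answer("What is the fastest bird on Earth?", passage))
```

The hard part that the neural nets handle is exactly what this toy cannot: matching meaning rather than surface words, and pulling out a span shorter than a whole sentence.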

 

 

Deep Learning in Production at Facebook — from re-work.co by Katie Pollitt

Excerpt:

Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, they run ML models at massive scale, computing trillions of predictions every day.

At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook uses to scale both the training and deployment of some of its deep learning models. He also covered some useful libraries the company has open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.

 

 

The Artificial Intelligence Gold Rush — from foresightr.com by Mark Vickers
Big companies, venture capital firms and governments are all banking on AI

Excerpt:

Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.

  • Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
  • Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
  • Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
  • Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
  • Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
  • Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
  • IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
  • Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
  • Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
  • Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
  • Nvidia: Builds computer chips customized for deep learning.
  • Salesforce: Took first place at the Stanford Question Answering Dataset, a test of machine learning and comprehension, and has developed the Einstein model that learns from data.
  • Shell: Launched a virtual assistant to answer customer questions.
  • Tesla Motors: Continues to work on self-driving automobile technologies.
  • Twitter: Created an AI-development team called Cortex and acquired several AI startups.

 

 

 

IBM Watson and Education in the Cognitive Era — from i-programmer.info by Nikos Vaggalis

Excerpt:

IBM’s seemingly ubiquitous Watson is now infiltrating education, through AI-powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.

This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.

As outlined in the 2030 report, despite robots or AI displacing a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white-collar and blue-collar to begin with, should come equipped with new, versatile and contemporary skills.

The point is, the very AI that will leave someone jobless will also help them adapt to a new job’s requirements. It will also prepare new generations through methodologies that can give meaning back to an aging and counter-productive schooling system, one that leaves students’ skills disengaged from the needs of industry and still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students end up that way because of the system’s inability to stimulate their interest?

 

 

 

 

From DSC:
In the future, I’d like to see holograms provide stunning visual centerpieces for the entrance ways into libraries, or in our classrooms, or in our art galleries, recital halls, and more. The object(s), person(s), scene(s) could change into something else, providing a visually engaging experience that sets a tone for that space, time, and/or event.

Eventually, perhaps these types of technologies/setups will even be a way to display artwork within our homes and apartments.

 

hologram-earth

Image from 900lbs.com

 

 

 

Google Earth lets you explore the planet in virtual reality — from vrscout.com by Eric Chevalier

 

 

 

How virtual reality could change the way students experience education — from edtechmagazine.com by Andrew Koke and Anthony Guest-Scott
High-impact learning experiences may become the norm, expanding access for all students.

Excerpt:

The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.

The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.

“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”

 

 

 

Visionary: How 4 institutions are venturing into a new mixed reality — from ecampusnews.com by Laura Devaney
Mixed reality combines virtual and augmented realities for enhanced learning experiences–and institutions are already implementing it.

Excerpt:

Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.

At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.

At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.

 

 

 

ZapBox brings room-scale mixed reality to the masses — from slashgear.com by JC Torres

Excerpt:

As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable investment, financially speaking. It is just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!

 

 

 

 

Shakespeare’s Tempest gets mixed reality makeover — from bbc.com by Jane Wakefield

 

intel-flying-whale-at-ces-2014

Intel’s flying whale was the inspiration for the technology in The Tempest

 

 

 

eon-reality-education-nov2016

 

 

 

Excerpts from the 9/23/16 School Library Journal Webcast:

vr-in-education-thejournal-sept2016

 

 

 

 

 

ar-vr-elearningguildfall2016

 

Table of Contents

  • Introduction
  • New Technologies: Do They Really Change Learning Strategies? — by Joe Ganci and Sherry Larson
  • Enhanced Realities: An Opportunity to Avoid the Mistakes of the Past — by David Kelly
  • Let the Use Case Drive What Gets Augmented—Not the Other Way Around — by Chad Udell
  • Augmented Reality: An Augmented Perspective — by Alexander Salas
  • Virtual Reality Will Be the Perfect Immersive Learning Environment — by Koreen Pagano
  • Will VR Succeed? Viewpoint from Within a Large Corporation — by John O’Hare
  • Will VR Succeed? Viewpoint from Running a VR Start-up — by Ishai Albert Jacob

 

 

 

From DSC:
I think Technical Communicators have a new pathway to pursue…check out this piece from Scope AR and Caterpillar.

 

scopear-nov2016

 

 

 

IBM Launches Experimental Platform for Embedding Watson into Any Device — from finance.yahoo.com

Excerpt:

SAN FRANCISCO, Nov. 9, 2016 /PRNewswire/ — IBM (NYSE: IBM) today unveiled the experimental release of Project Intu, a new, system-agnostic platform designed to enable embodied cognition. The new platform allows developers to embed Watson functions into various end-user device form factors, offering a next generation architecture for building cognitive-enabled experiences.

Project Intu, in its experimental form, is now accessible via the Watson Developer Cloud and also available on Intu Gateway and GitHub.

 

 

IBM and Topcoder Bring Watson to More than One Million Developers — from finance.yahoo.com

Excerpt:

SAN FRANCISCO, Nov. 9, 2016 /PRNewswire/ — At the IBM (NYSE: IBM) Watson Developer Conference, IBM announced a partnership with Topcoder, the premier global software development community comprised of more than one million designers, developers, data scientists, and competitive programmers, to advance learning opportunities for cognitive developers who are looking to harness the power of Watson to create the next generation of artificial intelligence apps, APIs, and solutions.  This partnership also benefits businesses that gain access to an increased talent pool of developers through the Topcoder Marketplace with experience in cognitive computing and Watson.

 

 

5 Ways Artificial Intelligence Is Shaping the Future of E-commerce — from entrepreneur.com by Sheila Eugenio
Paradoxically for a machine, AI’s greatest strength may be in creating a more personal experience for your customer. From product personalization to virtual personal shoppers.

Excerpt:

Here are three ways AI will impact e-commerce in the coming years:

  1. Visual search.
  2. Offline to online worlds merge.
  3. Personalization.

 

 

IBM to invest $3 billion to groom Watson for the Internet of Things — from healthcareitnews.com by Bernie Monegain
As part of the project, Big Blue will spend $200 million on a global Watson IoT headquarters in Munich

 

 

 

Man living with machine: IBM’s AI-driven Watson is learning quickly, expanding to new platforms — from business.financialpost.com by Lynn Greiner

Excerpt:

Getting kids interested in STEM subjects is an ongoing challenge, and Teacher Advisor with Watson, a free tool, will help elementary school teachers match materials with student needs. In its first phase, it’s being used by 200 teachers, assisting them in creating math lessons that engage students and help them learn. The plan is to roll it out to all U.S. elementary schools by year end. As time goes on, Watson will learn from teacher feedback and improve its recommendations. There is, Rometty said, an opportunity to also build in professional development resources.

 

 

Oxford University’s lip-reading AI is more accurate than humans, but still has a way to go — from qz.com by Dave Gershgorn

Excerpt:

Even professional lip-readers can figure out only 20% to 60% of what a person is saying. Slight movements of a person’s lips at the speed of natural speech are immensely difficult to reliably understand, especially from a distance or if the lips are obscured. And lip-reading isn’t just a plot point in NCIS: It’s an essential tool to understand the world for the hearing-impaired, and if automated reliably, could help millions.

A new paper (pdf) from the University of Oxford (with funding from Alphabet’s DeepMind) details an artificial intelligence system, called LipNet, that watches video of a person speaking and matches text to the movement of their mouth with 93.4% accuracy.
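For the technically curious: LipNet maps a video of lip movements to a character sequence and, per the paper, is trained end to end with a connectionist temporal classification (CTC) loss, which removes the need for frame-by-frame alignments between video and text. In its standard form (notation mine, not the paper's):

```latex
% CTC objective: sum over all frame-level label paths \pi that collapse,
% via B (which removes blanks and repeated labels), to the target sequence y.
p(y \mid x) = \sum_{\pi \in \mathcal{B}^{-1}(y)} \prod_{t=1}^{T} p(\pi_t \mid x),
\qquad
\mathcal{L}_{\mathrm{CTC}} = -\log p(y \mid x)
```

The practical consequence is that annotators only need the sentence a speaker said, not a per-frame transcription, which is what makes training on long video clips feasible.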

 

 

A school bus, virtual reality, & an out-of-this-world journey — from goodmenproject.com
“Field Trip To Mars” is the barrier-shattering outcome of an ambitious mission to give a busload of people the same, simultaneous virtual reality experience: going to Mars.

Excerpt:

Inspiration was Lockheed‘s goal when it asked its creative resources, led by McCann, to create the world’s first mobile group Virtual Reality experience. As one creator notes, VR now is essentially a private, isolating experience. But wouldn’t it be cool to give a busload of people the same, simultaneous VR experience? And then – just to make it really challenging – put the whole thing on wheels?

“Field Trip To Mars” is the barrier-shattering outcome of this ambitious mission.

 

From DSC:
This is incredible! Very well done. The visual experience tracks the speed of the bus and even its turns.

 

 

 

lockheed-fieldtriptomarsfall2016

 

 

Ed Dept. Launches $680,000 Augmented and Virtual Reality Challenge — from thejournal.com by David Nagel

Excerpt:

The United States Department of Education (ED) has formally kicked off a new competition designed to encourage the development of virtual and augmented reality concepts for education.

Dubbed the EdSim Challenge, the competition is aimed squarely at developing students’ career and technical skills — it’s funded through the Carl D. Perkins Career and Technical Education Act of 2006 — and calls on developers and ed tech organizations to develop concepts for “computer-generated virtual and augmented reality educational experiences that combine existing and future technologies with skill-building content and assessment. Collaboration is encouraged among the developer community to make aspects of simulations available through open source licenses and low-cost shareable components. ED is most interested in simulations that pair the engagement of commercial games with educational content that transfers academic, technical, and employability skills.”

 

 

 

Virtual reality boosts students’ results — from raconteur.net
Virtual and augmented reality can enable teaching and training in situations which would otherwise be too hazardous, costly or even impossible in the real world

Excerpt:

More recently, though, the concept described in Aristotle’s Nicomachean Ethics has been bolstered by further scientific evidence. Last year, a University of Chicago study found that students who physically experience scientific concepts, such as the angular momentum acting on a bicycle wheel spinning on an axle that they’re holding, understand them more deeply and also achieve significantly improved scores in tests.

 

 

 

 

 

 

 

Virtual and augmented reality are shaking up sectors — from raconteur.net by Sophie Charara
Both virtual and augmented reality have huge potential to leap from visual entertainment to transform the industrial and service sectors

 

 

 

 

Microsoft’s HoloLens could power tanks on a battlefield — from theverge.com by Tom Warren

Excerpt:

Microsoft might not have envisioned its HoloLens headset as a war helmet, but that’s not stopping Ukrainian company LimpidArmor from experimenting. Defence Blog reports that LimpidArmor has started testing military equipment that includes a helmet with Microsoft’s HoloLens headset integrated into it.

The helmet is designed for tank commanders to use alongside a Circular Review System (CRS) of cameras located on the sides of armored vehicles. Microsoft’s HoloLens gathers feeds from the cameras outside to display them in the headset as a full 360-degree view. The system even includes automatic target tracking, and the ability to highlight enemy and allied soldiers and positions.

 

 

 

Bring your VR to work — from itproportal.com by Timo Elliott and Josh Waddell
With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer.

Excerpt:

With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer — and that’s a blind spot that companies and CIOs can’t afford to have. It hasn’t been that long since consumer demand for the iPhone and iPad forced companies, grumbling all the way, into finding business cases for them. Gartner has said that the next five to ten years will bring “transparently immersive experiences” to the workplace. They believe this will introduce “more transparency between people, businesses, and things” and help make technology “more adaptive, contextual, and fluid.”

If digitally enhanced reality generates even half as much consumer enthusiasm as smartphones and tablets, you can expect to see a new wave of consumerisation of IT as employees who have embraced VR and AR at home insist on bringing it to the workplace. This wave of consumerisation could have an even greater impact than the last one. Rather than risk being blindsided for a second time, organisations would be well advised to take a proactive approach and be ready with potential business uses for VR and AR technologies by the time they invade the enterprise.

 

In Gartner’s latest emerging technologies hype cycle, Virtual Reality is already on the Slope of Enlightenment, with Augmented Reality following closely.

 

 

 

VR’s higher-ed adoption starts with student creation — from edsurge.com by George Lorenzo

Excerpt:

One place where students are literally immersed in VR is at Carnegie Mellon University’s Entertainment Technology Center (ETC). ETC offers a two-year Master of Entertainment Technology program (MET) launched in 1998 and cofounded by the late Randy Pausch, author of “The Last Lecture.”

MET starts with an intense boot camp called the “immersion semester,” in which students take a Building Virtual Worlds (BVW) course and a leadership course, along with courses in improvisational acting and visual storytelling. Pioneered by Pausch, BVW challenges students in small teams to create virtual reality worlds quickly over a period of two weeks, culminating in a presentation festival every December.

 

 

Apple patents augmented reality mapping system for iPhone — from appleinsider.com by Mikey Campbell
Apple on Tuesday was granted a patent detailing an augmented reality mapping system that harnesses iPhone hardware to overlay visual enhancements onto live video, lending credence to recent rumors suggesting the company plans to implement an iOS-based AR strategy in the near future.

 

 

A bug in the matrix: virtual reality will change our lives. But will it also harm us? — from theguardian.stfi.re
Prejudice, harassment and hate speech have crept from the real world into the digital realm. For virtual reality to succeed, it will have to tackle this from the start

 

 

 

The latest Disney Research innovation lets you feel the rain in virtual reality — from haptic.al by Deniz Ergurel

Excerpt:

Virtual reality is a combination of life-like images, effects and sounds that creates an imaginary world in front of our eyes.

But what if we could also imitate more complex sensations like the feeling of falling rain, a beating heart or a cat walking? What if we could distinguish, between a light sprinkle and a heavy downpour in a virtual experience?

Disney Research — a network of research laboratories supporting The Walt Disney Company — has announced the development of a 360-degree virtual reality application offering a library of feel effects and full body sensations.

 

 

Relive unforgettable moments in history through the Timelooper app. | Virtual reality on your smartphone.

 

timelooper-nov2016

 

 

Literature class meets virtual reality — from blog.cospaces.io by Susanne Krause
Not every student finds it easy to let a novel come to life in their imagination. Could virtual reality help? Tiffany Capers gave it a try: She let her 7th graders build settings from Lois Lowry’s “The Giver” with CoSpaces and explore them in virtual reality. And: they loved it.

 

 

 

 

learningvocabinvr-nov2016

 

 

 

James Bay students learn Cree syllabics in virtual reality — from cbc.ca by Celina Wapachee and Jaime Little
New program teaches syllabics inside immersive world, with friendly dogs and archery

 

 

 

VRMark will tell you if your PC is ready for Virtual Reality — from engadget.com by Sean Buckley
Benchmark before you buy.

 

 

Forbidden City Brings Archaeology to Life With Virtual Reality — from wsj.com

 

 

holo.study

hololensdemos-nov2016

 

 

Will virtual reality change the way I see history? — from bbc.co.uk

 

 

 

Scientists can now explore cells in virtual reality — from mashable.com by Ariel Bogle

Excerpt:

After generations of peering into a microscope to examine cells, scientists could simply stroll straight through one.

Calling his project the “stuff of science fiction,” director of the 3D Visualisation Aesthetics Lab at the University of New South Wales (UNSW) John McGhee is letting people come face-to-face with a breast cancer cell.

 

 

 

 

Can Virtual Reality Make Us Care More? — from huffingtonpost.co.uk by Alex Handy

Excerpt:

In contrast, VR has been described as the “ultimate empathy machine.” It gives us a way to virtually put us in someone else’s shoes and experience the world the way they do.

 

 

 

Stanford researchers release virtual reality simulation that transports users to ocean of the future — from news.stanford.edu by Rob Jordan
Free science education software, available to anyone with virtual reality gear, holds promise for spreading awareness and inspiring action on the pressing issue of ocean acidification.

 

 

 

 

The High-end VR Room of the Future Looks Like This — from uploadvr.com by Sarah Downey

Excerpt:

This isn’t meant to be an exhaustive list, but if I missed something major, please tell me and I’ll add it. Also, please reach out if you’re working on anything cool in this space at sarah(at)accomplice(dot)co.

Hand and finger tracking, gesture interfaces, and grip simulation:

AR and VR viewers:

Omnidirectional treadmills:

Haptic feedback bodysuits:

Brain-computer interfaces:

Neural plugins:

  • The Matrix (film)
  • Sword Art Online (TV show)
  • Neuromancer (novel)
  • Total Recall (film)
  • Avatar (film)

3D tracking, capture, and/or rendering:

Eye tracking:

 VR audio:

Scent creation:

 

 

 

Robot Launch 2016 – Robohub Readers’ Pick round one — from robohub.org by Andra Keay

Excerpt:

For the next three weeks, Robohub readers can vote for their “Readers’ Pick” startup from the Robot Launch competition. Each week, we’ll be publishing 10 videos. Our ultimate Robohub Readers’ Favorites, along with lots of other prizes, will be announced at the end of November. Every week we’ll showcase different aspects of robotics startups and their business models: from agricultural to humanoid, from consumer to industrial and from hardware to robotics software. Make sure you vote for your favorite – below – by 18:00 UTC, Wednesday 9 November, spread the word through social media using #robotlaunch2016 and come back next week for the next 10!

 

 

LEGO® MINDSTORMS® Education EV3 Classroom Solutions — from education.lego.com

Excerpt:

Students develop critical thinking and problem-solving skills in middle school. LEGO MINDSTORMS Education EV3 grows these 21st century skills through inquiry-based and active learning.

lego-nov2016
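LEGO's own EV3 environment is block-based, but the same robot can also be driven from text-based code, which is where many classrooms head next. The sketch below assumes the community ev3dev firmware and its Python bindings rather than LEGO's official software, and the port names are arbitrary choices.

```python
# Assumes the community ev3dev firmware and its Python bindings (not LEGO's
# official block-based software); ports outA and in1 are arbitrary choices.
from ev3dev.ev3 import LargeMotor, TouchSensor

motor = LargeMotor('outA')     # drive motor plugged into output port A
bumper = TouchSensor('in1')    # touch sensor plugged into input port 1

# Drive forward until the bumper is pressed, then brake.
motor.run_forever(speed_sp=400)
while not bumper.is_pressed:
    pass
motor.stop(stop_action='brake')
```

Even a program this small exercises the inquiry loop LEGO is aiming at: sense, decide, act, and then ask why the robot did what it did.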

 

 

Vex Robotics

 

vexrobotics-nov2016

 

 

No need to stare at a screen: Kickstarter robot teaches kids to code using cards — from digitaltrends.com by Luke Dormehl

 

 

 

 



Addendums:

 



 

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as making policies and laws, or deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
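The setup described is adversarial training: "Alice" encrypts a plaintext P with a shared key K, "Bob" decrypts the resulting ciphertext C with the key, and "Eve" eavesdrops on C without it. Roughly, the competing objectives look like the following, where d is a distance between the true plaintext and a network's reconstruction. This is a schematic of the idea, not the paper's exact loss weighting.

```latex
% C = Alice(P, K). Eve minimizes her own reconstruction error; Alice and Bob
% jointly minimize Bob's error while pushing Eve's error toward chance level
% (L_E^chance denotes the error a random guesser would achieve).
\mathcal{L}_{E} = d\!\left(P,\ \mathrm{Eve}(C)\right), \qquad
\mathcal{L}_{B} = d\!\left(P,\ \mathrm{Bob}(C, K)\right), \qquad
\mathcal{L}_{A,B} = \mathcal{L}_{B} + \lambda\,\bigl(\mathcal{L}_{E}^{\mathrm{chance}} - \mathcal{L}_{E}\bigr)^{2}
```

The "no one knows how it works" part follows from this framing: the networks only have to win the game, not to produce a scheme humans can read off and analyze.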

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
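Mirai spread largely by logging in with factory-default usernames and passwords, so the most basic defense is unglamorous: refuse to put a device into service while it still answers to a known default. A minimal provisioning-time check might look like the sketch below; the list of defaults is illustrative, not Mirai's actual dictionary.

```python
# Minimal sketch of a provisioning-time credential check for an IoT device.
# The defaults listed here are illustrative; Mirai's real dictionary was longer.
KNOWN_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234"), ("user", "user")}

def credentials_acceptable(username: str, password: str) -> bool:
    if (username, password) in KNOWN_DEFAULTS:
        return False                      # still a factory default: reject
    if len(password) < 12:
        return False                      # too short to resist simple guessing
    return True

# Usage sketch
assert not credentials_acceptable("admin", "admin")
assert credentials_acceptable("camera01", "a-long-unique-passphrase")
```

The catch, as the article notes, is that many owners either could not or did not change those defaults, which is why the pressure is shifting to manufacturers to enforce checks like this before a device ever reaches the network.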

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problem of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? And where do basic principles come into play when considering the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused on ethics specifically related to data science, will provide you with a framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and their underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea could get their hands on it and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

Some reflections/resources on today’s announcements from Apple

tv-app-apple-10-27-16

 

tv-app2-apple-10-27-16

From DSC:
How long before recommendation engines like this can be filtered/focused down to display just the apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine that suggests personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the module(s) or course(s) we take being sent to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • Our cloud-based learning profiles — which could then feed a variety of HR-related systems used to find talent (via blockchain)
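
A rough, purely hypothetical sketch of what the first step might look like: filtering a content catalog down to education/training items and ranking them against a learner profile. Every name and data structure here (the category tags, the catalog entries, the learner profile, the functions) is invented for illustration and does not reflect any actual Apple TV or tvOS API; the credentialing hand-off via blockchain is left out entirely because no standard interface for it exists yet.

    # Hypothetical sketch only: filter a TV content catalog down to educational
    # items and build a personalized playlist from a cloud-based learner profile.
    EDU_CATEGORIES = {"education", "training", "documentary"}   # assumed category tags

    catalog = [
        {"title": "Intro to Horticulture", "category": "education",     "topics": {"plants", "biology"}},
        {"title": "Blockbuster Movie",     "category": "entertainment", "topics": set()},
        {"title": "Welding Basics",        "category": "training",      "topics": {"trades", "welding"}},
    ]

    learner_profile = {"interests": {"plants", "welding"}}   # e.g., pulled from a learner profile service

    def educational_only(items):
        """Keep only items whose category is education- or training-related."""
        return [item for item in items if item["category"] in EDU_CATEGORIES]

    def build_playlist(items, profile):
        """Rank educational items by how much their topics overlap the learner's interests."""
        scored = [(len(item["topics"] & profile["interests"]), item) for item in items]
        return [item["title"] for score, item in sorted(scored, key=lambda pair: -pair[0]) if score > 0]

    print(build_playlist(educational_only(catalog), learner_profile))
    # -> ['Intro to Horticulture', 'Welding Basics']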

Will participants in MOOCs, students in virtual K-12 schools, homeschoolers, and others take advantage of learning from home?

Will solid ROI’s from having thousands of participants paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 

bigscreen_rocket_league

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 


Other items on today’s announcements:


 

 

macbookpro-10-27-16

 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 

whydeeplearningchangingyourlife-sept2016

 

Why deep learning is suddenly changing your life — from fortune.com by Roger Parloff

Excerpt:

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

 

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

 

 

ai-machinelearning-deeplearning-relationship-roger-fall2016

 

 

Graphically speaking:

 

ai-machinelearning-deeplearning-relationship-fall2016

 

 

 

“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

 

 

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
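
To make the “A to B mapping” framing concrete, here is a minimal, hypothetical sketch of one such mapping (email text in, a spam/not-spam label out). It is purely illustrative: the article prescribes no tooling, so this uses scikit-learn, and the example emails and labels are invented.

    # Minimal "A to B mapping" in the spirit of Ng's description:
    # A = the raw text of an email, B = a spam / not-spam label.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Toy training data (invented, just to make the sketch runnable).
    emails = [
        "Win a free cruise now, click here",
        "Lowest price on meds, limited offer",
        "Meeting moved to 3pm, see agenda attached",
        "Here are the lecture notes from Tuesday's class",
    ]
    labels = ["spam", "spam", "not spam", "not spam"]   # the "B" side of the mapping

    # Pipeline: convert text to numeric features, then learn the A -> B mapping
    # with a small neural network.
    model = make_pipeline(
        TfidfVectorizer(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(emails, labels)

    # Given a new "A", predict its "B".
    print(model.predict(["Free offer: click now to win"]))        # likely ["spam"]
    print(model.predict(["Can we reschedule tomorrow's call?"]))  # likely ["not spam"]

Swap the inputs and labels (audio clips and transcripts, loan applications and repayment outcomes, usage patterns and dispatch decisions) and, with enough data and a much larger network, the same supervised-learning recipe covers the other examples Ng lists.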

 

 

 

 

Introducing the new Surface family — from microsoft.com
To do great things, you need powerful tools that deliver an ideal balance of craftsmanship, performance, and versatility. Every Surface device is engineered with these things in mind, and you at the center. And that’s how the Surface family does more. Just like you.

 

 

microsoftintrossurfacedesktop2-10-26-16

 

Also see the new Surface Dial:

 

mssurfacedial-10-26-16

 

 

 

Microsoft ‘Surface Studio’ and ‘Dial’ Up Close  — from blogs.barrons.com by Tiernan Ray

Excerpt (emphasis DSC):

Microsoft (MSFT) this morning held an event in downtown Manhattan that included both an update to Windows, the “Creators” edition, and new versions of the company’s Surface tablet computer, including a revamp of the “Surface Book” laptop and tablet combo device, and a new desktop machine called the “Surface Studio” that has the thinnest display ever made, the company claims.

Perhaps the most intriguing thing at the event was something called “Dial,” a rotating puck device that can function like a wireless mouse with the Studio, but can also be placed right on top of the display itself to bring up a context-specific menu of functions, or to perform actions like cut and paste.

 

 

 

Microsoft announces its first desktop PC, the $3,000 Surface Studio — from businessinsider.com by Steve Kovach

Excerpt (emphasis DSC):

Microsoft on Wednesday announced its first desktop PC, the Surface Studio.

It’s an all-in-one computer, designed to compete with Apple’s iMac. The PC is geared toward professionals, and it has high-end specs designed for tasks like video or photo editing.

But the real surprises are the adjustable display and a new accessory called the Surface Dial. The display can lie nearly flat on the table, giving graphics artists the ability to draw and work. The Surface Dial can be placed on the screen to bring up color palettes and other options. The Surface Dial will work with other Surface products — the Surface Pro and Surface Book — but you won’t be able to use it on the screens.

 

 

 

Microsoft wants to bring machine learning into the mainstream — from networkworld.com by Steven Max Patterson
Microsoft released the beta of the Cognitive Toolkit with machine learning models, infrastructure and development tools, enabling customers to start building

Excerpt (emphasis DSC):

Microsoft just released the open-source-licensed beta of the Microsoft Cognitive Toolkit on GitHub. This announcement represents a shift in Microsoft’s customer focus from research to implementation. It is an update to the Computational Network Toolkit (CNTK). The toolkit is a supervised machine learning system in the same category as other open-source projects such as TensorFlow, Caffe, and Torch.

Microsoft is one of the leading investors in and contributors to the open machine learning software and research community. A glance at the Neural Information Processing Systems (NIPS) conference reveals that there are just four major technology companies committed to moving the field of neural networks forward: Microsoft, Google, Facebook and IBM.

This announcement signals Microsoft’s interest in bringing machine learning into the mainstream. The open-source license reveals Microsoft’s continued collaboration with the machine learning community.

 

 

 

Microsoft just democratized virtual reality with $299 headsets — from pcworld.com by Gordon Mah Ung

Excerpt:

VR just got a lot cheaper.

Microsoft on Wednesday morning said PC OEMs will soon be shipping VR headsets that enable virtual reality and mixed reality starting at $299.

Details of the hardware and how it works were sparse, but Microsoft said HP, Dell, Lenovo, Asus, and Acer will be shipping the headsets timed with its upcoming Windows 10 Creators Update, due in spring 2017.

Despite the relatively low price, the upcoming headsets may have a big advantage over HTC and Valve’s Vive and Facebook’s Oculus Rift: no need for separate calibration hardware. Both the Vive and the Rift require multiple emitters on stands to be placed around a room for their positioning to function.

 

microsoft-299-vr-headsets-10-26-16

 

 

The 10 Coolest Features Coming to Windows 10 — from wired.com by Michael Calore

Excerpt:

Microsoft is gearing up for a Windows refresh. The Windows 10 Creators Update will arrive on all Windows 10 devices for free in the spring of 2017. Today, Microsoft showed off all the new features coming to the multi-mode OS. Here’s the best of what will be coming to your Windows PC or Surface device.

 

 

 

 

 

The Surface Studio Story: How Microsoft Reimagined The Desktop PC For Creativity — from fastcompany.com by Mark Sullivan
A 28-inch screen, a very special hinge, and a new type of input device add up to an experience conceived with artists and designers in mind.

 

 

 

IBM Watson Education and Pearson to drive cognitive learning experiences for college students — from prnewswire.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE: IBM) and Pearson (FTSE: PSON) the world’s learning company, today announced a new global education alliance intended to make Watson’s cognitive capabilities available to millions of college students and professors.

Combining IBM’s cognitive capabilities with Pearson’s digital learning products will give students a more immersive learning experience with their college courses and an easy way to get help and insights when they need it, all by asking questions in natural language just as they would with another student or professor. Importantly, it provides instructors with insights about how well students are learning, allowing them to better manage the entire course and flag students who need additional help.

For example, a student experiencing difficulty while studying for a biology course can query Watson, which is embedded in the Pearson courseware. Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights.  Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.
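
To make the interaction pattern in that excerpt a bit more concrete, here is a toy sketch of an assess-then-hint loop. It is emphatically not IBM Watson’s actual API or Pearson’s courseware; it is just an invented illustration of “assess the student’s response, then guide with progressively more specific hints.”

    # Toy illustration only: the kind of assess-then-hint loop described in the
    # excerpt above, NOT IBM Watson's actual API or Pearson's courseware.
    HINTS = {
        "mitochondria": [
            "Think about which organelle produces most of the cell's ATP.",
            "Its nickname is the 'powerhouse of the cell'.",
            "The answer starts with 'mito'.",
        ]
    }

    def tutor(question_key, correct_answer, get_student_answer):
        """Ask repeatedly, giving progressively more specific hints after wrong answers."""
        for hint in HINTS[question_key] + [None]:
            answer = get_student_answer()
            if answer.strip().lower() == correct_answer:
                return "Correct! Nice work."
            if hint:
                print("Hint:", hint)
        return f"The answer was '{correct_answer}'. Let's review that section together."

    # Example run with canned student answers standing in for real input():
    canned = iter(["chloroplast", "ribosome", "mitochondria"])
    print(tutor("mitochondria", "mitochondria", lambda: next(canned)))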

 

 

ibm-watson-2016

 

 

Udacity partners with IBM Watson to launch the AI Nanodegree — from venturebeat.com by Paul Sawers

Excerpt:

Online education platform Udacity has partnered with IBM Watson to launch a new artificial intelligence (AI) Nanodegree program.

Costing $1,600 for the full two-term, 26-week course, the AI Nanodegree covers a myriad of topics including logic and planning, probabilistic inference, game-playing / search, computer vision, cognitive systems, and natural language processing (NLP). It’s worth noting here that Udacity already offers an Intro to Artificial Intelligence (free) course and the Machine Learning Engineer Nanodegree, but with the A.I. Nanodegree program IBM Watson is seeking to help give developers a “foundational understanding of artificial intelligence,” while also helping graduates identify job opportunities in the space.

 

 

The Future Cognitive Workforce Part 1: Announcing the AI Nanodegree with Udacity — from ibm.com by Rob High

Excerpt:

As artificial intelligence (AI) begins to power more technology across industries, it’s been truly exciting to see what our community of developers can create with Watson. Developers are inspiring us to advance the technology that is transforming society, and they are the reason why such a wide variety of businesses are bringing cognitive solutions to market.

With AI becoming more ubiquitous in the technology we use every day, developers need to continue to sharpen their cognitive computing skills. They are seeking ways to gain a competitive edge in a workforce that increasingly needs professionals who understand how to build AI solutions.

It is for this reason that today at World of Watson in Las Vegas we announced with Udacity the introduction of a Nanodegree program that incorporates expertise from IBM Watson and covers the basics of artificial intelligence. The “AI Nanodegree” program will be helpful for those looking to establish a foundational understanding of artificial intelligence. IBM will also help aid graduates of this program with identifying job opportunities.

 

 

The Future Cognitive Workforce Part 2: Teaching the Next Generation of Builders — from ibm.com by Steve Abrams

Excerpt:

Announced today at World of Watson, and as Rob High outlined in the first post in this series, IBM has partnered with Udacity to develop a nanodegree in artificial intelligence. Rob discussed IBM’s commitment to empowering developers to learn more about cognitive computing and equipping them with the educational resources they need to build their careers in AI.

To continue on this commitment, I’m excited to announce another new program today geared at college students that we’ve launched with Kivuto Solutions, an academic software distributor. Via Kivuto’s popular digital resource management platform, students and academics around the world will now gain free access to the complete IBM Bluemix Portfolio — and specifically, Watson. This offers students and faculty at any accredited university – as well as community colleges and high schools with STEM programs – an easy way to tap into Watson services. Through this access, teachers will also gain a better means to create curriculum around subjects like AI.

 

 

 

IBM introduces new Watson solutions for professions — from finance.yahoo.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE:IBM) today unveiled a series of new cognitive solutions intended for professionals in marketing, commerce, supply chain and human resources. With these new offerings, IBM is enabling organizations across all industries and of all sizes to integrate new cognitive capabilities into their businesses.

Watson solutions learn in an expert way, which is critical for professionals that want to uncover insights hidden in their massive amounts of data to understand, reason and learn about their customers and important business processes. Helping professionals augment their existing knowledge and experience without needing to engage a data analyst empowers them to make more informed business decisions, spot opportunities and take action with confidence.

“IBM is bringing Watson cognitive capabilities to millions of professionals around the world, putting a trusted advisor and personal analyst at their fingertips,” said Harriet Green, general manager Watson IoT, Cognitive Engagement & Education. “Similar to the value that Watson has brought to the world of healthcare, cognitive capabilities will be extended to professionals in new areas, helping them harness the value of the data being generated in their industries and use it in new ways.”

 

 

 

IBM says new Watson Data Platform will ‘bring machine learning to the masses’ — from techrepublic.com by Hope Reese
On Tuesday, IBM unveiled a cloud-based AI engine to help businesses harness machine learning. It aims to give everyone, from CEOs to developers, a simple platform to interpret and collaborate on data.

Excerpt:

“Insight is the new currency for success,” said Bob Picciano, senior vice president at IBM Analytics. “And Watson is the supercharger for the insight economy.”

Picciano, speaking at the World of Watson conference in Las Vegas on Tuesday, unveiled IBM’s Watson Data Platform, touted as the “world’s fastest data ingestion engine and machine learning as a service.”

The cloud-based Watson Data Platform will “illuminate dark data,” said Picciano, and will “change everything—absolutely everything—for everyone.”

 

 

 

See the #IBMWoW hashtag on Twitter for more news/announcements coming from IBM this week:

 

ibm-wow-hashtag-oct2016

 

 

 

 

Previous postings from earlier this month:

 

  • IBM launches industry first Cognitive-IoT ‘Collaboratory’ for clients and partners
    Excerpt:
    IBM has unveiled an €180 million investment in a new global headquarters to house its Watson Internet of Things business. Located in Munich, the facility will promote new IoT capabilities around Blockchain and security as well as supporting the array of clients that are driving real outcomes by using Watson IoT technologies, drawing insights from billions of sensors embedded in machines, cars, drones, ball bearings, pieces of equipment and even hospitals. As part of a global investment designed to bring Watson cognitive computing to IoT, IBM has allocated more than $200 million to its global Watson IoT headquarters in Munich. The investment, one of the company’s largest ever in Europe, is in response to escalating demand from customers who are looking to transform their operations using a combination of IoT and Artificial Intelligence technologies. Currently IBM has 6,000 clients globally who are tapping Watson IoT solutions and services, up from 4,000 just 8 months ago.

 

 

cognitiveapproachhr-oct2016

 

 

 

 

 

From DSC:
The other day I posted some ideas regarding how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”). Here is one of the graphics from that posting:

 

horticulturalapp-danielchristian

These affordances are just now starting to be uncovered as machines become increasingly able to recognize patterns, objects… even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS, who gave me permission to share his solid reflections and ideas:

 

nikosandriotisidea-oct2016

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about his ideas in light of how they interact with, and impact, web-based learner profiles, credentialing, badging, and lifelong learning. What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations affect institutions that are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

futureproofyourself-msfuturelab-2016

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and that are not innovating enough?

 

 

 