We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting is Part III in a series of postings illustrating how quickly things are moving (see Part I and Part II), and it asks:

  • How do we collectively start talking about the future that we want?
  • Then, how do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved… but who else should be involved?

As I mentioned in Part I, I want to again refer to Gerd Leonhard’s work, as it is relevant here. Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

Looking at the items below, ask yourself: is this the kind of future that we want? Some of the things mentioned could well prove to be very positive and helpful. However, there are some very troubling advancements and developments as well.

The point here is that we had better start discussing the pros and cons of each of these areas (and many more that I’m not addressing here), or our dreams will turn into our nightmares and we will have missed what Edward Cornish and the World Future Society are trying to get at.

 


 

Google’s Artificial Intelligence System Masters Game of ‘Go’ — from abcnews.go.com by Alyssa Newcomb

Excerpt:

Google just mastered one of the biggest feats in artificial intelligence since IBM’s Deep Blue beat Garry Kasparov at chess in 1997.

The search giant’s AlphaGo computer program swept the European champion of Go, a complex game with trillions of possible moves, in a five-game series, according to Demis Hassabis, head of Google’s machine learning, who announced the feat in a blog post that coincided with an article in the journal Nature.

While computers can now compete at the grand master level in chess, teaching a machine to win at Go has presented a unique challenge since the game has trillions of possible moves.

Along these lines, also see:
Mastering the game of Go with deep neural networks and tree search — from deepmind.com
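The hard part of Go is its enormous branching factor, which is why AlphaGo pairs neural networks with Monte Carlo search rather than brute force. The toy sketch below (my own illustration, not AlphaGo’s actual algorithm) shows the Monte Carlo idea in miniature: estimate each move’s strength by playing many random games to completion, here on a simple Nim-style game where players remove 1 to 3 stones and whoever takes the last stone wins.

```python
import random

def playout(pile, current_is_us):
    """Finish the game with uniformly random moves.
    Returns True if 'we' take the last stone."""
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return current_is_us
        current_is_us = not current_is_us

def best_move(pile, rollouts=2000):
    """Pick the move whose random playouts win most often (pure Monte Carlo)."""
    best, best_rate = None, -1.0
    for move in range(1, min(3, pile) + 1):
        if pile - move == 0:
            rate = 1.0  # taking the last stone wins outright
        else:
            # After our move, the opponent is to move on the smaller pile.
            wins = sum(playout(pile - move, current_is_us=False)
                       for _ in range(rollouts))
            rate = wins / rollouts
        if rate > best_rate:
            best, best_rate = move, rate
    return best

random.seed(42)
print(best_move(5))  # optimal play leaves a multiple of 4, i.e. remove 1
```

In the real system, the random playouts are guided by a learned policy network and positions are scored by a value network; the uniform rollouts here stand in for both, and they already show why sampling beats exhaustively enumerating trillions of move sequences.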

 

 

 

Harvard is trying to build artificial intelligence that is as fast as the human brain — from futurism.com
Harvard University and IARPA are working together to study how AI can work as efficiently and effectively as the human brain.

Excerpt:

Harvard University has been given $28M by the Intelligence Advanced Research Projects Activity (IARPA) to study why the human brain is significantly better at learning and retaining information than artificial intelligence (AI). The investment into this study could potentially help researchers develop AI that’s faster, smarter, and more like human brains.

 

 

Digital Ethics: The role of the CIO in balancing the risks and rewards of digital innovation — from mis-asia.com by Kevin Wo; with thanks to Gerd Leonhard for this posting

What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing. More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalized customer experiences, as well as unprecedented productivity, efficiency, and innovation. However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.

According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things. Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.  

As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.

 

 

New algorithm points the way towards regrowing limbs and organs — from sciencealert.com by David Nield

Excerpt:

An international team of researchers has developed a new algorithm that could one day help scientists reprogram cells to plug any kind of gap in the human body. The computer code model, called Mogrify, is designed to make the process of creating pluripotent stem cells much quicker and more straightforward than ever before.

A pluripotent stem cell is one that has the potential to become any type of specialised cell in the body: eye tissue, or a neural cell, or cells to build a heart. In theory, that would open up the potential for doctors to regrow limbs, make organs to order, and patch up the human body in all kinds of ways that aren’t currently possible.

 

 

 

The world’s first robot-run farm will harvest 30,000 heads of lettuce daily — from techinsider.io by Leanna Garfield

Excerpt (from DSC):

The Japanese lettuce production company Spread believes the farmers of the future will be robots.

So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.

Don’t expect a bunch of humanoid robots to roam the halls, however; the robots look more like conveyor belts with arms. They’ll plant seeds, water plants, and trim lettuce heads after harvest in the Kyoto, Japan farm.

 

 

 

Drone ambulances may just be the future of emergency medical vehicles — from interestingengineering.com by Gabrielle Westfield

Excerpt:

Drones are advancing every day. They are getting larger, faster, and more efficient to control. Meanwhile, the medical field keeps facing major losses because emergency response vehicles cannot reach their destinations fast enough. Understandably so: especially in larger cities, traffic is nearly impossible to move through swiftly. Red flashing lights atop or not, sometimes the roads simply cannot open up. It makes total sense that the future of ambulances would be paved in the open sky rather than on unpredictable roads.


 

 

 

Phone shop will be run entirely by Pepper robots — from telegraph.co.uk

Excerpt (emphasis DSC):

Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.

The four-foot-tall robots will be on hand to answer questions, provide directions, and guide customers in taking out phone contracts until early April. It’s currently unknown which brands of phone Pepper will be selling.

 

 

 

Wise.io introduces first intelligent auto reply functionality for customer support organizations — from consumerelectronicsnet.com
Powered by Machine Learning, Wise Auto Response Frees Up Agent Time, Boosting Productivity, Accelerating Response Time and Improving the Customer Experience

Excerpt:

BERKELEY, CA — (Marketwired) — 01/27/16 — Wise.io, which delivers machine learning applications to help enterprises provide a better customer experience, today announced the availability of Wise Auto Response, the first intelligent auto reply functionality for customer support organizations. Using machine learning to understand the intent of an incoming ticket and determine the best available response, Wise Auto Response automatically selects and applies the appropriate reply to address the customer issue without ever involving an agent. By helping customer service teams answer common questions faster, Wise Auto Response removes a high percentage of tickets from the queue, freeing up agents’ time to focus on more complex tickets and drive higher levels of customer satisfaction.
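The mechanism described is essentially intent classification with a confidence gate: score an incoming ticket against known intents, auto-reply when the match is strong, and escalate to a human agent otherwise. The sketch below is my own illustration of that pattern (the intents, keywords, and replies are all invented; this is not Wise.io’s actual product or API).

```python
# Hypothetical intents and canned replies, invented for illustration.
INTENTS = {
    "password_reset": ("forgot reset password login access",
                       "To reset your password, use the 'Forgot password' link."),
    "billing": ("invoice charge billing refund payment",
                "Our billing team has posted your latest invoice to your account."),
}

def route_ticket(text, threshold=0.2):
    """Return ('auto', reply) when one intent matches confidently, else ('agent', None)."""
    tokens = set(text.lower().split())
    best_reply, best_score = None, 0.0
    for keywords, reply in INTENTS.values():
        kw = set(keywords.split())
        score = len(tokens & kw) / len(tokens | kw)  # Jaccard similarity
        if score > best_score:
            best_reply, best_score = reply, score
    if best_score >= threshold:
        return ("auto", best_reply)
    return ("agent", None)  # below the confidence gate: a human handles it

print(route_ticket("I forgot my password and cannot login"))
print(route_ticket("my order arrived damaged"))
```

A production system would use a learned classifier over much richer features than word overlap, but the confidence gate (answer automatically only when sure, otherwise hand off) is the part that frees up agent time without risking wrong automatic answers.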

 

 

Video game for treating ADHD looks to 2017 debut — from educationnews.org

Excerpt:

Akili Interactive Labs out of Boston has created a video game that they hope will help treat children diagnosed with attention-deficit hyperactivity disorder by teaching them to focus in a distracting environment.

The game, Project: EVO, is meant to be prescribed to children with ADHD as a medical treatment.  And after gaining $30.5 million in funding, investors appear to believe in it.  The company plans to use the funding to run clinical trials with plans to gain approval from the US Food and Drug Administration in order to be able to launch the game in late 2017.

Players will enter a virtual world filled with colorful distractions and be required to focus on specific tasks such as choosing certain objects while avoiding others.  The game looks to train the portion of the brain designed to manage and prioritize all the information taken in at one time.

 

Addendum on 1/29/16:

 

 

 

 

From DSC:
A close family member struggles with maintaining focus. She is easily distracted by noises and motions inside the classroom. When she’s distracted, there’s a loss of focus…which then results in errors and missed learning cues. Although she hasn’t been diagnosed as having Attention Deficit Disorder (ADD), she still struggles in this area.

That got me to wondering…

  • Could virtual reality be used to help students w/ Attention Deficit Disorder (ADD), and/or with Attention Deficit Hyperactivity Disorder (ADHD), and/or with folks like my family member who are easily distracted?
    .
  • That is, could students who are struggling within their current learning environments create their own, individualized VR-based learning environments that better suit their learning preferences? For example, a student could immerse herself in a quieter setting with fewer visual distractions, or in a setting with soft, mellow music playing while she studies by a gently rolling river, in a choice of library-based settings, in rooms offering a great deal of natural light, on a beach, or on a mountaintop.

Hmmm…

 

vr-students-alchemylearning

Image from:
http://alchemylearning.com/adopting-virtual-reality-for-education/

 

 

river-stream

Image from:
http://www.snipview.com/q/Wykoff_Run

 

 

 

From DSC:
Currently, you can add interactivity to your digital videos; several tools allow you to do this.

So I wonder…what might interactivity look like in the near future when we’re viewing things in immersive virtual reality (VR)-based situations? When we’re talking about videos made with cameras that provide 360 degrees of coverage, how are we going to interact with, drive, and maneuver around such videos? What types of gestures and/or input devices, hardware, and software will we use to do so?

What new forms of elearning/training/education will we have at our disposal? How will such developments impact instructional design/designers? Interaction designers? User experience designers? User interface designers? Digital storytellers?

Hmmm…

The forecast?  High engagement, interesting times ahead.

Also see:

  • Interactive video is about to get disruptive — from kineo.com by James Cory-Wright
    Excerpt:
    Seamless and immersive because it all happens within the video
    We can now have embedded hotspots (motion tags) that move within the video; we can use branching within the video to change the storyline depending on the decisions you make; we can show consequences of making that decision; we can add video within video to share expert views, link directly to other rich media or gather real-time data via social media tools – all without leaving the actual video. A seamless experience.
    .
  • Endless learning: Virtual reality in the classroom — from pixelkin.org by David Jagneaux
    Excerpt:
    What if you could be part of the audience for Martin Luther King Jr.’s riveting “I Have a Dream” speech? What if you could stand in a chemistry lab and experiment without any risk of harm or danger? What if you could walk the earth millions of years ago and watch dinosaurs? With virtual reality technology, these situations could become real. Virtual reality (VR) is a hot topic in today’s game industry, but games are only one aspect of the technology. I’ve had fun putting  on a headset and shooting  down ships in outer space. But VR also has the potential to enhance education in classrooms.
    .
  • Matter VR

 

MatterVR-Jan2016

 

Holograms are coming to a high street near you — from telegraph.co.uk by Rebecca Burn-Callander
Can you tell what’s real and what’s not?

Excerpt:

Completely realistic holograms, that will be generated when you pass a sensor, are coming to the high street.

Some will be used to advertise, others will have the ability to interact with you, and show you information. In shops, when you find a shirt you like, the technology is now here to bring up a virtual clothes rail showing you that same shirt in a variety of colours, and even tell you which ones are in stock, all using the same jaw-dropping imaging we have previously only experienced wearing 3D glasses at the cinema.

Holograms, augmented reality – which superimposes technology over the real world – and virtual reality (VR), its totally immersive counterpart, are tipped to be the hot trends in retail next year. Pioneers of the technology are set to find increasingly entertaining, useful and commercially viable ways of using it to tempt people into bricks-and-mortar stores, and fight back against the rise of online shopping.

 

 

 

 

WaveOptics’ technology could bring physical objects, such as books, to life in new ways

 

 


 

 

From DSC:
What might our learning spaces offer us in the not-too-distant future when:

  • Sensors are built into most of our wearable devices?
  • Our BYOD-based devices serve as beacons that use machine-to-machine communications?
  • Artificial intelligence (AI) gets integrated into our learning spaces?
  • The Internet of Things (IoT) trend continues to pick up steam?

Below are a few thoughts/ideas on what might be possible.

A faculty member walks into a learning space, the sensors/beacons communicate with each other, the lights dim to preset levels, and the main display turns on and goes to a certain site (the latter happens because the beacons have already authenticated the professor and logged him or her into the appropriate systems in the background). Personalized settings per faculty member.

A student walks over to Makerspace #1 and receives a hologram that relays some 30,000-foot level instructions on what the initial problem to be solved is about. This has been done using the student’s web-based learner profile — whereby the sensors/beacons communicate who the student is as well as some basic information about what that particular student is interested in. The problem presented takes these things into consideration. (Think IBM Watson, with the focus being able to be directed towards each student.) The student’s interest is piqued, the problem gets their attention, and the stage is set for longer lasting learning. Personalized experiences per student that tap into their passions and their curiosities.
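One way to picture the plumbing behind both scenarios is a room controller that looks up the authenticated device and applies that person’s stored preferences. This is a sketch under invented assumptions: the beacon IDs, profiles, URL, and settings below are all hypothetical.

```python
# Invented profiles keyed by an authenticated beacon/device ID.
PROFILES = {
    "beacon-1001": {"role": "faculty", "lights_pct": 40,
                    "display_url": "https://lms.example.edu/today"},
    "beacon-2002": {"role": "student",
                    "interests": ["marine biology", "robotics"]},
}

def on_beacon_detected(beacon_id, room):
    """Apply the detected person's preferences to the room's state dict."""
    profile = PROFILES.get(beacon_id)
    if profile is None:
        return room  # unrecognized device: change nothing
    if profile["role"] == "faculty":
        room["lights_pct"] = profile["lights_pct"]
        room["main_display"] = profile["display_url"]  # already authenticated
    elif profile["role"] == "student":
        # Frame the makerspace problem around this student's interests.
        room["briefing"] = "Intro problem themed on: " + ", ".join(profile["interests"])
    return room

print(on_beacon_detected("beacon-1001", {}))
print(on_beacon_detected("beacon-2002", {}))
```

The interesting design question is less the dispatch logic than where the learner profile lives and who controls it; the same lookup could just as easily draw on a web-based learner profile maintained by the student.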

The ramifications of the Internet of Things (IoT) will likely involve the classroom at some point.  At least I hope they do. Granted, the security concerns are there, but the IoT wave likely won’t be stopped by security-related concerns. Vendors will find ways to address them, hackers will counter-punch, and the security-related wars will simply move/expand to new ground. But the wave won’t be stopped.

So when we talk about “classrooms of the future,” let’s think bigger than we have been thinking.

 

ThinkBiggerYet-DanielChristian-August282013

 

 

 

Also see:

What does the Internet of Things mean for meetings? — from meetingsnet.stfi.re by Betsy Bair

Excerpt:

The IoT has major implications for our everyday lives at home, as well as in medicine, retail, offices, factories, worksites, cities, or any structure or facility where people meet and interact.

The first application for meetings is the facility where you meet: doors, carpet, and lighting can all be connected to the Internet through sensors. You can begin to track where people are going, but it’s much more granular.

Potentially you can walk into a meeting space, it knows it’s you, it knows what you like, so your experience can be customized and personalized.

Right now beacons are fairly dumb, but Google and Apple are working on frameworks, building operating systems, that allow beacons to talk to each other.

 

 

Addendum on 1/14/16:

  • Huddle Space Products & Trends for 2016 — from avnetwork.com by Cindy Davis
    Excerpt:
    “The concept is that you should be able to walk into these rooms, and instead of being left with a black display, maybe a cable on the table, or maybe nothing, and not know what’s going on; what if when you walked into the room, the display was on, and it showed you what meeting room it was, who had the meeting room scheduled, and is it free, can just walk in and I use it, or maybe I am in the wrong room? Let’s put the relevant information up there, and let’s also put up the information on how to connect. Although there’s an HDMI cable at the table, here’s the wireless information to connect.
 

Touchpress for Apple TV

touchpress-jan2016

 

beethoven-jan2016

 

 

Earthlapse

 

Earthlapse-Jan2016

 

 

Expand your vocabulary with Elevate Showdown on Apple TV — from appadvice.com by Jeff Byrnes

Excerpt:

Compete to expand your vocabulary
With Elevate Showdown, you race to match words to descriptions, playing against your friends in group mode using a custom Apple TV controller app, or versus competitors from around the world with Game Center integration. In group mode, you can play against up to three other people, while Game Center pits you head-to-head with a competitor.

 

ElevateApp-AppleTV-Jan2016

 

 

 

10 must-have Apple TV apps — from pcmag.com by Jordan Minor
Enjoy the App Store experience on your television with our Apple TV app starter set.

Excerpt (some example apps):

 

 

You can now explore 360-degree videos on Apple TV, no VR headset required — from fastcompany.com by Peter Wade
With a new app by Disney-backed virtual reality firm Littlstar, Apple TV users can access the platform’s library of 360-degree videos.

Related item:

Littlstar is the first to bring immersive 360 video to Apple TV — from twinkle.littlstar.com

Excerpt:

New York, NY – December 22, 2015 – Littlstar, the premier global network dedicated to virtual reality and 360 video, today announced the launch of its Apple TV app. The app, which is the first to bring immersive content to the new Apple TV platform, gives users access to a wide range of 360 video content from well-known brands.

 

 

Everything you need to know about the new Apple TV App Store — from blog.appfigures.com

Excerpt:

AppleTVappsbycategory-dec2015

 

 

App showdown: Roku vs. Chromecast vs. Apple TV vs. Fire TV vs. Android TV — from macworld.com

 

 

 

From DSC:
Listed below are some potential tools/solutions regarding bringing in remote students and/or employees into face-to-face settings.

First of all, why pursue this idea/approach at all?

Because schools, colleges, universities, and businesses are already going through the effort — and devoting the resources — to put courses together and offer them in face-to-face settings. So why not create new and additional revenue streams for the organization while also extending the sphere of influence of the teachers, faculty members, trainers, and/or experts?

The following tools offer some examples of the growing capabilities for doing so. These types of tools take some of the things that are already happening in active-learning-based classrooms and open up the learning to remote learners as well.

Eventually this will all be possible from your living room, using morphed
versions of today’s Smart/Connected “TVs”, VR-based devices, and the like.

————————

Bluescape

Excerpts from their website:

  • Each Bluescape workspace is larger than 145 football fields, a scale that allows teams to capture and build upon every aspect of a project.
  • A single Bluescape workspace enables unlimited users to work and collaborate in real time.
  • Edits to your Bluescape session happen instantly, so geographically distributed teams can collaborate in real time.
  • Write or type on multi-colored notecards that you can easily move and resize. Perfect for organizing and planning projects.
  • Ideate and quickly iterate by writing and drawing in a full range of colors and line thicknesses. Works with iOS devices and Bluescape multi-touch displays.
  • Add pictures and write on the workspace via the iOS App for iPads.
  • Securely access your Bluescape workspaces with a web browser, our iOS app, or our multi-touch displays.
  • Easily share what’s on your computer screen with other people.
  • Bluescape creates persistent online workspaces that you can access at any time that works for you.
  • Work with any popular website like Google, YouTube or CNN in your workspace.
  • Drag and drop files like JPEGs and PNGs into your Bluescape workspace for inspiration, analysis, and valuation.
  • Share your screen instantly during online or in-person meetings.
  • Use the same touch gestures as you do on smart phones, even handwriting on your iPad.

 

BlueScape-2016

BlueScape-2016-screens

 

 

 

 

Mezzanine, from Oblong

 

Mezzanine-By-Oblong-Jan2016

 

 

 

 

ThinkHub Demo: MultiSite Collaboration

 

 

 

Then there are tools that are not quite as robust as the above tools, but can also bring in remote learners into classroom settings:

 

Double Robotics Telepresence Robot

DoubleRobotics-Feb2014

 

doublerobotics dot com -- wheels for your iPad

 

Beam+

Beam-Plus=-2016

 

 

Anybots

Anybots-2016

 

 

 

iRobot

 

irobot-jan2016

 

 

Vgo

vgo-jan2016

 

 

…and there are other telepresence robots out there as well.

 

 

Some other somewhat related tools/solutions include:

Kubi

 

kubi-Jan2016

 

Swivl

Swivl-2016

 

 

Vaddio RoboSHOT PTZ cameras

The RoboSHOT 12 is for small to medium sized conference rooms. This model features a 12X optical zoom and a 73° wide angle horizontal field of view, which provides support for applications including UCC applications, videoconferencing, distance learning, lecture capture, telepresence and more.

The RoboSHOT 30 camera performs well in medium to large rooms. It features a 30X optical zoom with a 2.3° tele end to 65° wide end horizontal field of view and provides support for applications including House of Worship productions, large auditorium A/V systems, large distance learning classrooms, live event theatres with IMAG systems, large lecture theatres with lecture capture and more.

 

 

Panopto

 

Panopto-Jan2016

 

 

6 top iPad collaboration apps to bring remote teams closer together — from ipad.appstorm.net by Nick Mead

 

 

 

 

2016 technology predictions for CIOs — from enterprisersproject.com

Excerpts:

  • Enterprises powered by machine learning
  • Predictions on the cloud, the road, and more
  • The connected home is integrated
  • IT grows up, grows the business
  • Future disruptors based on human behavior
  • Competition heats up in the cloud

 

 

MIT’s amazing new app lets you program any object — from fastcodesign.com
The Reality Editor is a Minority Report style AR app that makes programming your smart home as easy as connecting the dots.

 

MITsRealityEditor-Dec2015

 

 

Take me away! Elderly home residents given virtual reality goggles to help them feel like they are travelling the world — from dailymail.co.uk by Belinda Cleary

  • Residents at a Perth nursing home are trialing virtual reality goggles
  • The technology will allow them to see the world without leaving their seats
  • It’s hoped the trial will bring back lost memories in dementia patients

 

perth-VR-elderly

 

NASA partners with Microsoft to provide holographic computing in space — from seriouswonder.com by B.J. Murphy

Excerpt:

Partnering with multinational technology company Microsoft, NASA has since been engaging with their astronauts to use HoloLens headsets to help them make complex computations and provide them with virtual aid as they work inside the ISS. Labeled Project Sidekick, this form of space-based holographic computing will help empower astronauts by allowing them to achieve greater autonomy in their work as they explore and connect back home at NASA headquarters.

With the Cygnus delivery of the HoloLens headsets, expect holographic computing to become a crucial facet of future space exploration – one more item to check off of our list on, “How to become more like Star Trek.”

 

 

How to try virtual reality today without breaking the bank — from bgr.com by Jacob Siegal

Excerpt:

2016 might be the year that virtual reality finally takes hold in the tech world. Sony, Microsoft and Oculus VR are all planning to launch their own hardware before the end of next year, with tons of developers already hard at work on games, apps and other software to ensure that VR hits the ground running.

But if you don’t want to wait until next year to see what VR has to offer, you can take a sneak peek at the innovations today without putting a strain on your wallet.

 

 

Breaking Down Billion-Dollar AR/VR Investment In The Last 12 Months — from techcrunch.com by Tim Merel

 

 

 

 

Which VR Headset Holds the Pole Position? — from statista.com by Felix Richter

 

 

 

The show goes on in Paris – through augmented-reality glasses — from theguardian.com by Barbara Casassus
If your French doesn’t go beyond bonjour, you can still enjoy a night at a Parisian theatre thanks to new glasses that provide simultaneous translations

Excerpt:

It’s Saturday night at Le Comédia theatre in central Paris and I’m staring at the stage through square plastic glasses. While the actors in the musical Mistinguett, Reine des Années Folles sing boisterously in French, the words appear simultaneously in English on a small screen in the right-hand lens. Though it’s not the same as watching the show unfettered, I find it surprisingly easy to follow the translated dialogue along with the action.

.

 

 

Immersive VR Education

 

ConeOfLearning-Dale-ImmersiveLearning-Dec2015

 

Also see Immersive VREducation’s:
ER VR Trailer – Virtual Reality Medical Training Simulation

 

 

 

Virtual-reality lab explores new kinds of immersive learning — from chronicle.com by Ellen Wexler

Excerpt:

That can have implications in distance learning, he said. For students attending class via webcam or video lecture, the video is two-dimensional, and the audio doesn’t sound as it would if they were in a real classroom. Mr. Duraiswami thinks the virtual-reality technology could help the experience feel more immersive. “If all you’re seeing is a bunch of things in front of you, you’re not as immersed,” Mr. Duraiswami said. “You want the instructor to feel as if they’re right in front of you.”

 

 

 

10 killer media applications enabled by ‘virtual reality’ headsets — from eweek.com by Mike Elgan
Virtual reality headsets can do much more than ‘virtual reality,’ a technical term that is badly defined in most news reports. Here are 10 rapidly developing applications.

 

 

Deakin University to launch virtual and augmented reality hub — from cio.com.au by Rebecca Merrett
Industry partners, as well as students and staff, can get their hands on the latest virtual/augmented reality tech

Excerpt:

Deakin University will launch an Interactive Digital Centre Hub in the Melbourne CBD in the first half of 2016, which will allow industry partners to access the latest virtual and augmented reality technology. Partnering with EON Reality, more than US$10 million has been poured into the facility, which will be the first centre dedicated to virtual and augmented reality in the Asian region. Having a strong group of researchers in virtual reality, Deakin University decided to open a hub to facilitate working with industry and to host education programs and courses in this field.

 

 

Virtual reality could finally get people to care about climate change — from techinsider.io by Chris Weller

Excerpt:

As the founding director of Stanford’s Virtual Human Interaction Lab, Jeremy Bailenson firmly believes that statistics don’t make people care about issues.

Experiences do.

That’s why Bailenson has spent the last few years developing an underwater virtual reality (VR) experience that shows people firsthand how climate change impacts ocean health.

 

All the data in the world won’t make a problem seem real unless people care about it on an emotional level, he says. According to Bailenson, virtual reality solves that problem without creating new ones.

 

 

Virtual reality in 2016: The 10 biggest trends to watch — from techrepublic.com by Erin Carson
2016 promises to be a watershed year for virtual reality as a commercial product. Here’s what to expect.

 

Revolutionary tech for the real world — from createtomorrow.co.uk

Excerpt (emphasis DSC):

AR is also highly effective for education and training says Ronald Azuma who leads the AR team at Intel Labs. Why? “Because it makes instructions easier to understand by displaying them directly over the real-world objects that require manipulation, thus removing the cognitive load and ambiguity in spatially transforming directions from traditional media like manuals, text, images and videos into the situation at hand.”

 

 

 

Samsung launches Gear VR virtual reality headset in Australia, promises 360-degree web browsing — from news.com.au
AUSTRALIAN phone users will be able to play virtual reality games, watch 360-degree films, and navigate the web using their eyes as Samsung launches its third virtual reality headset.

 

 

CES 2016: driverless cars and virtual reality to dominate at world’s biggest technology show — from mirror.co.uk
The world’s biggest technology showcase kicks off in Las Vegas on 6 January 2016. Here’s what we know about what will be happening at the Consumer Electronics Show

 

 

Should your institution move into the Augmentarium future? — from ecampusnews.com by Ron Bethke
The University of Maryland, College Park, is leading the way in studying the innovative applications of augmented and virtual reality across a wide range of fields

Excerpt:

The potential applications of virtual and augmented reality in a host of disciplines–including education, science, medicine, the arts, entertainment and industry–are massive, say large institutions like the University of Maryland (UMD), whose Augmentarium serves as a potential instrumental model for innovative research facilities and universities looking to make their impact on the future.

 

 

Sundance 2016 dominated by VR, over 30 experiences listed — from vrfocus.com

Excerpt:

This year’s Sundance Film Festival in Park City, Utah was a surprise hit for virtual reality (VR) technology. It was here that Oculus VR revealed its new film-focused division, Oculus Story Studio, while plenty of other filmmakers and storytellers showcased their own projects using head-mounted displays (HMDs) in the festival’s New Frontier section. That section is set to return for the 2016 edition of the festival from 21st – 31st January, and is this time utterly dominated by VR experiences.

 

 

Virtual reality for all, finally — from scientificamerican.com by Larry Greenemeier
Will the new generation of headsets hitting the consumer electronics market deliver enhanced virtual-reality experiences at more affordable prices?

Excerpt:

You can be forgiven for rolling your eyes at the latest round of promises that virtual reality has finally arrived for the masses. Tech companies have been hanging their hats on that one for decades without much success, due to high prices and poorly rendered graphics that have given people headaches—literally.

Despite these missteps, a new generation of virtual-reality tech targeted at consumers has begun to hit the market, most prominently with Samsung’s $100 Gear VR visor released in late November. Both Gear VR and Google Cardboard—which starts at less than $20 and was launched in 2014—rely on a smartphone clipped or slid into their respective visors. The headset’s binocularlike lenses—between the phone and wearer—help deliver a 3-D VR experience. That makes the gadgets a relatively low-risk investment for consumers and enables tech companies to gauge public demand for virtual reality in advance of devices such as ones from Oculus, Sony and HTC slated for next year that feature more sophisticated embedded sensors and displays.

Now that VR headsets no longer cost tens of thousands of dollars the door is open for educational and social applications that are true to virtual reality’s roots, allowing people to learn and interact in digital classrooms and playgrounds.

 

Here’s what virtual reality means for kids stuck in the hospital — from techcrunch.com by Drew Olanoff

Excerpt:

Virtual reality is here to stay and it’s more important than just playing a game or watching a boxing match in a more immersive way. It could, and will, change lives. Imagine this kind of happiness in children’s hospitals everywhere, all of the time. Then think about how doctors can train for surgery virtually. Pretty amazing stuff, eh?

Addendum on 12/20/15:

 

From DSC:
Below are some further items that discuss the need for frameworks, policies, institutes, research, etc. to deal with a variety of game-changing technologies that are quickly coming down the pike (if they aren't already upon us).  We need such things to help us create a positive future.

Also see Part I of this thread of thinking, entitled “The need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!”  So many other items have come out since that posting that I felt I needed to add another one here.

What kind of future do we want? How are we going to ensure that we get there?

As the saying goes…“Just because we can do something doesn’t mean we should.” Or another saying comes to mind…“What could possibly go wrong with this? It’s a done deal.”

While some of the items below should have very positive impacts on society, I do wonder how long it will take the hackers — the ones who are bent on wreaking havoc — to mess up some of these types of applications…with potentially deadly consequences. Security-related concerns must be dealt with here.


 

5 amazing and alarming things that may be done with your DNA — from washingtonpost.com by Matt McFarland

Excerpt (emphasis DSC):

Venter is leading efforts to use digital technology to analyze humans in ways we never have before, and the results will have huge implications for society. The latest findings he described are currently being written up for scientific publications. Venter didn’t want to usurp the publications, so he wouldn’t dive into extensive detail of how his team has made these breakthroughs. But what he did share offers an exciting and concerning overview of what lies ahead for humanity. There are social, legal and ethical implications to start considering. Here are five examples of how digitizing DNA will change the human experience:

 

 

These are the decisions the Pentagon wants to leave to robots — from defenseone.com by Patrick Tucker
The U.S. military believes its battlefield edge will increasingly depend on automation and artificial intelligence.

Excerpt:

Conducting cyber defensive operations, electronic warfare, and over-the-horizon targeting. “You cannot have a human operator operating at human speed fighting back at determined cyber tech,” Work said. “You are going to need have a learning machine that does that.” He did not say whether the Pentagon is pursuing the autonomous or automatic deployment of offensive cyber capabilities, a controversial idea to be sure. He also highlighted a number of ways that artificial intelligence could help identify new waveforms to improve electronic warfare.

 

 

Britain should lead way on genetically engineered babies, says Chief Scientific Adviser — from telegraph.co.uk by Sarah Knapton
Sir Mark Walport, who advises the government on scientific matters, said it could be acceptable to genetically edit human embryos

Excerpt:

Last week more than 150 scientists and campaigners called for a worldwide ban on the practice, claiming it could ‘irrevocably alter the human species’ and lead to a world where inequality and discrimination were ‘inscribed onto the human genome.’

But at a conference in London [on 12/8/15], Sir Mark Walport, who advises the government on scientific matters, said he believed there were ‘circumstances’ in which the genetic editing of human embryos could be ‘acceptable’.

 

 

Cyborg Future: Engineers Build a Chip That Is Part Biological and Part Synthetic — from futurism.com

Excerpt:

Engineers have succeeded in combining an integrated chip with an artificial lipid bilayer membrane containing ATP-powered ion pumps, paving the way for more such artificial systems that combine the biological with the mechanical down the road.

 

 

Robots expected to run half of Japan by 2035 — from engadget.com by Andrew Tarantola
Something-something ‘robot overlords’.

Excerpt:

Data analysts Nomura Research Institute (NRI), led by researcher Yumi Wakao, figure that within the next 20 years, nearly half of all jobs in Japan could be accomplished by robots. Working with Professor Michael Osborne from Oxford University, who had previously investigated the same matter in both the US and UK, the NRI team examined more than 600 jobs and found that “up to 49 percent of jobs could be replaced by computer systems,” according to Wakao.

 

 

 

Cambridge University is opening a £10 million centre to study the impact of AI on humanity — from businessinsider.com by Sam Shead

Excerpt:

Cambridge University announced on [12/3/15] that it is opening a new £10 million research centre to study the impact of artificial intelligence on humanity.

The 806-year-old university said the centre, being funded with a grant from non-profit foundation The Leverhulme Trust, will explore the opportunities and challenges facing humanity as a result of further developments in artificial intelligence.

 

Cambridge-Center-Dec2015

 

 

Tech leaders launch nonprofit to save the world from killer robots — from csmonitor.com by Jessica Mendoza
Elon Musk, Sam Altman, and other tech titans have invested $1 billion in a nonprofit that would help direct artificial intelligence technology toward positive human impact. 

 

 

 

 

2016 will be a pivotal year for social robots — from therobotreport.com by Frank Tobe
1,000 Peppers are selling each month from a big-dollar venture between SoftBank, Alibaba and Foxconn; Jibo just raised another $16 million as it prepares to deliver 7,500+ units in Mar/Apr of 2016; and Buddy, Rokid, Sota and many others are poised to deliver similar forms of social robots.

Excerpt:

These new robots, and the proliferation of mobile robot butlers, guides and kiosks, promise to recognize your voice and face and help you plan your calendar, provide reminders, take pictures of special moments, text, call and videoconference, order fast food, keep watch on your house or office, read recipes, play games, read emotions and interact accordingly, and the list goes on. They are attempting to be analogous to a sharp administrative assistant that knows your schedule, contacts and interests and engages with you about them, helping you stay informed, connected and active.

 

 

IBM opens its artificial mind to the world — from fastcompany.com by Sean Captain
IBM is letting companies plug into its Watson artificial intelligence engine to make sense of speech, text, photos, videos, and sensor data.

Excerpt:

Artificial intelligence is the big, oft-misconstrued catchphrase of the day, making headlines recently with the launch of the new OpenAI organization, backed by Elon Musk, Peter Thiel, and other tech luminaries. AI is neither a synonym for killer robots nor a technology of the future, but one that is already finding new signals in the vast noise of collected data, ranging from weather reports to social media chatter to temperature sensor readings. Today IBM has opened up new access to its AI system, called Watson, with a set of application programming interfaces (APIs) that allow other companies and organizations to feed their data into IBM’s big brain for analysis.
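To make the "feed their data into IBM's big brain" idea concrete, here is a minimal sketch of what calling such an analysis API over HTTP typically looks like. The endpoint URL, auth scheme, and payload fields below are invented for illustration — they are not IBM's actual Watson API surface, so consult the real developer documentation before building anything.

```python
import json
from urllib import request

def build_analysis_request(api_key, text):
    """Construct (but do not send) an HTTP request for a hypothetical
    text-analysis endpoint. Building rather than sending keeps this
    sketch runnable offline; the URL and headers are placeholders."""
    url = "https://api.example.com/v1/analyze"  # hypothetical endpoint
    payload = json.dumps({"text": text, "features": ["sentiment", "entities"]})
    return request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,  # placeholder auth scheme
        },
        method="POST",
    )

req = build_analysis_request("my-key", "Watson is now open to developers.")
print(req.get_method(), req.full_url)
```

The point of the API-ification described in the article is exactly this shape: a company's own application POSTs its raw text, images, or sensor readings and gets structured analysis back, without hosting any of the machine-learning machinery itself.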

 

 

GE wants to give industrial machines their own social network with Predix Cloud — from fastcompany.com by Sean Captain
GE is selling a new service that promises to predict when a machine will break down…so technicians can preemptively fix it.

 

 

Foresight 2020: The future is filled with 50 billion connected devices — from ibmbigdatahub.com by Erin Monday

Excerpt:

By 2020, there will be over 50 billion connected devices generating continuous data.

This figure is staggering, but is it really a surprise? The world has come a long way from 1992, when the number of computers was roughly equivalent to the population of San Jose. Today, in 2015, there are more connected devices out there than there are human beings. Ubiquitous connectivity is very nearly a reality. Every day, we get a little closer to a time where businesses, governments and consumers are connected by a fluid stream of data and analytics. But what’s driving all this growth?
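As a sanity check on that 50-billion figure, a quick back-of-envelope calculation helps; the world-population number below is my own rough assumption, not from the article.

```python
# Back-of-envelope check on the "50 billion connected devices" projection.
devices_2020 = 50e9
world_population_2020 = 7.7e9  # rough projection -- an assumption here

devices_per_person = devices_2020 / world_population_2020
print(f"~{devices_per_person:.1f} connected devices per person")
```

Roughly six to seven devices per person on the planet — which is plausible once you count phones, wearables, vehicles, appliances, and industrial sensors, and it underlines why the data streams involved are described as continuous.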

 

 

Designing robots that learn as effortlessly as babies — from singularityhub.com by Shelly Fan

Excerpt:

A wide-eyed, rosy-cheeked, babbling human baby hardly looks like the ultimate learning machine.

But under the hood, an 18-month-old can outlearn any state-of-the-art artificial intelligence algorithm.

Their secret sauce?

They watch; they imitate; and they extrapolate.

Artificial intelligence researchers have begun to take notice. This week, two separate teams dipped their toes into cognitive psychology and developed new algorithms that teach machines to learn like babies. One instructs computers to imitate; the other, to extrapolate.
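To give a feel for what "learning from a single example" means computationally, here is a toy one-shot classifier: given exactly one labeled example per concept, it classifies a new observation by its nearest example. This is purely illustrative — the researchers' actual work used far more sophisticated methods (e.g., Bayesian program induction), not nearest-neighbor lookup.

```python
import math

def one_shot_classify(examples, query):
    """Classify `query` by the single nearest labeled example.
    A toy stand-in for one-shot learning: one example per concept,
    as a child might see -- not the researchers' actual algorithm."""
    return min(examples, key=lambda label: math.dist(examples[label], query))

# One example per "concept" (points in a toy 2-D feature space).
examples = {"cat": (1.0, 1.0), "dog": (5.0, 5.0)}
print(one_shot_classify(examples, (1.5, 0.8)))
```

Even this toy version captures the contrast the article draws: most machine-learning systems need thousands of examples per category, while a child — and a one-shot learner — generalizes from one.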

 

 

Researchers have found a new way to get machines to learn faster — from fortune.com by  Hilary Brueck

Excerpt:

An international team of data scientists is proud to announce the very latest in machine learning: they’ve built a program that learns… programs. That may not sound impressive at first blush, but making a machine that can learn based on a single example is something that’s been extremely hard to do in the world of artificial intelligence. Machines don’t learn like humans—not as fast, and not as well. And even with this research, they still can’t.

 

 

Team showcase how good Watson is at learning — from adigaskell.org

Excerpt:

Artificial intelligence has undoubtedly come a long way in the last few years, but there is still much to be done to make it intuitive to use.  IBM’s Watson has been one of the most well-known exponents during this time, but despite its initial success, there are issues to overcome with it.

A team led by Georgia Tech is attempting to do just that.  They’re looking to train Watson to get better at returning answers to specific queries.

 

 

Why The Internet of Things will drive a Knowledge Revolution. — from linkedin.com by David Evans

Excerpt:

As these machines inevitably connect to the Internet, they will ultimately connect to each other so they can share and collaborate on their own findings. In fact, in 2014 machines got their own “World Wide Web,” called RoboEarth, in which to share knowledge with one another. …
The implications of all of this are at minimum twofold:

  • The way we generate knowledge is going to change dramatically in the coming years.
  • Knowledge is about to increase at an exponential rate.

What we choose to do with this newfound knowledge is of course up to us. We are about to face some significant challenges at scales we have yet to experience.
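The core idea above — one machine learns something once, and every connected machine can then use it — can be sketched as a shared knowledge store. This is purely illustrative of the concept; RoboEarth's real system was a cloud database of maps, object models, and action recipes, and the names below are my own.

```python
class SharedKnowledgeBase:
    """Toy shared store: any robot can publish a learned skill,
    and every other robot can query it without relearning."""

    def __init__(self):
        self._skills = {}

    def publish(self, robot_id, task, recipe):
        self._skills[task] = {"learned_by": robot_id, "recipe": recipe}

    def query(self, task):
        return self._skills.get(task)  # None if no robot has learned it yet

kb = SharedKnowledgeBase()
kb.publish("robot-A", "open_door", ["grip handle", "rotate", "pull"])
# robot-B benefits from robot-A's experience without relearning:
print(kb.query("open_door")["recipe"])
```

This is why the article's "exponential" claim follows: knowledge gained by one machine no longer has to be re-acquired by each of the others, so the pool compounds as more machines contribute.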

 

 

Drone squad to be launched by Tokyo police — from bbc.com

Excerpt:

A drone squad, designed to locate and – if necessary – capture nuisance drones flown by members of the public, is to be launched by police in Tokyo.

 

 

An advance in artificial intelligence rivals human abilities — from todayonline.com by John Markoff

Excerpt:

NEW YORK — Computer researchers reported artificial-intelligence advances [on Dec 10] that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

 

 

Somewhat related:

Novo Nordisk, IBM Watson Health to create ‘virtual doctor’ — from wsj.com by Denise Roland
Software could dispense treatment advice for diabetes patients

Excerpt:

Novo Nordisk A/S is teaming up with IBM Watson Health, a division of International Business Machines Corp., to create a “virtual doctor” for diabetes patients that could dispense treatment advice such as insulin dosage.

The Danish diabetes specialist hopes to use IBM’s supercomputer platform, Watson, to analyze health data from diabetes patients to help them manage their disease.

 

 

Why Google’s new quantum computer could launch an artificial intelligence arms race — from washingtonpost.com

 

 

 

8 industries robots will completely transform by 2025 — from techinsider.io

 

 

 

Addendums on 12/17/15:

Russia and China are building highly autonomous killer robots — from businessinsider.com.au by Danielle Muoi

Excerpt:

Russia and China are creating highly autonomous weapons, more commonly referred to as killer robots, and it’s putting pressure on the Pentagon to keep up, according to US Deputy Secretary of Defense Robert Work. During a national-security forum on Monday, Work said that China and Russia are heavily investing in a roboticized army, according to a report from Defense One.

Your Algorithmic Self Meets Super-Intelligent AI — from techcrunch.com by Jarno M. Koponen

Excerpt:

At the same time, your data and personalized experiences are used to develop and train the machine learning systems that are powering the Siris, Watsons, Ms and Cortanas. Be it a speech recognition solution or a recommendation algorithm, your actions and personal data affect how these sophisticated systems learn more about you and the world around you.

The less explicit fact is that your diverse interactions — your likes, photos, locations, tags, videos, comments, route selections, recommendations and ratings — feed learning systems that could someday transform into superintelligent AIs with unpredictable consequences.

As of today, you can’t directly affect how your personal data is used in these systems.

 

Addendum on 12/20/15:

 

Addendum on 12/21/15:

  • Facewatch ‘thief recognition’ CCTV on trial in UK stores — from bbc.com
    Excerpts (emphasis DSC):
    Face-recognition camera systems should be used by police, he tells me. “The technology’s here, and we need to think about what is a proportionate response that respects people’s privacy,” he says.

“The public need to ask themselves: do they want six million cameras painted red at head height looking at them?”

 

Addendum on 1/13/16:

 

The North Face uses IBM’s Watson to make online shopping smarter — from thestreet.com by Rebecca Borison

Excerpt:

Aiming to solve one of e-commerce’s challenges of not offering personalized service, VF Corp’s “The North Face” on Monday launched a new online shopping tool using IBM’s Watson artificial intelligence system.

The tool, which is powered by Fluid and called XPS, guides a consumer through the online store to better find what he or she is looking for.

E-commerce today doesn’t generally give the personal attention a consumer might get when he walks into a store and is greeted by a human being. This new tool seeks to address that challenge.

 

NorthFace-Watson-Dec2015

 

 

Watch this robot solve a Rubik’s Cube in a world record 2.39 seconds — from singularityhub.com by Jason Dorrier


 

 


 

 

 


 

 

 

Build an automatic cookie decorating machine with LEGO Mindstorms — from lifehacker.com by Patrick Allan

Excerpt:

Decorating cookies by hand can be a pleasant activity, but with a LEGO Mindstorms set, you can crank out a bunch of perfectly iced cookies in no time at all.

 

 

World’s First Holographic Navigation System — from machinetomachinemagazine.com

Excerpt:

PARIS – The United States is the first commercial market to receive two innovative telematics devices that apply aerospace technology to land navigation. WayRay Navion is an augmented reality navigation system that projects holographic GPS imagery and driver notifications onto the windshield of a car, a first-of-its-kind for the automobile aftermarket. WayRay Element is a smart tracker that can be plugged into the diagnostics port of any automobile for monitoring driver performance, safety and fuel efficiency. The solutions arrive courtesy of WayRay, a Swiss startup dedicated to the advancement of connected car telematics, and Orange Business Services, a B2B global telecom operator and IT solutions integrator.

 

 

 

Will Lynda.com/LinkedIn.com pursue this powerful vision with an organization like IBM? If so, look out!

From DSC:
Back in July of 2012, I put forth a vision that I called Learning from the Living [Class]Room

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

It’s a vision that involves a multitude of technologies — technologies and trends that we continue to see being developed and ones that could easily converge in the not-too-distant future to offer us some powerful opportunities for lifelong learning! 

Consider that it won’t be very long before learners will be able to reinvent themselves throughout their lifetimes, for a very affordable price — while taking à la carte courses from some of the best professors, trainers, leaders, and experts throughout the world, all from the comfort of their living rooms. (Not to mention tapping into streams of content that will be available on such platforms.)

So when I noticed that Lynda.com now has a Roku channel for the big screen, it got my attention.

 

lyndadotcom-roku-channel-dec2015

 

Let’s add a few more pieces to the puzzle, given that some other relevant trends are developing quite nicely:

  • tvOS-based apps are now possible — and already there are over 2,600 of them, and it’s only been a month or so since Apple made this new platform available to the masses
  • Now, let’s add the ability to take courses online via a virtual reality interface — globally, at any time; VR is poised to have some big years in 2016 and 2017!
  • Lynda.com and LinkedIn.com’s fairly recent merger and their developing capabilities to offer micro-credentials, badges, and competency-based education (CBE) — while keeping track of the courses that a learner has taken
  • The need for lifelong learning is now a requirement, as we need to continually reinvent ourselves — especially given the increasing pace of change and as complete industries are impacted (broadsided), almost overnight
  • Big data, algorithms, and artificial intelligence (AI) continue to pick up steam; for example, consider the cognitive computing capabilities being developed in IBM’s Watson — which should be able to deliver personalized digital playlists and likely some level of intelligent tutoring as well
  • Courses could be offered at a fraction of the cost, as MOOC-sized classes could distribute the costs over a greater # of people and back end systems could help grade/assess the students’ work; plus the corporate world continues to use MOOCs to cost-effectively train their employees across the globe (MOOCs would thrive on such a tvOS-based platform, whereby students could watch lectures, demonstrations, and simulations on the big screen and then communicate with each other via their second screens*)
  • As the trends of machine-to-machine communications (M2M) and the Internet of Things (IoT) pick up, relevant courses/modules will likely be instantly presented to people to learn about a particular topic or task.  For example, I purchased a crib and I want to know how to put it together. The chip in the crib communicates to my Smart TV or to my augmented reality glasses/headset, and then a system loads up some multimedia-based training/instructions on how to put it together.
  • Streams of content continue to be developed and offered — via blogs, via channels like Periscope and Meerkat, via social media-based channels, and via other channels — and these streams of multimedia-based content should prove to be highly useful to individual learners as well as for communities of practice
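The crib scenario in the M2M/IoT bullet above can be sketched as a simple lookup from a device's self-announced identifier to learning content that a nearby screen loads automatically. Every name, model ID, and URL below is hypothetical, invented only to make the flow concrete.

```python
# Hypothetical mapping from a product's self-announced model ID to
# learning content a smart TV or AR headset could load automatically.
CONTENT_CATALOG = {
    "crib-model-x100": "https://example.com/tutorials/crib-assembly",
    "thermostat-t9": "https://example.com/tutorials/thermostat-setup",
}

def on_device_announced(model_id):
    """Called when a chip in a product announces itself to a nearby
    display (e.g., over a short-range radio); returns the tutorial
    that display might then present."""
    return CONTENT_CATALOG.get(model_id, "no tutorial available")

print(on_device_announced("crib-model-x100"))
```

The interesting part is not the lookup itself but the trigger: the object you just bought, rather than a search box, initiates the delivery of instruction at the moment of need.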

Anyway, these next few years will be packed with change — the pace of which will likely take us by surprise. We need to keep our eyes upward and outward — peering into the horizons rather than looking downwards; doing so should reduce the chance of our getting broadsided!

*It’s also possible that AR and VR will create
a future whereby we only need 1 “screen”

 

The pace has changed significantly and quickly

 

 

Addendum:
After I wrote/published the item above…it was interesting to then see the item below:

IBM opens Watson IoT Global Headquarters, extends power of cognitive computing to a connected world — from finance.yahoo.com
1000 Munich-based experts to drive IoT and industry 4.0 innovation
Launches eight new IoT client experience centers worldwide
Introduces Watson API Services for IoT on the IBM Cloud

Excerpt:

MUNICH, Dec. 15, 2015 /PRNewswire/ — IBM (NYSE: IBM) today announced the opening of its global headquarters for Watson Internet of Things (IoT), launching a series of new offerings, capabilities and ecosystem partners designed to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT.  These new offerings will be available through the IBM Watson IoT Cloud, the company’s global platform for IoT business and developers.

 

 

7 unexpected virtual reality use cases — from techcrunch.com by Andrew Thomson

Excerpt:

How VR will be used, and the changes that the technology will make to the day-to-day lives of regular people is still a matter of speculation. Gamers are warming up their trigger fingers for a new level of immersive gaming, and the field of entertainment will be transformed by the changes. But use cases in other industries could be just as transformative.

Indeed, some amazing and inventive new ways to use VR technology are already appearing that could dramatically impact people in their daily lives.

 

UnexpectedCases-VR-TechcrunchDec2015

 

Also see:

 

Unimersive-Dec2015

 

Also see:

 

zspace-dec2015

 

Also see:

Excerpt:

“VR is an individual experience. We’re looking at less obvious VR applications.”

One of these is education. To which end, Mr Hirsch took me into another room to watch a two-minute educational VR video Zypre have made with the Smithsonian National Air and Space Museum, along with some of the Avatar team, using AMD’s technology. It depicts the Wright brothers’ 1903 flight at Kitty Hawk, North Carolina.

The film took six months to make, with computer-generated photorealistic visuals and every detail overseen by historians. I watched it on a prototype of the much-heralded Oculus Rift VR headset, expected out early next year.

It was several times more startling than the VR footage I described in April. It was more than virtual reality; it was pretty much . . . reality.

It’s not enough to say that, standing in a stuffy, darkened room in LA, I truly felt I was on a beach in North Carolina in 1903.

It was way more vivid than that. I even thought I felt the sea breeze in my face, then the backdraught from the propeller of the brothers’ flying machine. I shouted out that I could feel the wind and the techies surrounding me laughed. Apparently, a lot of people say that. It seems the brain is so fooled that it extrapolates and adds effects it thinks should be there. I have to confess, my American history is so sketchy I didn’t even know the flight was on a beach.

 

Also see:

 

MShololens-dec2015

 

See more information re: the
Microsoft HoloLens Development Edition.

 

Colleges begin to take virtual reality seriously — from ecampusnews.com by Abi Mandelbaum

Excerpt:

Looking to the future, as adoption of VR increases among universities, the technology will be used in more innovative ways.

Currently, universities offer options for distance learners to take online classes; soon, colleges will use VR technology to fully immerse students in the college experience, allowing them to feel present in a classroom discussion or lecture, regardless of distance. Universities can either record these experiences for later use, or use live-streaming virtual reality—a technology that has only recently begun to catch on with major brands and institutions.

The combination of all this technology could result in something like a virtual Rhodes Scholar program, where students can take part in live classes with top professors at universities all over the world, without having to leave their regular school.

The technology will also offer an invaluable means of learning for those in the social sciences and medical fields, which often require “hands-on” experiences. Eventually, VR technology will allow students to be placed in the shoes of patients to give them insight into how they experience the world. Organizations like the Virtual Human Interaction Lab are already studying the psychological effects of VR as related to empathy. This could allow future public policy students to experience first-hand what it means to be a refugee in a third-world country, or optometry students to experience what life is like for the vision-impaired.

Outside of lectures and hands-on learning, VR also offers schools the chance to immerse students in important cultural events. Educators are already using video to enhance lessons; in the near future, VR will be used in a similar manner.

 

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want.  Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then to be actively creating that future — instead of just letting things happen because they can happen. (i.e., just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


DigitalEthics-GerdLeonhard-Oct2015

 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 

RobotRevolution-Nov2015

 

 

 

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘leber congenital amaurosis’.

 

 

 

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 

GartnerPredicts-Oct2015

 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org by

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well, to know that, we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 

 

 

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 

Related items:

 

Addendums:

 

robots-saying-no

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

The Current State of Machine Intelligence — from Shivon Zilis; with thanks to Ronald van Loon for posting this on Twitter

Excerpt:

I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact.

The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves.

 

 

 

Also see:

 

machinelearningconference-dec2015

 
© 2025 | Daniel Christian