Explosive IoT growth could produce skills shortage — from rtinsights.com by Joe McKendrick

Excerpts:

CIO’s Sharon Florentine took a look at data from global freelance marketplace Upwork, based on annual job posting growth and skills demand. The following are the leading IoT skills Florentine identified that will be in demand as the IoT proliferates, along with the level of growth each saw over a one-year period:

Circuit design (231% growth): Builds miniaturized circuit boards for sensors and devices.

Microcontroller programming (225% growth): Writes code that provides intelligence to microcontrollers, the embedded chips within IoT devices.

AutoCAD (216% growth): Designs the devices.

Machine learning (199% growth): Writes the algorithms that recognize data patterns within devices.

Security infrastructure (194% growth): Identifies and integrates the standards, protocols and technologies that protect devices, as well as the data inside.

Big data (183% growth): Data scientists and engineers “who can collect, organize, analyze and architect disparate sources of data.” Hadoop and Apache Spark are two areas with particularly strong demand.
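To make two of the skills above more concrete (microcontroller programming and big data), here is a minimal, hypothetical Python sketch of an IoT device publishing sensor readings over MQTT, a lightweight protocol widely used for device telemetry. The broker hostname, topic name, and simulated sensor are placeholders of my own, not details from the article.

```python
import json
import random
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "broker.example.com"            # placeholder broker hostname
TOPIC = "sensors/room42/temperature"     # placeholder topic


def read_sensor() -> float:
    """Stand-in for reading an ADC pin on a real microcontroller."""
    return 20.0 + random.uniform(-0.5, 0.5)


client = mqtt.Client()                   # paho-mqtt 1.x-style constructor
client.connect(BROKER, 1883, keepalive=60)

while True:
    # Package a reading with a timestamp and publish it for downstream analytics.
    payload = json.dumps({"ts": time.time(), "celsius": read_sensor()})
    client.publish(TOPIC, payload, qos=0)
    time.sleep(5)
```

On the analytics side, streams of readings like these typically land in pipelines built on tools such as Apache Spark or Hadoop, which is where the big-data skills in the list come in.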

 

From DSC:
(With thanks to Woontack Woo for posting this via his paper.li entitled “#AR #CAMAR for Ubiquitous VR”)

Check this out!

On December 3rd, the Legend of Sword opera comes to Australia — but this is no ordinary opera!  It is a “holographic sensational experience!” Set designers and those involved with drama will need to check this out. This could easily be the future of set design!

But not only that: let’s move this same concept over to the world of learning. What might augmented reality do for how our learning spaces look and act in the future? What new affordances and experiences could they provide for us? This needs to be on our radars.

Some serious engagement might be heading our way!

 

 

Per this web page:

Legend of Sword 1 is a holographic sensational experience that has finished its second tour in China: a Chinese legend for the ages to amaze and ignite your imagination. This visually spectacular staging comes to Australia for the first time, on Saturday 3rd December only. Performed in Chinese with English subtitles.

Legend of Sword and Fairy 1 is based on a hit video game in China. Through the hard work of its renowned production team, the performance brings the game’s beautiful fantasy to the stage and lets the audience feel as if they have stepped into an eastern fairy world. With special effects, an olfactory experience, and actors performing and interacting with the audience at close distance, that fairy world is realised on stage. It is not only a play with beautiful scenes but is also full of elements of oriental-style adventure. The theatre experience offers much more than a show: the excitement of love and adventure.

 

Per this web page:

Legend of Sword and Fairy 1 premiered in April 2015 at Shanghai Cultural Plaza, where it set off a frenzy, relying on polished visuals and a 5D, all-around sensory experience. Because its fantasy theme was matched with top-tier visual presentation, Legend of Sword and Fairy 1 immediately became a hot topic in Shanghai. Only halfway through its initial run of just 10 performances, its Weibo topic had already exceeded 100 million hits.

So far, Legend of Sword and Fairy 1 has finished its second tour through a number of cities in China, including Beijing, Chongqing, Chengdu, Nanjing, Xiamen, Qingdao, Shenyang, Dalian, Wuxi, Ningbo, Wenzhou, Xi’an, Shenzhen, Dongguan, Huizhou, Zhengzhou, Lishui, Ma’anshan, Kunshan, and Changzhou, among others.

 

 

[Image: Legend of Sword, China/Australia tour, 2016]

 

 

 

Google Earth lets you explore the planet in virtual reality — from vrscout.com by Eric Chevalier

 

 

 

How virtual reality could change the way students experience education — from edtechmagazine.com by Andrew Koke and Anthony Guest-Scott
High-impact learning experiences may become the norm, expanding access for all students.

Excerpt:

The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in in-app purchases per day, and $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.

The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.

“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”

 

 

 

Visionary: How 4 institutions are venturing into a new mixed reality — from ecampusnews.com by Laura Devaney
Mixed reality combines virtual and augmented realities for enhanced learning experiences, and institutions are already implementing it.

Excerpt:

Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.

At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.

At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.

 

 

 

ZapBox brings room-scale mixed reality to the masses — from slashgear.com by JC Torres

Excerpt:

As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable investment, financially speaking. It is just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!

 

 

 

 

Shakespeare’s Tempest gets mixed reality makeover — from bbc.com by Jane Wakefield

 

[Image: Intel’s flying whale at CES 2014] Intel’s flying whale was the inspiration for the technology in The Tempest

 

 

 

[Image: EON Reality in education, November 2016]

 

 

 

Excerpts from the 9/23/16 School Library Journal Webcast:

[Image: VR in education, THE Journal, September 2016]

 

 

 

 

 

[Image: AR and VR, The eLearning Guild report, Fall 2016]

 

Table of Contents

  • Introduction
  • New Technologies: Do They Really Change Learning Strategies? — by Joe Ganci and Sherry Larson
  • Enhanced Realities: An Opportunity to Avoid the Mistakes of the Past — by David Kelly
  • Let the Use Case Drive What Gets Augmented—Not the Other Way Around — by Chad Udell
  • Augmented Reality: An Augmented Perspective — by Alexander Salas
  • Virtual Reality Will Be the Perfect Immersive Learning Environment — by Koreen Pagano
  • Will VR Succeed? Viewpoint from Within a Large Corporation — by John O’Hare
  • Will VR Succeed? Viewpoint from Running a VR Start-up — by Ishai Albert Jacob

 

 

 

From DSC:
I think Technical Communicators have a new pathway to pursue…check out this piece from Scope AR and Caterpillar.

 

[Image: Scope AR and Caterpillar, November 2016]

 

 

 

Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

 

But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

 

 

Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors), ” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”

 

[Image: Gartner’s Top 10 Strategic Technology Trends for 2017]

 

 

 

 

[Image: Google A.I. Experiments, November 2016]

 

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.

 

Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

 

Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
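For readers who want to see what using the service looks like from code, below is a small sketch of calling the public Cloud Translation REST API (the v2 endpoint) from Python. This is a hedged illustration rather than Google’s own sample: the neural models described above run on Google’s servers, and the API key is a placeholder you would create in your own Google Cloud project.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: create one in your own Google Cloud project
URL = "https://translation.googleapis.com/language/translate/v2"

response = requests.post(
    URL,
    params={"key": API_KEY},
    json={
        "q": "The system translates whole sentences at a time.",
        "target": "fr",      # translate into French
        "format": "text",
    },
)
response.raise_for_status()

# The v2 endpoint returns {"data": {"translations": [{"translatedText": ...}]}}
print(response.json()["data"]["translations"][0]["translatedText"])
```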

 

 

‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”

 

 

What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.
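The seven steps read like a pipeline, and a rough code sketch can make that structure concrete. To be clear, this is purely illustrative and not from the article; every function name, field, and threshold below is invented.

```python
from typing import Dict, List

def perceive(sensors: Dict[str, float]) -> Dict[str, float]:
    """1. Perception: describe what's happening now."""
    return dict(sensors)

def notify(context: Dict[str, float], watched: List[str]) -> List[str]:
    """2. Notification: surface only what the user asked to know."""
    return [f"{name} = {context[name]}" for name in watched if name in context]

def suggest(context: Dict[str, float]) -> List[str]:
    """3. Suggestion: recommend an action (invented threshold)."""
    return ["Schedule a check-up"] if context.get("heart_rate", 0) > 100 else []

def predict(context: Dict[str, float]) -> str:
    """5. Prediction: tell the user what to expect next."""
    return "elevated risk" if context.get("heart_rate", 0) > 100 else "normal"

# One pass through part of the loop; automation (step 4), prevention (step 6),
# and situational awareness (step 7) would hook into the same shared context.
context = perceive({"heart_rate": 104.0, "temperature": 36.8})
print(notify(context, watched=["heart_rate"]))
print(suggest(context))
print(predict(context))
```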

 

 

Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach their goal of passing the test by March 2022, so the project is being brought to an end.

 

 

“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference

 

Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.

 

 

 

The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that, more on my favorite place below.

 

 

 

How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. A five-year-old online clothing retailer, its success in this area reveals how AI and people can work together, with each side focused on its unique strengths.

 

 

 

 

[Image: Higher education and AI, The Washington Post, October 2016]

 

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.

 

As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.

 

Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.

 

 

From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

Higher education needs to rethink our offerings…or someone else will.

 

[Image: The Living [Class] Room, by Daniel Christian, July 2012: a second device used in conjunction with a Smart/Connected TV]

 

 

 

 

From DSC:
If we had more beacons on our campus (a Christian liberal arts college), I could see us offering a variety of things in new ways:

For example, we might place beacons along the main walkways of our campus so that, as we approach them, pieces of advice or teaching appear in an app on our mobile devices. Examples could include:

  • Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
  • Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
  • Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
  • Pieces of scripture, with links to Biblegateway.com or other sites
  • Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
  • Etc.

A person could turn the app’s notifications on or off at any time.  The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
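Technically, most of this idea is a content lookup keyed on which beacon the phone is currently near. Below is a minimal, hypothetical sketch of that lookup in Python; the beacon identifiers, content, and function names are all invented for illustration, and a real campus app would receive the "beacon in range" callback from the phone's Bluetooth/location APIs rather than the simulated call shown here.

```python
from typing import Dict, Optional, Tuple

BeaconKey = Tuple[str, int, int]  # (proximity UUID, major, minor), all invented here

CONTENT: Dict[BeaconKey, str] = {
    ("campus-uuid", 1, 10): "Micro-tip on prayer from Augustine (rotates weekly).",
    ("campus-uuid", 1, 20): "This week's campus-wide Bible study question to reflect on.",
    ("campus-uuid", 2, 5):  "Covenant Fine Arts Center: tonight's performances and the current gallery show.",
}

def tip_for(beacon: BeaconKey, notifications_on: bool = True) -> Optional[str]:
    """Return the content for the beacon the phone is near, honoring the user's opt-out."""
    if not notifications_on:
        return None
    return CONTENT.get(beacon)

# Simulated "beacon in range" callback from the phone's Bluetooth/location APIs:
print(tip_for(("campus-uuid", 2, 5)))
```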

 

 

 

IBM Launches Experimental Platform for Embedding Watson into Any Device — from finance.yahoo.com

Excerpt:

SAN FRANCISCO, Nov. 9, 2016 /PRNewswire/ — IBM (NYSE: IBM) today unveiled the experimental release of Project Intu, a new, system-agnostic platform designed to enable embodied cognition. The new platform allows developers to embed Watson functions into various end-user device form factors, offering a next generation architecture for building cognitive-enabled experiences.

Project Intu, in its experimental form, is now accessible via the Watson Developer Cloud and also available on Intu Gateway and GitHub.

 

 

IBM and Topcoder Bring Watson to More than One Million Developers — from finance.yahoo.com

Excerpt:

SAN FRANCISCO, Nov. 9, 2016 /PRNewswire/ — At the IBM (NYSE: IBM) Watson Developer Conference, IBM announced a partnership with Topcoder, the premier global software development community comprised of more than one million designers, developers, data scientists, and competitive programmers, to advance learning opportunities for cognitive developers who are looking to harness the power of Watson to create the next generation of artificial intelligence apps, APIs, and solutions.  This partnership also benefits businesses that gain access to an increased talent pool of developers through the Topcoder Marketplace with experience in cognitive computing and Watson.

 

 

5 Ways Artificial Intelligence Is Shaping the Future of E-commerce — from entrepreneur.com by Sheila Eugenio
Paradoxically for a machine, AI’s greatest strength may be in creating a more personal experience for your customer. From product personalization to virtual personal shoppers.

Excerpt:

Here are three ways AI will impact e-commerce in the coming years:

  1. Visual search.
  2. Offline to online worlds merge.
  3. Personalization.

 

 

IBM to invest $3 billion to groom Watson for the Internet of Things — from healthcareitnews.com by Bernie Monegain
As part of the project, Big Blue will spend $200 million on a global Watson IoT headquarters in Munich

 

 

 

Man living with machine: IBM’s AI-driven Watson is learning quickly, expanding to new platforms — from business.financialpost.com by Lynn Greiner

Excerpt:

Getting kids interested in STEM subjects is an ongoing challenge, and Teacher Advisor with Watson, a free tool, will help elementary school teachers match materials with student needs. In its first phase, it’s being used by 200 teachers, assisting them in creating math lessons that engage students and help them learn. The plan is to roll it out to all U.S. elementary schools by year end. As time goes on, Watson will learn from teacher feedback and improve its recommendations. There is, Rometty said, an opportunity to also build in professional development resources.

 

 

Oxford University’s lip-reading AI is more accurate than humans, but still has a way to go — from qz.com by Dave Gershgorn

Excerpt:

Even professional lip-readers can figure out only 20% to 60% of what a person is saying. Slight movements of a person’s lips at the speed of natural speech are immensely difficult to reliably understand, especially from a distance or if the lips are obscured. And lip-reading isn’t just a plot point in NCIS: It’s an essential tool to understand the world for the hearing-impaired, and if automated reliably, could help millions.

A new paper (pdf) from the University of Oxford (with funding from Alphabet’s DeepMind) details an artificial intelligence system, called LipNet, that watches video of a person speaking and matches text to the movement of their mouth with 93.4% accuracy.

 

 

A school bus, virtual reality, & an out-of-this-world journey — from goodmenproject.com
“Field Trip To Mars” is the barrier-shattering outcome of an ambitious mission to give a busload of people the same Virtual Reality experience – going to Mars.

Excerpt:

Inspiration was Lockheed‘s goal when it asked its creative resources, led by McCann, to create the world’s first mobile group Virtual Reality experience. As one creator notes, VR now is essentially a private, isolating experience. But wouldn’t it be cool to give a busload of people the same, simultaneous VR experience? And then – just to make it really challenging – put the whole thing on wheels?

“Field Trip To Mars” is the barrier-shattering outcome of this ambitious mission.

 

From DSC:
This is incredible! Very well done. The visual experience tracks the speed of the bus and even its turns.

 

 

 

[Image: Lockheed Martin’s “Field Trip to Mars,” Fall 2016]

 

 

Ed Dept. Launches $680,000 Augmented and Virtual Reality Challenge — from thejournal.com by David Nagel

Excerpt:

The United States Department of Education (ED) has formally kicked off a new competition designed to encourage the development of virtual and augmented reality concepts for education.

Dubbed the EdSim Challenge, the competition is aimed squarely at developing students’ career and technical skills — it’s funded through the Carl D. Perkins Career and Technical Education Act of 2006 — and calls on developers and ed tech organizations to develop concepts for “computer-generated virtual and augmented reality educational experiences that combine existing and future technologies with skill-building content and assessment. Collaboration is encouraged among the developer community to make aspects of simulations available through open source licenses and low-cost shareable components. ED is most interested in simulations that pair the engagement of commercial games with educational content that transfers academic, technical, and employability skills.”

 

 

 

Virtual reality boosts students’ results — from raconteur.net
Virtual and augmented reality can enable teaching and training in situations which would otherwise be too hazardous, costly or even impossible in the real world

Excerpt:

More recently, though, the concept described in Aristotle’s Nicomachean Ethics has been bolstered by further scientific evidence. Last year, a University of Chicago study found that students who physically experience scientific concepts, such as the angular momentum acting on a bicycle wheel spinning on an axle that they’re holding, understand them more deeply and also achieve significantly improved scores in tests.

 

 

 

 

 

 

 

Virtual and augmented reality are shaking up sectors — from raconteur.net by Sophie Charara
Both virtual and augmented reality have huge potential to leap from visual entertainment to transform the industrial and service sectors

 

 

 

 

Microsoft’s HoloLens could power tanks on a battlefield — from theverge.com by Tom Warren

Excerpt:

Microsoft might not have envisioned its HoloLens headset as a war helmet, but that’s not stopping Ukrainian company LimpidArmor from experimenting. Defence Blog reports that LimpidArmor has started testing military equipment that includes a helmet with Microsoft’s HoloLens headset integrated into it.

The helmet is designed for tank commanders to use alongside a Circular Review System (CRS) of cameras located on the sides of armored vehicles. Microsoft’s HoloLens gathers feeds from the cameras outside to display them in the headset as a full 360-degree view. The system even includes automatic target tracking, and the ability to highlight enemy and allied soldiers and positions.

 

 

 

Bring your VR to work — from itproportal.com by Timo Elliott, Josh Waddell
With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer.

Excerpt:

With all the hype, there’s surprisingly little discussion of the latent business value which VR and AR offer — and that’s a blind spot that companies and CIOs can’t afford to have. It hasn’t been that long since consumer demand for the iPhone and iPad forced companies, grumbling all the way, into finding business cases for them. Gartner has said that the next five to ten years will bring “transparently immersive experiences” to the workplace. They believe this will introduce “more transparency between people, businesses, and things” and help make technology “more adaptive, contextual, and fluid.”

If digitally enhanced reality generates even half as much consumer enthusiasm as smartphones and tablets, you can expect to see a new wave of consumerisation of IT as employees who have embraced VR and AR at home insist on bringing it to the workplace. This wave of consumerisation could have an even greater impact than the last one. Rather than risk being blindsided for a second time, organisations would be well advised to take a proactive approach and be ready with potential business uses for VR and AR technologies by the time they invade the enterprise.

 

In Gartner’s latest emerging technologies hype cycle, Virtual Reality is already on the Slope of Enlightenment, with Augmented Reality following closely.

 

 

 

VR’s higher-ed adoption starts with student creation — from edsurge.com by George Lorenzo

Excerpt:

One place where students are literally immersed in VR is at Carnegie Mellon University’s Entertainment Technology Center (ETC). ETC offers a two-year Master of Entertainment Technology program (MET) launched in 1998 and cofounded by the late Randy Pausch, author of “The Last Lecture.”

MET starts with an intense boot camp called the “immersion semester,” in which students take a Building Virtual Worlds (BVW) course and a leadership course, along with courses in improvisational acting and visual storytelling. Pioneered by Pausch, BVW challenges students in small teams to create virtual reality worlds quickly over a period of two weeks, culminating in a presentation festival every December.

 

 

Apple patents augmented reality mapping system for iPhone — from appleinsider.com by Mikey Campbell
Apple on Tuesday was granted a patent detailing an augmented reality mapping system that harnesses iPhone hardware to overlay visual enhancements onto live video, lending credence to recent rumors suggesting the company plans to implement an iOS-based AR strategy in the near future.

 

 

A bug in the matrix: virtual reality will change our lives. But will it also harm us? — from theguardian.stfi.re
Prejudice, harassment and hate speech have crept from the real world into the digital realm. For virtual reality to succeed, it will have to tackle this from the start

 

 

 

The latest Disney Research innovation lets you feel the rain in virtual reality — from haptic.al by Deniz Ergurel

Excerpt:

Virtual reality is a combination of life-like images, effects and sounds that creates an imaginary world in front of our eyes.

But what if we could also imitate more complex sensations like the feeling of falling rain, a beating heart or a cat walking? What if we could distinguish, between a light sprinkle and a heavy downpour in a virtual experience?

Disney Research, a network of research laboratories supporting The Walt Disney Company, has announced the development of a 360-degree virtual reality application offering a library of feel effects and full body sensations.

 

 

Relive unforgettable moments in history through the Timelooper app: virtual reality on your smartphone.

 

[Image: Timelooper, November 2016]

 

 

Literature class meets virtual reality — from blog.cospaces.io by Susanne Krause
Not every student finds it easy to let a novel come to life in their imagination. Could virtual reality help? Tiffany Capers gave it a try: She let her 7th graders build settings from Lois Lowry’s “The Giver” with CoSpaces and explore them in virtual reality. And: they loved it.

 

 

 

 

[Image: Learning vocabulary in VR, November 2016]

 

 

 

James Bay students learn Cree syllabics in virtual reality — from cbc.ca by Celina Wapachee and Jaime Little
New program teaches syllabics inside immersive world, with friendly dogs and archery

 

 

 

VRMark will tell you if your PC is ready for Virtual Reality — from engadget.com by Sean Buckley
Benchmark before you buy.

 

 

Forbidden City Brings Archaeology to Life With Virtual Reality — from wsj.com

 

 

holo.study

[Image: HoloLens demos, November 2016]

 

 

Will virtual reality change the way I see history? — from bbc.co.uk

 

 

 

Scientists can now explore cells in virtual reality — from mashable.com by Ariel Bogle

Excerpt:

After generations of peering into a microscope to examine cells, scientists could simply stroll straight through one.

Calling his project the “stuff of science fiction,” director of the 3D Visualisation Aesthetics Lab at the University of New South Wales (UNSW) John McGhee is letting people come face-to-face with a breast cancer cell.

 

 

 

 

Can Virtual Reality Make Us Care More? — from huffingtonpost.co.uk by Alex Handy

Excerpt:

In contrast, VR has been described as the “ultimate empathy machine.” It gives us a way to virtually put ourselves in someone else’s shoes and experience the world the way they do.

 

 

 

Stanford researchers release virtual reality simulation that transports users to ocean of the future — from news.stanford.edu by Rob Jordan
Free science education software, available to anyone with virtual reality gear, holds promise for spreading awareness and inspiring action on the pressing issue of ocean acidification.

 

 

 

 

The High-end VR Room of the Future Looks Like This — from uploadvr.com by Sarah Downey

Excerpt:

This isn’t meant to be an exhaustive list, but if I missed something major, please tell me and I’ll add it. Also, please reach out if you’re working on anything cool in this space: sarah(at)accomplice(dot)co.

Hand and finger tracking, gesture interfaces, and grip simulation:

AR and VR viewers:

Omnidirectional treadmills:

Haptic feedback bodysuits:

Brain-computer interfaces:

Neural plugins:

  • The Matrix (film)
  • Sword Art Online (TV show)
  • Neuromancer (novel)
  • Total Recall (film)
  • Avatar (film)

3D tracking, capture, and/or rendering:

Eye tracking:

 VR audio:

Scent creation:

 

 

 

How chatbots will change the face of campus technology — by Jami Morshed

Excerpt:

In the first few months of the new semester hubbub, what if there were an assistant at the beck and call of students to help them navigate the college process? While campus faculty and staff are likely too busy during those first few days to answer all the questions on students’ and parents’ minds, chatbots – akin to Siri, Cortana, and Alexa – could provide the ideal digital assistant to make not only those first few days run smoothly, but also the student’s entire time on campus.

From applying to college, to arriving on campus, declaring a major, signing up for courses, and eventually graduating, there are a multitude of ways bots could help streamline the process, maybe as soon as next semester.

For example, during the application process, a bot could send push notifications to students to remind them about upcoming deadlines, missing documents, or improperly submitted data, and would be available 24/7 to answer student’s questions such as “Am I missing any documents for my application?” or “What’s the deadline for submitting the application fee?”.
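Under the hood, questions like these usually come down to intent matching: the bot classifies the question and then fills in an answer from student records. The sketch below is a toy, hypothetical version of that flow; a production bot would rely on trained natural-language-understanding models and a real student-information system instead of the hard-coded data shown here.

```python
from typing import Dict, List

# Invented records standing in for a real student-information system.
MISSING_DOCS: Dict[str, List[str]] = {
    "student-123": ["High school transcript", "FERPA consent form"],
}

# Keyword lists standing in for a trained intent classifier.
INTENTS: Dict[str, List[str]] = {
    "missing_documents": ["missing", "documents", "transcript"],
    "fee_deadline": ["deadline", "fee", "application fee"],
}

def classify(question: str) -> str:
    """Pick the intent whose keywords overlap the question the most."""
    q = question.lower()
    scores = {intent: sum(kw in q for kw in kws) for intent, kws in INTENTS.items()}
    return max(scores, key=scores.get)

def answer(student_id: str, question: str) -> str:
    intent = classify(question)
    if intent == "missing_documents":
        docs = MISSING_DOCS.get(student_id, [])
        return "You're all set!" if not docs else "Still missing: " + ", ".join(docs)
    if intent == "fee_deadline":
        return "The application fee is due by January 15."  # placeholder date
    return "Let me connect you with the admissions office."

print(answer("student-123", "Am I missing any documents for my application?"))
```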

 

 

The Ultimate Guide to Chatbots: Why they’re disrupting UX and best practices for building — from medium.muz by Joe Toscano

Excerpt (emphasis DSC):

The incredible potential of chatbots lies in the ability to individually and contextually communicate one-to-many.

Right now, contextually communicating with bots isn’t something that’s reasonable to ask across the board, but there are a few that are doing it well, and I believe this type of interaction will be the standard in the future.

While chatbots are still in their infancy in terms of creative potential, it’s still a very exciting time for creatives trying to understand the best way to use this new technology and how to build the best bot possible.

Stop wasting money trying to pull people into your ecosystem. Push your content where your users are already active.

 

 

Google Assistant bot ecosystem will open to all developers by end of 2016 — from venturebeat.com by Khari Johnson

Excerpt:

Developers and the rest of the world will soon be able to make bots that interact with Google Assistant and the new Google devices made public, the company said today in a special presentation in San Francisco.

“The Google Assistant will be our next thriving open ecosystem,” said Scott Huffman, lead engineer of Google Assistant.

The creation of bots for Google Assistant will be possible through Actions on Google, which is due out by early December. A software development kit (SDK) that brings Google Assistant into a range of devices not made by Google is due out next year.

 

 

First Computer to Match Humans in Conversational Speech Recognition — from technologyreview.com
Human-level speech recognition has been a long time coming.

 

 

Chat bots: How talking to your apps became the next big thing — from zdnet.com by Steve Ranger
Apps that can mimic human conversations are one of the hottest technologies around right now. Here’s why.

Excerpt:

Bots are applications that are designed to respond to conversational language. The aim is to create services — whether that’s the ability to order a pizza or to enter a meeting in a calendar — where the dialogue with the app is as natural and apparently unscripted as an interaction you might have with a human.

Chat bots are like narrow versions of digital assistants like Apple’s Siri, Amazon’s Alexa or Google Assistant, designed to perform specific tasks. Interest in bots has rocketed recently, and developers are racing to incorporate them into services built on popular messaging apps and websites to create a form of virtual customer service.

 

 

 

From DSC:
We are hopefully creating the future that we want — i.e., creating the future of our dreams, not nightmares. The 14 items below show that technology is often waaay out ahead of us…and it takes time for other areas of society to catch up (such as making policies and laws, or deciding whether we should even be doing these things in the first place).

Such reflections always make me ask:

  • Who should be involved in some of these decisions?
  • Who is currently getting asked to the decision-making tables for such discussions?
  • How does the average citizen participate in such discussions?

Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.

 


 

Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.

Excerpt:

Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.

 

 

IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season

Excerpt:

With this year’s approaching holiday gift season the rapidly growing “Internet of Things” or IoT—which was exploited to help shut down parts of the Web this past Friday—is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other wi-fi–connected gadgets that connect to the internet to upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.

Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—is an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.

 

 

How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.

Excerpt:

Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.

Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.

“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”

 

 

Can we build AI without losing control over it? — from ted.com by Sam Harris

Description:

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

 

 

Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Excerpt:

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

 

 

World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou

Excerpt:

It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”

The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.

 

 

Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab

Excerpt:

Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.

 

 

 

Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.

Excerpt:

A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.

A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.

In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.

Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?

 

 

“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.

About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.

This course, focused specifically on the ethics of data science, will provide you with the framework to analyze these concerns. This framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.

Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.

 

 

 

Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal

Excerpt:

We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet, others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned system; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and the underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”

 

 

Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry

Excerpt:

2012 saw the advent of gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects, to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.

Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.

It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea could get their hands on it and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.

 

What would happen if gene editing fell into the wrong hands?

 

 

 

Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?

Excerpt:

Japan is ahead of the curve when it comes to this trend, given that its elderly population is the highest of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.

 

 

 

[Image: Partnership on AI, September 2016]

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and of the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.

 

 



Addendum on 11/1/16:



An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?

Excerpt:

Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.

The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?

This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.

 

 

The gigantic list of augmented reality use cases — from uploadvr.com by Sarah Downey

Excerpt:

This gigantic list of future AR use cases should get you reeling with the possibilities. Although most of these are future applications, they’ll arrive within the next 10 to 15 years. Let’s make this a living document: if I missed a major use of AR, comment below and I’ll add it.

Three quick notes before we start. First, let’s clarify the difference between AR and VR.

VR blocks out the real world and immerses the user in a digital experience.

If you put on a headset in your living room and are suddenly transported to a zombie attack scenario, that’s VR.

AR adds digital elements on top of the real world.

If you’re walking down the street in real life and a Dragonite pops up on the sidewalk, that’s AR.

Second, AR is more focused on bridging digital and physical spaces. It accompanies you as you move through the world and augments your activities with information, whereas you typically step out of that world to immerse yourself in VR. When we talk about AR headsets, they’re big helmet-like visors now, but they’re heading toward normal-looking glasses and ultimately contact lenses.

Third, AR and VR are converging into the same thing. Our ability to tell what’s “real” and what’s digital will decrease as graphics continue to improve. An AR sign outside a building could be as real and as significant as a physical one if everyone is using AR tech, everyone sees it when they walk by, and it is persistent in that location. Ultimately, we’ll have hardware that lets us switch between AR and VR modes, with more or less opacity depending on the context. We’re discussing AR and VR distinctly for now because they’re developing separately and on different timelines.

 

 

 

 

 

 

Augmented Reality: Top 100 Influencers and Brands — from onalytica.com

[Image: Top augmented reality brands and influencers, 2016]

 

 

Augmented reality is going from ‘Pokémon Go’ to the factory floor — from businessinsider.com by Matt Weinberger

Excerpt:

That data, readily available from other sources, is just the tip of the iceberg, though, Campbell says. It also overlays a graphic showing how the pieces fit together, how to disassemble it, and what other pieces of the machine that part might connect to. It combines the physical world of the machine part with the digital world of the IoT-gleaned info.

Just being able to look at a machine and say, yes, this is the one that needs work, and this is the work that needs doing, can vastly improve the speed with which work gets done, and thus the operational efficiency of the whole enterprise.

“There’s so much value in visualizing information,” Campbell says.

 

 

 

Virtual Reality Changes Global Engineering Schools — by Ilana Kowarski
Engineering professors say virtual reality contributes to the student experience.

Excerpt:

At engineering schools throughout the world, professors are turning to virtual reality technology in the classroom.

The technology provides 3-D visuals that help engineering students improve their designs, alerting them to flaws before the building process starts.

Engineering schools are researching technologies that could transform the way people communicate and interact by – for instance – allowing people to visit one another in a virtual space if they can’t meet in person. Engineering schools are also exploring medical applications of virtual reality that could save lives, such as 3-D X-rays that allow doctors to peer inside the bodies of patients.

Some engineering schools are taking virtual reality lessons a step further and challenging students to develop new virtual reality programs.

 

 

 

 

 

Broward Students Learning Through Augmented Reality — from nbcmiami.com by Ari Odzer

Excerpt:

We hear about technology’s impact on education all the time. Usually, that means computers, new apps, or 3D printers. Now there’s a new tool that has the promise of revolution, the potential for creating a new paradigm in how students learn. It’s called augmented reality.

 

 

Augmented Reality And Kinect Create Unique Art Experience At Cleveland Museum — from forbes.com by Jennifer Hicks

 

 

 




Some reflections/resources on today’s announcements from Apple

[Image: The new TV app, from Apple’s October 27, 2016 announcements]

 

[Image: The new TV app (second view), from Apple’s October 27, 2016 announcements]

From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the module(s) or course(s) we take being sent in to the following (a rough, illustrative sketch of this flow appears after the list):

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
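
Purely as a thought experiment, and with every name below invented for illustration (none of this is a real LinkedIn, university, or blockchain API), here is a minimal Python sketch of what such a flow might look like: an assessment result is sealed with a content hash (standing in for a blockchain anchor) and then routed to several subscribed credential stores.

# Hypothetical sketch only: learner IDs, store names, and the "blockchain" step are placeholders.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AssessmentResult:
    learner_id: str
    module_id: str
    score: float
    issued_by: str

def seal_record(result: AssessmentResult) -> dict:
    """Attach a tamper-evident hash, standing in for a blockchain anchor."""
    payload = json.dumps(asdict(result), sort_keys=True).encode()
    return {**asdict(result), "hash": hashlib.sha256(payload).hexdigest()}

def route_to_credential_stores(record: dict, stores: list) -> None:
    """Send the sealed record to each subscribed credential store (stubbed out here)."""
    for store in stores:
        print(f"POST {store}: {record['module_id']} -> {record['hash'][:12]}...")

result = AssessmentResult("learner-123", "intro-to-ai-module-4", 0.92, "Example University")
route_to_credential_stores(
    seal_record(result),
    ["linkedin-credentials-db", "university-registry", "cloud-learning-profile"],
)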

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROIs from having thousands of participants each paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 

[Image: Rocket League being shared in Bigscreen]

 

[Image: The Living [Class] Room — by Daniel Christian — July 2012 — a second device used in conjunction with a Smart/Connected TV]

 

 

 


Other items on today’s announcements:


 

 

[Image: The new MacBook Pro, from Apple’s October 27, 2016 announcements]

 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 

[Image: Why deep learning is suddenly changing your life (Fortune), September 2016]

 

Why deep learning is suddenly changing your life — from fortune.com by Roger Parloff

Excerpt:

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.
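
For readers who have not seen one, here is a minimal, illustrative NumPy sketch of what a “deep” neural network literally is: several stacked layers of learned weights with a nonlinearity between them. The weights below are random placeholders, and training (e.g., by backpropagation) is omitted.

# Illustrative sketch only: random weights, no training.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Three weight matrices = two hidden layers plus an output layer ("deep").
layer_sizes = [8, 16, 16, 2]          # input dimension 8, output dimension 2
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)               # hidden layers with nonlinearity
    return x @ weights[-1]            # raw output scores

print(forward(rng.standard_normal(8)))  # two output scores for one input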

 

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

 

 

[Image: The relationship between AI, machine learning, and deep learning (Fortune), fall 2016]

 

 

Graphically speaking:

 

[Image: The relationship between AI, machine learning, and deep learning, fall 2016]

 

 

 

“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

 

 

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
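
To make Ng’s “A to B mapping” framing concrete, here is a minimal, illustrative sketch in Python using scikit-learn: A is an email’s text, B is a spam / not-spam label, and the tiny dataset is invented purely for illustration. A real system would learn the same kind of mapping from a much larger labeled corpus.

# Toy "A to B" mapping: email text in (A), spam/not-spam label out (B).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free cruise now, click here",
    "Meeting moved to 3pm, see agenda attached",
    "Claim your prize money today",
    "Can you review the draft report before Friday?",
]
labels = ["spam", "not spam", "spam", "not spam"]  # B: the desired outputs

# Learn the A -> B mapping from labeled examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Free prize, click now"]))        # likely ['spam']
print(model.predict(["Agenda for Friday's review"]))   # likely ['not spam']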

 

 

 

 

Microsoft just democratized virtual reality with $299 headsets — from pcworld.com by Gordon Mah Ung

Excerpt:

VR just got a lot cheaper.

Microsoft on Wednesday morning said PC OEMs will soon be shipping VR headsets that enable virtual reality and mixed reality starting at $299.

Details of the hardware and how it works were sparse, but Microsoft said HP, Dell, Lenovo, Asus, and Acer will be shipping the headsets timed with its upcoming Windows 10 Creators Update, due in spring 2017.

Despite the relatively low price, the upcoming headsets may have a big advantage over HTC and Valve’s Vive and Facebook’s Oculus Rift: they need no separate calibration hardware to function. Both the Vive and the Rift require multiple emitters on stands to be placed around a room for positioning to work.

 

[Image: Microsoft’s $299 VR headsets, October 26, 2016]

 

 

 

IBM Watson Education and Pearson to drive cognitive learning experiences for college students — from prnewswire.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE: IBM) and Pearson (FTSE: PSON), the world’s learning company, today announced a new global education alliance intended to make Watson’s cognitive capabilities available to millions of college students and professors.

Combining IBM’s cognitive capabilities with Pearson’s digital learning products will give students a more immersive learning experience with their college courses and an easy way to get help and insights when they need it, all by asking questions in natural language just as they would with another student or professor. Importantly, it provides instructors with insights about how well students are learning, allowing them to better manage the entire course and flag students who need additional help.

For example, a student experiencing difficulty while studying for a biology course can query Watson, which is embedded in the Pearson courseware. Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights.  Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.

 

 

[Image: IBM Watson, 2016]

 

 

Udacity partners with IBM Watson to launch the AI Nanodegree — from venturebeat.com by Paul Sawers

Excerpt:

Online education platform Udacity has partnered with IBM Watson to launch a new artificial intelligence (AI) Nanodegree program.

Costing $1,600 for the full two-term, 26-week course, the AI Nanodegree covers a myriad of topics including logic and planning, probabilistic inference, game-playing and search, computer vision, cognitive systems, and natural language processing (NLP). It’s worth noting here that Udacity already offers a free Intro to Artificial Intelligence course and the Machine Learning Engineer Nanodegree, but with the AI Nanodegree program IBM Watson is seeking to help give developers a “foundational understanding of artificial intelligence,” while also helping graduates identify job opportunities in the space.

 

 

The Future Cognitive Workforce Part 1: Announcing the AI Nanodegree with Udacity — from ibm.com by Rob High

Excerpt:

As artificial intelligence (AI) begins to power more technology across industries, it’s been truly exciting to see what our community of developers can create with Watson. Developers are inspiring us to advance the technology that is transforming society, and they are the reason why such a wide variety of businesses are bringing cognitive solutions to market.

With AI becoming more ubiquitous in the technology we use every day, developers need to continue to sharpen their cognitive computing skills. They are seeking ways to gain a competitive edge in a workforce that increasingly needs professionals who understand how to build AI solutions.

It is for this reason that today at World of Watson in Las Vegas we announced with Udacity the introduction of a Nanodegree program that incorporates expertise from IBM Watson and covers the basics of artificial intelligence. The “AI Nanodegree” program will be helpful for those looking to establish a foundational understanding of artificial intelligence. IBM will also help graduates of this program identify job opportunities.

 

 

The Future Cognitive Workforce Part 2: Teaching the Next Generation of Builders — from ibm.com by Steve Abrams

Excerpt:

Announced today at World of Watson, and as Rob High outlined in the first post in this series, IBM has partnered with Udacity to develop a nanodegree in artificial intelligence. Rob discussed IBM’s commitment to empowering developers to learn more about cognitive computing and equipping them with the educational resources they need to build their careers in AI.

To continue on this commitment, I’m excited to announce another new program today geared at college students that we’ve launched with Kivuto Solutions, an academic software distributor. Via Kivuto’s popular digital resource management platform, students and academics around the world will now gain free access to the complete IBM Bluemix Portfolio — and specifically, Watson. This offers students and faculty at any accredited university – as well as community colleges and high schools with STEM programs – an easy way to tap into Watson services. Through this access, teachers will also gain a better means to create curriculum around subjects like AI.

 

 

 

IBM introduces new Watson solutions for professions — from finance.yahoo.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE:IBM) today unveiled a series of new cognitive solutions intended for professionals in marketing, commerce, supply chain and human resources. With these new offerings, IBM is enabling organizations across all industries and of all sizes to integrate new cognitive capabilities into their businesses.

Watson solutions learn in an expert way, which is critical for professionals who want to uncover insights hidden in their massive amounts of data to understand, reason and learn about their customers and important business processes. Helping professionals augment their existing knowledge and experience without needing to engage a data analyst empowers them to make more informed business decisions, spot opportunities and take action with confidence.

“IBM is bringing Watson cognitive capabilities to millions of professionals around the world, putting a trusted advisor and personal analyst at their fingertips,” said Harriet Green, general manager Watson IoT, Cognitive Engagement & Education. “Similar to the value that Watson has brought to the world of healthcare, cognitive capabilities will be extended to professionals in new areas, helping them harness the value of the data being generated in their industries and use it in new ways.”

 

 

 

IBM says new Watson Data Platform will ‘bring machine learning to the masses’ — from techrepublic.com by Hope Reese
On Tuesday, IBM unveiled a cloud-based AI engine to help businesses harness machine learning. It aims to give everyone, from CEOs to developers, a simple platform to interpret and collaborate on data.

Excerpt:

“Insight is the new currency for success,” said Bob Picciano, senior vice president at IBM Analytics. “And Watson is the supercharger for the insight economy.”

Picciano, speaking at the World of Watson conference in Las Vegas on Tuesday, unveiled IBM’s Watson Data Platform, touted as the “world’s fastest data ingestion engine and machine learning as a service.”

The cloud-based Watson Data Platform will “illuminate dark data,” said Picciano, and will “change everything—absolutely everything—for everyone.”

 

 

 

See the #IBMWoW hashtag on Twitter for more news/announcements coming from IBM this week:

 

[Image: The #IBMWoW hashtag on Twitter, October 2016]

 

 

 

 

Previous postings from earlier this month:

 

  • IBM launches industry first Cognitive-IoT ‘Collaboratory’ for clients and partners
    Excerpt:
    IBM has unveiled a €180 million investment in a new global headquarters to house its Watson Internet of Things business. Located in Munich, the facility will promote new IoT capabilities around Blockchain and security, as well as supporting the array of clients that are driving real outcomes by using Watson IoT technologies, drawing insights from billions of sensors embedded in machines, cars, drones, ball bearings, pieces of equipment and even hospitals. As part of a global investment designed to bring Watson cognitive computing to IoT, IBM has allocated more than $200 million to its global Watson IoT headquarters in Munich. The investment, one of the company’s largest ever in Europe, is in response to escalating demand from customers who are looking to transform their operations using a combination of IoT and artificial intelligence technologies. Currently, IBM has 6,000 clients globally who are tapping Watson IoT solutions and services, up from 4,000 just eight months ago.

 

 

[Image: A cognitive approach to HR, October 2016]

 

 

 

 

 

These VR apps are designed to replace your office and daily commute — from uploadvr.com by David Matthews

Excerpt:

Eric Florenzano is a VR consultant and game designer who lives in the San Francisco Bay area. He is currently working on new game ideas with a small team spread out across the US.

So far, so normal, right? But what you don’t know is that Florenzano is one of a handful of advocates pioneering something they claim could transform work, end commuting, and even lead to a mass exodus from large cities: the virtual office.

“There’s no physical office [for us.] It’s all virtual. That’s the crazy thing,” explains Florenzano. Rather than meeting in person or arranging a conference call, his team jumps into Bigscreen, which allows users, who are represented by floating heads and controllers, to share their monitors in virtual rooms.

 

[Image: From UploadVR, October 2016]

 

Also see:

 

[Image: Rocket League being shared in Bigscreen]

 

 

How to train thousands of surgeons at the same time in virtual reality — from singularity.com by Sveta McShane

Excerpt:

Recently, I wrote about how the future of surgery is going to be robotic, data-driven and artificially intelligent.

Although it’s approaching fast, that future is still in the works. In the meantime, there is a real need to train surgeons in a more scalable way, according to Dr. Shafi Ahmed, a surgeon at the Royal London and St. Bartholomew’s hospitals and cofounder of Medical Realities, a company developing a new virtual reality platform for surgical training.

In April of 2016, he live-streamed a cancer surgery in virtual reality. The procedure, a low-risk removal of a colon tumor in a man in his 70s, was filmed in 360 video and streamed live across the world. The high-def 4K camera captured the doctors’ every movement, and those watching could see everything that was happening in immersive detail.

 

 

Duke neurosurgeons test HoloLens as an AR assist on tricky procedures — from techcrunch.com by Devin Coldewey

Excerpt:

“Since we can manipulate a hologram without actually touching anything, we have access to everything we need without breaking a sterile field. In the end, this is actually an improvement over the current OR system because the image is directly overlaid on the patient, without having to look to computer screens for aid,” said Cutler in a Duke news release.

 

 

OTOY Enables Groundbreaking VR Social Features — from uploadvr.com

Excerpt:

Oculus and OTOY may have achieved a breakthrough in social VR functionality.

VR headset owners should soon be able to share a variety of environments and Web-based content with one another in virtual reality. For example, friends can feel like they are together on the bridge of the Enterprise, and on the viewscreen of the ship they see a list of Star Trek episodes to watch with one another.

We have yet to test all of this functionality first-hand, but we’ve seen some of it live in the Gear VR — accessing, for example, a Star Trek environment inside OTOY’s ORBX Media Player app from within the Oculus Social Beta.

 

 

 

 

VR just got a lot more stylish with the Dlodlo V1 Glasses — from seriouswonder.com by B.J. Murphy

 

[Image: Dlodlo V1 VR glasses, October 2016]

 

 

Microsoft CEO says mixed reality is the ‘ultimate computer’ — from engadget.com by Nicole Lee
The company’s goal is to “invent new computers and new computing.”

Excerpt:

“Whether it be HoloLens, mixed reality, or Surface, our goal is to invent new computers and new computing,” he added. This also includes investing in artificial intelligence, which is now its own group within the company.

Nadella admitted that for a long time, Microsoft was complacent. “Early success is probably the worst thing that can happen in life,” he said. But now, he wants Microsoft to be more of a “learn-it-all” culture rather than a “know-it-all” culture.

 

 

A Chinese Lens on Augmented, Virtual and Mixed Reality — from adage.com by David Berkowitz

Excerpt:

These networks keep growing. One of the hosts of the conference, ARinChina, brought me over along with a group of about a half-dozen Westerners. This media company connects a community of 60,000 developers, all of whom are invested in staying ahead of breakthrough technologies like virtual reality (VR), augmented reality (AR) and the hybrid known as mixed reality (MR). The AR track where I presented was hosted by RAVV, a new technology think tank that is pulling together subject matter experts across robotics, artificial intelligence, autonomous vehicles, VR and AR. RAVV is building an international ecosystem that includes its own approaches for startup incubation, knowledge sharing and other collaborative endeavors.

To get a sense of how global the emerging mixed reality field is, consider that, in February, China’s e-commerce giant Alibaba led the $800 million Series C round for Florida-based Magic Leap, an MR startup. As our daily reality becomes more virtual and augmented, it doesn’t matter where someone is on the map. This field is connecting far-flung practitioners, hinting at a time, soon, when AR, VR and MR will connect people in ways never before possible.

 

 

