From DSC:
The other day I posted some ideas about how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”). Here is one of the graphics from that posting:

 

horticulturalapp-danielchristian

These affordances are just now starting to be uncovered, as machines become increasingly able to recognize patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS, who gave me permission to share his solid reflections and ideas:

 

nikosandriotisidea-oct2016

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea, Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about how his ideas might interact with, and impact, web-based learner profiles, credentialing, badging, and lifelong learning.  What’s especially noteworthy here is that the innovations that impact learning continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions that deliver their courses almost exclusively face-to-face (in terms of both their delivery mechanisms and their pedagogies)?

Given:

  • Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, as is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

futureproofyourself-msfuturelab-2016

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and that are not innovating enough?


From DSC:
Here’s an idea that came to my mind the other day as I was walking by a person who was trying to put some books back onto the shelves within our library.

 

danielchristian-books-sensors-m2m-oct2016

 

 

From DSC:
Perhaps this idea is not very timely…as many collections of books will likely continue to be digitized and made available electronically. But preservation is still a goal for many libraries out there.

 

 

Also see:

IoT and the Campus of Things — from er.educause.edu

Excerpt:

Today, the IoT sits at the peak of Gartner’s Hype Cycle. It’s probably not surprising that industry is abuzz with the promise of streaming sensor data. The oft quoted “50 billion connected devices by 2020!” has become a rallying cry for technology analysts, chip vendors, network providers, and other proponents of a deeply connected, communicating world. What is surprising is that academia has been relatively slow to join the parade, particularly when the potential impacts are so exciting. Like most organizations that manage significant facilities, universities stand to benefit by adopting the IoT as part of their management strategy. The IoT also affords new opportunities to improve the customer experience. For universities, this means the ability to provide new student services and improve on those already offered. Perhaps most surprisingly, the IoT represents an opportunity to better engage a diverse student base in computer science and engineering, and to amplify these programs through meaningful interdisciplinary collaboration.

The potential benefits of the IoT to the academic community extend beyond facilities management to improving our students’ experience. The lowest hanging fruit can be harvested by adapting some of the smart city applications that have emerged. What student hasn’t shown up late to class after circling the parking lot looking for a space? Ask any student at a major university if it would improve their campus experience to be able to check on their smart phones which parking spots were available. The answer will be a resounding “yes!” and there’s nothing futuristic about it. IoT parking management systems are commercially available through a number of vendors. This same type of technology can be adapted to enable students to find open meeting rooms, computer facilities, or café seating. What might be really exciting for students living in campus dormitories: A guarantee that they’ll never walk down three flights of stairs balancing two loads of dirty laundry to find that none of the washing machines are available. On many campuses, the washing machines are already network-connected to support electronic payment; availability reporting is a straightforward extension.
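
From DSC: To give a sense of just how straightforward that “straightforward extension” could be, here is a rough sketch that listens for machine status updates over MQTT, a protocol commonly used for this kind of telemetry. The broker address and topic layout below are hypothetical, and the code follows the paho-mqtt 1.x style:

    # Minimal sketch: track washer availability from status messages that
    # networked machines publish over MQTT. Broker and topics are hypothetical.
    import paho.mqtt.client as mqtt

    statuses = {}  # machine id -> "free" or "busy"

    def on_message(client, userdata, msg):
        machine_id = msg.topic.split("/")[1]
        statuses[machine_id] = msg.payload.decode()
        free = sum(1 for s in statuses.values() if s == "free")
        print(f"{free} of {len(statuses)} washers available")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.campus.example.edu")  # hypothetical campus broker
    client.subscribe("laundry/+/status")
    client.loop_forever()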

 

 

Also see:

2016 Innovators Awards | A Location-Aware App for Exploring the Library — from campustechnology.com by Meg Lloyd
To help users access rich information resources on campus, the University of Oklahoma Libraries created a mobile app with location-based navigation and “hyperlocal” content.

Category: Education Futurists

Institution: University of Oklahoma

Project: OU Libraries NavApp

Project lead: Matt Cook, emerging technologies librarian

Tech lineup: Aruba, Meridian, RFIP

 

 


From DSC:
Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR).

The educational benefits, as well as the business/profit-related benefits, will certainly be significant!

For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)

 


Some use cases for such an app:


Students, environmentalists, and lifelong learners will be able to take some seriously educational nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!

They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about it.
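
To make that recognition step concrete, here is a minimal sketch using an off-the-shelf, ImageNet-trained classifier. This is only a starting point: ImageNet covers just a handful of plant and tree classes, so a real Horticultural App (ML) would swap in a model trained on botanical data, and the file name below is just a placeholder.

    # A rough sketch of the "recognize the plant" step, using a general-purpose
    # pretrained classifier. A production app would use a plant-specific model.
    import numpy as np
    from tensorflow.keras.applications.mobilenet_v2 import (
        MobileNetV2, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = MobileNetV2(weights="imagenet")

    def identify(photo_path):
        img = image.load_img(photo_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        for (_, label, score) in decode_predictions(model.predict(x), top=3)[0]:
            print(f"{label}: {score:.1%}")

    identify("nature_walk_photo.jpg")  # e.g., "daisy: 87.2%"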

 

girl
Above image via shutterstock.com

 

horticulturalapp-danielchristian

 

In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be a user-controlled opacity setting, allowing the learner to fade the information about the flower, tree, plant, etc. in or out.
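
That opacity control is easy to prototype. Here is a hedged sketch using the Pillow imaging library; the file names and label text are placeholders, and a real AR app would composite onto a live camera feed rather than a saved photo.

    # Sketch of the fade-in/fade-out overlay idea: blend an information layer
    # over the photo at a user-chosen opacity (0.0 = hidden, 1.0 = solid).
    from PIL import Image, ImageDraw

    def overlay_info(photo_path, text, opacity=0.5):
        photo = Image.open(photo_path).convert("RGBA")
        layer = Image.new("RGBA", photo.size, (0, 0, 0, 0))
        ImageDraw.Draw(layer).text((20, 20), text, fill=(255, 255, 255, 255))
        empty = Image.new("RGBA", photo.size, (0, 0, 0, 0))
        faded = Image.blend(empty, layer, opacity)  # scales the layer's opacity
        return Image.alpha_composite(photo, faded)

    annotated = overlay_info("oak_tree.jpg", "Quercus rubra (Northern Red Oak)", 0.7)
    annotated.save("oak_tree_annotated.png")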

 

horticulturalapp2-danielchristian

 

Or let’s look at the potential uses of this type of app from some different angles.

Let’s say you live in Michigan and you want to be sure an area of the park you’re in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you. (It could do this regardless of the season, as the app would be able to ascertain the current date as well as the GPS coordinates of the person’s location, and take those criteria into account.)
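
The date/location awareness could start as something as simple as re-weighting the classifier’s raw guesses with seasonal and regional priors. A hypothetical sketch (the prior table below is invented purely for illustration):

    # Hypothetical sketch: nudge raw classifier scores using where and when
    # the photo was taken. The multipliers are invented for illustration.
    from datetime import date

    SEASONAL_REGIONAL_PRIOR = {
        # (species, region, month) -> multiplier
        ("eastern_poison_ivy", "MI", 10): 1.3,  # showy red fall foliage
        ("eastern_poison_ivy", "MI", 1):  0.6,  # leafless, hard to spot in winter
    }

    def adjust(scores, region, when=None):
        when = when or date.today()
        return {species: raw * SEASONAL_REGIONAL_PRIOR.get((species, region, when.month), 1.0)
                for species, raw in scores.items()}

    print(adjust({"eastern_poison_ivy": 0.55, "boxelder": 0.45}, "MI", date(2016, 10, 15)))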

 

easternpoisonivy

 

 

Or consider another use of such an app:

  • A homeowner wants to get rid of a certain kind of weed. She goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
  • Assuming you allowed the app to do so, it could launch a relevant chatbot to answer any questions you might have about applying the weed-killing product.

 

Or consider another use of such an app:

  • A homeowner has a diseased tree and wants to know what to do about it. The machine learning portion of the app could identify the disease and bring up information on how to eradicate it.
  • Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.

 

Or consider other/similar apps along these lines:

  • Skin ML (for detecting any issues re: acne, skin cancers, etc.)
  • Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
  • Fish ML
  • Etc.

fish-ml-gettyimages

Image from gettyimages.com

 

So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.

 


*  From Wikipedia:

Horticulture involves nine areas of study, which can be grouped into two broad sections: ornamentals and edibles:

  1. Arboriculture is the study of, and the selection, planting, care, and removal of, individual trees, shrubs, vines, and other perennial woody plants.
  2. Turf management includes all aspects of the production and maintenance of turf grass for sports, leisure use or amenity use.
  3. Floriculture includes the production and marketing of floral crops.
  4. Landscape horticulture includes the production, marketing and maintenance of landscape plants.
  5. Olericulture includes the production and marketing of vegetables.
  6. Pomology includes the production and marketing of pome fruits.
  7. Viticulture includes the production and marketing of grapes.
  8. Oenology includes all aspects of wine and winemaking.
  9. Postharvest physiology involves maintaining the quality of and preventing the spoilage of plants and animals.


accenture-futuregrowthaisept2016

accenture-futurechannelsgrowthaisept2016

 

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.
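
From DSC: To get a feel for what “double annual economic growth rates” means once compounding kicks in, here’s a quick back-of-the-envelope calculation. The 1.7% baseline is an assumed, illustrative figure, not Accenture’s:

    # Illustrative only: what doubling an annual growth rate means after it
    # compounds for two decades. The 1.7% baseline is an assumed figure.
    baseline, doubled, years = 0.017, 0.034, 20

    def growth(rate):
        return (1 + rate) ** years

    print(f"Baseline economy after {years} years: {growth(baseline):.2f}x")    # ~1.40x
    print(f"Doubled-rate economy after {years} years: {growth(doubled):.2f}x") # ~1.95x
    # Roughly a 40% larger economy, just from doubling the annual rate.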

 

 

Also see:

 

 

 

Amazon is winning the race to the future — from bizjournals.com

Excerpt:

This is the week when artificially intelligent assistants start getting serious.

On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.

But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.

The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.

 

amazon-newdot-oct2016

 

 

100 bot people to watch #BotWatch #1 — from chatbotsmagazine.com

Excerpt:

100 people to watch in the bot space, in no order.

I’ll publish a new list once a month. This one is #1 October 2016.

This is my personal top 100 for people to watch in the bot space.

 

 

Should We Give Chatbots Their Own Personalities? — from re-work.com by Sophie Curtis

Excerpt:

Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.

Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to gain trust in the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.

Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.

 

 

 

How I built and launched my first chatbot in hours — from chatbotsmagazine.com by Max Pelzner
From idea to MVB (Minimum Viable Bot), and launched in 24 hours!

 

 

 

Developing a Chatbot? Do Not Make These Mistakes! — from chatbotsmagazine.com by Hira Saeed

 

 

 

This is what an A.I.-powered future looks like — from venturebeat.com by Grayson Brulte

Excerpt:

Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.

A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live in the home prior to them taking an action.

Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.

 

 

Artificial intelligence created to fold laundry for you — from geek.com by Matthew Humphries

Excerpt:

So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.

Laundry folding is actually a five-stage process:

  • Grabbing
  • Spreading
  • Recognizing
  • Folding
  • Sorting/Storing

The grabbing and spreading seem pretty easy, but then the machine needs to understand what type of clothing it needs to fold. That recognizing stage requires both image recognition and AI. The image recognition classifies the type of clothing, then the AI figures out which processes to use in order to start folding.
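
From DSC: That “recognize, then pick a folding routine” hand-off is essentially a classify-and-dispatch pattern. A toy sketch, with garment classes and folding routines invented for illustration:

    # Toy sketch of the recognize-then-fold dispatch described above.
    # classify_garment() stands in for the real vision/AI stage.
    FOLDING_ROUTINES = {
        "t_shirt":  ["spread flat", "fold sleeves in", "fold bottom to collar"],
        "trousers": ["align legs", "fold at knee", "fold at waist"],
        "towel":    ["fold in half", "fold in half again"],
    }

    def classify_garment(camera_frame):
        return "t_shirt"  # placeholder for the image-recognition model's prediction

    def fold(camera_frame):
        garment = classify_garment(camera_frame)
        for step in FOLDING_ROUTINES.get(garment, ["flag for a human"]):
            print(f"{garment}: {step}")

    fold("camera_frame.png")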


2 days of global chatbot experts at Talkabot in 12 minutes — from chatbotsmagazine.com by Alec Lazarescu

Excerpt:

During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.

As a participant both writing about and building chatbots, I’m excited to share a mix of valuable actionable insights and strategic vision directions picked up from speakers and attendees as well as behind the scenes discussions with the organizers from Howdy.

In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.

 

 

 


Addendum:


 

alexaprize-2016

The Alexa Prize (emphasis DSC)

The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.

Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.

 

 

 

vrinclassroom-usnews-oct2016

 

Virtual Reality in the Classroom — from usnews.com by Charles Sahm
Using virtual reality as an educational tool could transform the American high school experience.

Excerpt:

Listening to Andrew describe the potential of virtual reality tools to improve education is thrilling. He talks about the evolution of a student reading about France in a textbook, to watching a YouTube video about France, to, via virtual reality, being able to walk the streets of Paris. He imagines students not only being able to read about the Constitutional Convention, but to actually be in “the room where it happens.” (Andrew, like many, is enamored of the musical “Hamilton.”)

Andrew acknowledges, however, that virtual reality as an educational tool is still in the very early stages. Washington Leadership Academy intends to develop a number of programs and then share them with other schools. It is exciting to consider what could be accomplished if the power of virtual reality were harnessed for education rather than gaming; if developers turned their resources away from creating games that teach children how to steal cars and kill people and toward allowing them to explore history, science, art and other subjects in innovative new ways.


Google welcomes the future of mobile VR with its $79 Daydream View VR headset — from techcrunch.com by Lucas Matney

Excerpt:

Today at its October hardware/software/everything event, the company showed off its latest VR initiatives including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.

Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.

The headset and the Google Daydream platform will launch in November.


Here’s the Google Pixel — from techcrunch.com by Brian Heater

Excerpt:

While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”


Google Home will go on sale today for $129, shipping November 4 — from techcrunch.com by Frederic Lardinois

Excerpt:

Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.

Google Home will cost $129 (with a free six-month trial of YouTube red) and go on sale on Google’s online store today. It will ship on November 4.

Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.


Google Working with Netflix, HBO & Hulu for Daydream Content — from vrfocus.com by Kevin Joyce
#madebygoogle reveals services ready and on the way to support Google Daydream

Excerpt:

Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.

 

 

Google officially unveils $649 Pixel phone with unlimited storage; $129 Google Home — from cnbc.com by Anita Balakrishnan

 

 

 

Google Unveils ‘Home,’ Embraces Aggressive Shift To Hardware — from forbes.com by Matt Drange

Excerpt:

You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.

On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.

Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying “OK, Google,” users have Home quickly pull information from other websites, such as Wikipedia, and deliver contextualized answers akin to searching Google manually and clicking on a couple of links. Of course, Home is integrated with Google’s other devices, so items added to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.


A comment from DC:
More and more, people are speaking to a device and expecting that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?

Natural language processing, cognitive computing, and artificial intelligence continue their march forward.
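
The building blocks for voice-driven learning tools are already commodity items. As a rough sketch, the open-source SpeechRecognition package can turn a spoken question into text in a few lines; the learning-backend hand-off below is left as a hypothetical stand-in:

    # Sketch: capture a spoken question and hand the text to a (hypothetical)
    # learning service. Uses the open-source SpeechRecognition package.
    import speech_recognition as sr

    def capture_question():
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            print("Ask your question...")
            audio = recognizer.listen(source)
        return recognizer.recognize_google(audio)  # speech-to-text

    question = capture_question()
    print(f"Heard: {question}")
    # answer = learning_backend.ask(question)  # hypothetical tutoring service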


 

Addendums:

 

trojanhorse4ai-googleoct2016

 

 

googleassistanteverywhere-oct2016

 

 

 

9 Best Augmented Reality Smart Glasses 2016 — from appcessories.co.uk

Excerpt:

2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung and Sony all come out with VR products of their own. But another closely related industry has been making its presence felt as well. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too far ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up from where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest Augmented Reality smart glasses around.

Some of them are already out while others are in development.

 

 

The holy grail of Virtual Reality: A complete suspension of disbelief — from labster.com by Marian Reed

Excerpt:

It’s no secret that we here at Labster are pretty excited about VR.  However, if we are to successfully introduce VR into education and training we need to know how to create VR simulations that unlock these new great ways of learning.


Computer science researchers create augmented reality education tool — from ucalgary.ca by Erin Guiltenane

Excerpt (emphasis DSC):

Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.

 

holocell-sept2016

 

 

 

Upload, Google, HTC and Udacity join forces for new VR education program — from  uploadvr.com

Excerpt:

Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.

According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take between 6 and 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.

The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.

 

 

 

Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities

Excerpt:

German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.

Also related to this:

Auschwitz war criminals targeted with help of virtual reality — from jpost.com

Excerpt:

German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.

Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.

 

 

 

How the UN thinks virtual reality could not only build empathy, but catalyze change, too — from yahoo.com by Lulu Chang

Excerpt:

Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.

Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films.  First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.

 

 

 

Occipital Wants to Turn iPhones into Mixed Virtual Reality Headsets — from next.reality.news by Adam Dachis

Excerpt:

If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.

 

occipital-10-2-16

 

 

‘The Body VR’ Brings Educational Tour Of The Human Body To HTC Vive Today — from uploadvr.com by Jamie Feltham on October 3rd, 2016

 Excerpt:

The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.

The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.


Virtual Reality Dazzles Harvard University — from universityherald.com

Excerpt:

For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and saw unique underwater animals, others tried their hand in hockey, while others screamed as they got into a racecar and sped on a virtual speedway. All of them, getting a taste of what virtual and augmented reality looks like.

All of these, of course, were not just about fun but on how especially augmented and virtual reality can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.

Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”

“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.

 

 


 

Addendum on 10/6/16:


partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

 

 

 

ngls-2017-conference

 

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.

 

 

From chatbots to Einstein, artificial intelligence as a service — from infoworld.com by Yves de Montcheuil

Excerpt:

The recent announcement of Salesforce Einstein — dubbed “artificial intelligence for everyone” — sheds new light on the new and pervasive usage of artificial intelligence in every aspect of businesses.

 

Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customized for every single customer, and it will learn, self-tune, and get smarter with every interaction and additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.

 


Chatbots, or conversational bots, are the “other” trending topic in the field of artificial intelligence. At the juncture of consumer and business, they provide the ability for an AI-based system to interact with users through a headless interface. It does not matter whether a messaging app is used, or a speech-to-text system, or even another app — the chatbot is front-end agnostic.

Since the user does not have the ability to provide context around the discussion, they just ask questions in natural language to an AI-driven backend that is tasked with figuring out this context and looking for the right answer.
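
From DSC: In code terms, “front-end agnostic” simply means the bot’s logic takes plain text in and returns plain text out, with each channel (messaging app, speech-to-text system, another app) acting as a thin adapter. A minimal sketch, with deliberately naive placeholder intents:

    # Minimal sketch of a headless, channel-agnostic bot: one handle()
    # function that any front end can call. A real system would use proper
    # intent classification instead of keyword matching.
    def handle(message: str) -> str:
        text = message.lower()
        if "hours" in text:
            return "We're open 9am-5pm, Monday through Friday."
        if "price" in text or "cost" in text:
            return "Plans start at $10/month."
        return "Sorry, I didn't catch that. Could you rephrase?"

    # A messaging app, a speech pipeline, or another app all call the same function:
    print(handle("What are your hours?"))
    print(handle("How much does it cost?"))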

 

 

IBM is launching a much-awaited ‘Watson’ recruiting tool — from eremedia.com by Todd Raphael

Excerpt:

For many months IBM has gone to recruiting-industry conferences to say that the famous Watson will be at some point used for talent-acquisition, but that it hasn’t happened quite yet.

It’s here.

IBM is first using Watson for its RPO customers, and then rolling it out as a product for the larger community, perhaps next spring. One of my IBM contacts, Recruitment Innovation Global Leader Yates Baker, tells me that the current version is a work in progress like the first iPhone (or perhaps like that Siri-for-recruiting tool).

There are three parts: recruiting, marketing, and sourcing.

 

watsonrecruitingtool-sept2016

 

 

Apple’s Siri: A Lot Smarter, but Still Kind of Dumb — from wsj.com by Joanna Stern
With the new MacOS and Apple’s AirPods, Siri’s more powerful than ever, but still not as good as some competitors

Excerpt:

With the new iOS 10, Siri can control third-party apps, like Uber and WhatsApp. With the release of MacOS Sierra on Tuesday, Siri finally lands on the desktop, where it can take care of basic operating system tasks, send emails and more. With WatchOS 3 and the new Apple Watch, Siri is finally faster on the wrist. And with Apple’s Q-tip-looking AirPods arriving in October, Siri can whisper sweet nothings in your inner ear with unprecedented wireless freedom. Think Joaquin Phoenix’s earpiece in the movie “Her.”

The groundwork is laid for an AI assistant to stake a major claim in your life, and finally save you time by doing menial tasks. But the smarter Siri becomes in some places, the dumber it seems in others—specifically compared with Google’s and Amazon’s voice assistants. If I hear “I’m sorry, Joanna, I’m afraid I can’t answer that” one more time…

 

 

 

IBM Research and MIT Collaborate to Advance Frontiers of Artificial Intelligence in Real-World Audio-Visual Comprehension Technologies — from prnewswire.com
Cross-disciplinary research approach will use insights from brain and cognitive science to advance machine understanding

Excerpt:

YORKTOWN HEIGHTS, N.Y., Sept. 20, 2016 /PRNewswire/ — IBM Research (NYSE: IBM) today announced a multi-year collaboration with the Department of Brain & Cognitive Sciences at MIT to advance the scientific field of machine vision, a core aspect of artificial intelligence. The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension’s (BM3C) goal will be to develop cognitive computing systems that emulate the human ability to understand and integrate inputs from multiple sources of audio and visual information into a detailed computer representation of the world that can be used in a variety of computer applications in industries such as healthcare, education, and entertainment.

The BM3C will address technical challenges around both pattern recognition and prediction methods in the field of machine vision that are currently impossible for machines alone to accomplish. For instance, humans watching a short video of a real-world event can easily recognize and produce a verbal description of what happened in the clip as well as assess and predict the likelihood of a variety of subsequent events, but for a machine, this ability is currently impossible.

 

 

Satya Nadella on Microsoft’s new age of intelligence — from fastcompany.com by Harry McCracken
How the software giant aims to tie everything from Cortana to Office to HoloLens to Azure servers into one AI experience.

Excerpt:

“Microsoft was born to do a certain set of things. We’re about empowering people in organizations all over the world to achieve more. In today’s world, we want to use AI to achieve that.”

That’s Microsoft CEO Satya Nadella, crisply explaining the company’s artificial-intelligence vision to me this afternoon shortly after he hosted a keynote at Microsoft’s Ignite conference for IT pros in Atlanta. But even if Microsoft only pursues AI opportunities that it considers to be core to its mission, it has a remarkably broad tapestry to work with. And the examples that were part of the keynote made that clear.

 

 

 

 

IBM Foundation collaborates with AFT and education leaders to use Watson to help teachers — from finance.yahoo.com

Excerpt:

ARMONK, N.Y., Sept. 28, 2016 /PRNewswire/ — Teachers will have access to a new, first-of-its-kind, free tool using IBM’s innovative Watson cognitive technology that has been trained by teachers and designed to strengthen teachers’ instruction and improve student achievement, the IBM Foundation and the American Federation of Teachers announced today.

Hundreds of elementary school teachers across the United States are piloting Teacher Advisor with Watson – an innovative tool by the IBM Foundation that provides teachers with a complete, personalized online resource. Teacher Advisor enables teachers to deepen their knowledge of key math concepts, access high-quality vetted math lessons and acclaimed teaching strategies and gives teachers the unique ability to tailor those lessons to meet their individual classroom needs.

Litow said there are plans to make Teacher Advisor available to all elementary school teachers across the U.S. before the end of the year.

 

 

In this first phase, Teacher Advisor offers hundreds of high-quality vetted lesson plans, instructional resources, and teaching techniques, which are customized to meet the needs of individual teachers and the particular needs of their students.
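
From DSC: IBM hasn’t published how Teacher Advisor ranks and customizes these lessons, but a generic content-based approach gives a feel for the mechanics. Here is a sketch using TF-IDF similarity; the lesson snippets and the teacher’s query are made up, and this is emphatically not IBM’s method:

    # Generic illustration (not IBM's approach): rank lesson plans against a
    # teacher's stated need using TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    lessons = [
        "fractions on a number line with visual models",
        "multi-digit multiplication using the area model",
        "equivalent fractions through paper folding",
    ]
    query = "my students struggle with equivalent fractions"

    matrix = TfidfVectorizer().fit_transform(lessons + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    for lesson, score in sorted(zip(lessons, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {lesson}")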

 

 

Also see:

teacheradvisor-sept282016

 

Educators can also access high-quality videos on teaching techniques to master key skills and bring a lesson or teaching strategy to life into their classroom.

 

 

From DSC:
Today’s announcement involved personalization and giving customized directions, and it caused my mind to go in a slightly different direction. (IBM, Google, Microsoft, Apple, Amazon, and others like Smart Sparrow are likely thinking about this type of direction as well. Perhaps they’re already there…I’m not sure.)

But given the advancements in machine learning/cognitive computing (where example applications include optical character recognition (OCR) and computer vision), how much longer will it be before software can remotely or locally “see” what a third grader wrote down for a given math problem (via character and symbol recognition) and check over the student’s work? If the answer is incorrect, the algorithms will likely know where the student went wrong. The software will be able to ascertain what the student did wrong and then show them how the problem should be solved (either via hints or by showing the entire solution, per the teacher’s instructions/admin settings). Perhaps, via natural language processing, this process could be verbalized as well.
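
The “know where the student went wrong” step could start very simply: once the OCR stage has produced clean lines of work, check each step for consistency. A hypothetical sketch (a real system would use a dedicated math-expression parser rather than eval, and would handle algebra, not just arithmetic):

    # Hypothetical sketch: given OCR'd lines of a student's arithmetic work,
    # flag the first step where the two sides of the equals sign disagree.
    def first_error(steps):
        for i, step in enumerate(steps, start=1):
            lhs, rhs = step.split("=")
            # eval() on vetted arithmetic only; a real checker would use a
            # proper expression parser for safety and for algebra support.
            if eval(lhs, {"__builtins__": {}}) != eval(rhs, {"__builtins__": {}}):
                return i, step.strip()
        return None

    work = ["2 + 3 * 4 = 14", "14 - 5 = 8", "8 * 2 = 16"]
    print(first_error(work))  # -> (2, '14 - 5 = 8'), since 14 - 5 is 9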

Further questions/thoughts/reflections then came to my mind:

  • Will we have bots that teachers can use to teach different subjects? (“Watson may even ask the teacher additional questions to refine its response, honing in on what the teacher needs to address certain challenges.”)
  • Will we have bots that students can use to get the basics of a given subject/topic/equation?
  • Will instructional designers — and/or trainers in the corporate world — need to modify their skillsets to develop these types of bots?
  • Will teachers — as well as schools of education in universities and colleges — need to modify their toolboxes and their knowledgebases to take advantage of these sorts of developments?
  • How might the corporate world take advantage of these trends and technologies?
  • Will MOOCs begin to incorporate these sorts of technologies to aid in personalized learning?
  • What sorts of delivery mechanisms could be involved? Will we be tapping into learning-related bots from our living rooms or via our smartphones?

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

Also see:


10 Incredible Uses of Virtual Reality — from fortune.com by Rose Leadem
It’s not just for video games.

Excerpt:

Virtual reality technology holds enormous potential to change the future for a number of fields, from medicine and business to architecture and manufacturing.

Psychologists and other medical professionals are using VR to heighten traditional therapy methods and find effective solutions for treatments of PTSD, anxiety and social disorders. Doctors are employing VR to train medical students in surgery, treat patients’ pains and even help paraplegics regain body functions.

In business, a variety of industries are benefiting from VR. Carmakers are creating safer vehicles, architects are constructing stronger buildings and even travel agencies are using it to simplify vacation planning.

Check out these 10 amazing uses of VR.

 

 

Visit the U.K. Prime Minister’s Home in This Virtual 10 Downing Street Experience — from uploadvr.com

Excerpt:

Google has unveiled a new interactive online exhibit that take users on a tour of 10 Downing street in London — home of the U.K. Prime Minister.

The building has served as home to countless British political leaders, from Winston Churchill and Margaret Thatcher through to Tony Blair and — as of a few months ago — Theresa May. But, as you’d expect in today’s security-conscious age, gaining access to the residence isn’t easy; the street itself is gated off from the public. This is why the 10 Downing Street exhibit may capture the imagination of politics aficionados and history buffs from around the world.

The tour features 360-degree views of the various rooms, punctuated by photos and audio and video clips.

 

 

 

Microsoft’s HoloLens Now Helps Elevator Technicians Work Smarter — from uploadvr.com by Charles Singletary

Excerpt:

In a slightly more grounded environment, the HoloLens is being used to assist technicians in elevator repairs.

Traversal via elevator is such a regular part of our lifestyles that its importance is rarely recognized…until the elevators aren’t working as they should be. ThyssenKrupp AG, one of the largest suppliers of elevators, recognizes how essential they are, as well as how the simplest malfunctions can disrupt the lives of millions. As announced on Microsoft’s blog, Microsoft is partnering with Thyssenkrupp to equip 24,000 of its technicians with HoloLens.

 

 

ms-hololens-thyssenkrupp-sept2016

Insert from DSC, regarding the HoloLens piece above:

Will technical communicators need to augment their skillsets? It appears so.


Phiona: A Virtual Reality Portrait of ‘Queen of Katwe’ — from abcnews.com by Angel Canales and Adam Rivera

 

vr-queenofkatwe-2016

 

 

Get a front-row seat in Harvard’s largest class, thanks to virtual reality — from medium.freecodecamp.com by Dhawal Shah

harvard-cs50-sep2016

Intro video here: This is CS50 2016

 

 

The future of mobile video is virtual reality — from techcrunch.com by Mike Wadhera

Excerpt:

But in a world where no moment is too small to record with a mobile sensor, and one in which time spent in virtual reality keeps going up, interesting parallels start to emerge between our smartphones and headsets.

Let’s look at how the future could play out in the real world by observing three key drivers: VR video adoption, mobile-video user needs and the smartphone camera rising tide.

 

 

Now, a virtual reality programme to improve social skills in autistic kids — from cio.economictimes.indiatimes.com
The VR training platform creates a safe place for participants to practice social situations without the intense fear of consequence.

Excerpt:

“Individuals with autism may become overwhelmed and anxious in social situations,” research clinician Dr Nyaz Didehbani said.

“The virtual reality training platform creates a safe place for participants to practice social situations without the intense fear of consequence,” said Didehbani.

The participants who completed the training demonstrated improved social cognition skills and reported better relationships, researchers said.

 

 

 


Also see:

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down, or be so buried in our current positions/tasks, that we don’t notice the changes happening around us.
