whydeeplearningchangingyourlife-sept2016

 

Why deep learning is suddenly changing your life — from fortune.com by Roger Parloff

Excerpt:

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

 

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

 

 

ai-machinelearning-deeplearning-relationship-roger-fall2016

 

 

Graphically speaking:

 

ai-machinelearning-deeplearning-relationship-fall2016

 
“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

 

 

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
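Ng's "A to B" framing is easy to make concrete in code. The sketch below is a deliberately tiny stand-in for a trained model; the spam keywords, threshold, and examples are invented for illustration and don't come from any real system, which would learn the mapping from data rather than hard-code it:

```python
# Toy "A to B" mapping: input an email (A), output spam-or-not (B).
# A real system would learn its weights from training data; here a
# hand-made keyword count stands in for the learned function.

SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def classify_email(text: str) -> str:
    """Map input A (email text) to output B ('spam' or 'not spam')."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return "spam" if hits >= 2 else "not spam"

print(classify_email("URGENT! You are a winner, claim your FREE prize"))  # spam
print(classify_email("Lunch at noon tomorrow?"))  # not spam
```

Swap the input and output types (audio in, transcript out; loan application in, repayment likelihood out) and the same A-to-B shape describes each of Ng's examples.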

 
Microsoft just democratized virtual reality with $299 headsets — from pcworld.com by Gordon Mah Ung

Excerpt:

VR just got a lot cheaper.

Microsoft on Wednesday morning said PC OEMs will soon be shipping VR headsets that enable virtual reality and mixed reality starting at $299.

Details of the hardware and how it works were sparse, but Microsoft said HP, Dell, Lenovo, Asus, and Acer will be shipping the headsets timed with its upcoming Windows 10 Creators Update, due in spring 2017.

Despite the relatively low price, the upcoming headsets may have a big advantage over HTC and Valve’s Vive and Facebook’s Oculus Rift: no need for separate calibration hardware to function. Both Vive and Oculus require multiple emitters on stands to be placed around a room for the positioning to function.

 

microsoft-299-vr-headsets-10-26-16

 
IBM Watson Education and Pearson to drive cognitive learning experiences for college students — from prnewswire.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE: IBM) and Pearson (FTSE: PSON), the world’s learning company, today announced a new global education alliance intended to make Watson’s cognitive capabilities available to millions of college students and professors.

Combining IBM’s cognitive capabilities with Pearson’s digital learning products will give students a more immersive learning experience with their college courses, an easy way to get help and insights when they need it, all through asking questions in natural language just like they would with another student or professor. Importantly, it provides instructors with insights about how well students are learning, allowing them to better manage the entire course and flag students who need additional help.

For example, a student experiencing difficulty while studying for a biology course can query Watson, which is embedded in the Pearson courseware. Watson has already read the Pearson courseware content and is ready to spot patterns and generate insights.  Serving as a digital resource, Watson will assess the student’s responses to guide them with hints, feedback, explanations and help identify common misconceptions, working with the student at their pace to help them master the topic.

 

 

ibm-watson-2016

 

 

Udacity partners with IBM Watson to launch the AI Nanodegree — from venturebeat.com by Paul Sawers

Excerpt:

Online education platform Udacity has partnered with IBM Watson to launch a new artificial intelligence (AI) Nanodegree program.

Costing $1,600 for the full two-term, 26-week course, the AI Nanodegree covers a myriad of topics including logic and planning, probabilistic inference, game-playing / search, computer vision, cognitive systems, and natural language processing (NLP). It’s worth noting here that Udacity already offers an Intro to Artificial Intelligence (free) course and the Machine Learning Engineer Nanodegree, but with the A.I. Nanodegree program IBM Watson is seeking to help give developers a “foundational understanding of artificial intelligence,” while also helping graduates identify job opportunities in the space.

 

 

The Future Cognitive Workforce Part 1: Announcing the AI Nanodegree with Udacity — from ibm.com by Rob High

Excerpt:

As artificial intelligence (AI) begins to power more technology across industries, it’s been truly exciting to see what our community of developers can create with Watson. Developers are inspiring us to advance the technology that is transforming society, and they are the reason why such a wide variety of businesses are bringing cognitive solutions to market.

With AI becoming more ubiquitous in the technology we use every day, developers need to continue to sharpen their cognitive computing skills. They are seeking ways to gain a competitive edge in a workforce that increasingly needs professionals who understand how to build AI solutions.

It is for this reason that today at World of Watson in Las Vegas we announced with Udacity the introduction of a Nanodegree program that incorporates expertise from IBM Watson and covers the basics of artificial intelligence. The “AI Nanodegree” program will be helpful for those looking to establish a foundational understanding of artificial intelligence. IBM will also help aid graduates of this program with identifying job opportunities.

 

 

The Future Cognitive Workforce Part 2: Teaching the Next Generation of Builders — from ibm.com by Steve Abrams

Excerpt:

Announced today at World of Watson, and as Rob High outlined in the first post in this series, IBM has partnered with Udacity to develop a nanodegree in artificial intelligence. Rob discussed IBM’s commitment to empowering developers to learn more about cognitive computing and equipping them with the educational resources they need to build their careers in AI.

To continue on this commitment, I’m excited to announce another new program today geared at college students that we’ve launched with Kivuto Solutions, an academic software distributor. Via Kivuto’s popular digital resource management platform, students and academics around the world will now gain free access to the complete IBM Bluemix Portfolio — and specifically, Watson. This offers students and faculty at any accredited university – as well as community colleges and high schools with STEM programs – an easy way to tap into Watson services. Through this access, teachers will also gain a better means to create curriculum around subjects like AI.

 
IBM introduces new Watson solutions for professions — from finance.yahoo.com

Excerpt:

LAS VEGAS, Oct. 25, 2016 /PRNewswire/ — IBM (NYSE:IBM) today unveiled a series of new cognitive solutions intended for professionals in marketing, commerce, supply chain and human resources. With these new offerings, IBM is enabling organizations across all industries and of all sizes to integrate new cognitive capabilities into their businesses.

Watson solutions learn in an expert way, which is critical for professionals that want to uncover insights hidden in their massive amounts of data to understand, reason and learn about their customers and important business processes. Helping professionals augment their existing knowledge and experience without needing to engage a data analyst empowers them to make more informed business decisions, spot opportunities and take action with confidence.

“IBM is bringing Watson cognitive capabilities to millions of professionals around the world, putting a trusted advisor and personal analyst at their fingertips,” said Harriet Green, general manager Watson IoT, Cognitive Engagement & Education. “Similar to the value that Watson has brought to the world of healthcare, cognitive capabilities will be extended to professionals in new areas, helping them harness the value of the data being generated in their industries and use it in new ways.”

 
IBM says new Watson Data Platform will ‘bring machine learning to the masses’ — from techrepublic.com by Hope Reese
On Tuesday, IBM unveiled a cloud-based AI engine to help businesses harness machine learning. It aims to give everyone, from CEOs to developers, a simple platform to interpret and collaborate on data.

Excerpt:

“Insight is the new currency for success,” said Bob Picciano, senior vice president at IBM Analytics. “And Watson is the supercharger for the insight economy.”

Picciano, speaking at the World of Watson conference in Las Vegas on Tuesday, unveiled IBM’s Watson Data Platform, touted as the “world’s fastest data ingestion engine and machine learning as a service.”

The cloud-based Watson Data Platform will “illuminate dark data,” said Picciano, and will “change everything—absolutely everything—for everyone.”

 
See the #IBMWoW hashtag on Twitter for more news/announcements coming from IBM this week:

 

ibm-wow-hashtag-oct2016

 
Previous postings from earlier this month:

 

  • IBM launches industry first Cognitive-IoT ‘Collaboratory’ for clients and partners
    Excerpt:
    IBM has unveiled a €180 million investment in a new global headquarters to house its Watson Internet of Things business. Located in Munich, the facility will promote new IoT capabilities around Blockchain and security as well as supporting the array of clients that are driving real outcomes by using Watson IoT technologies, drawing insights from billions of sensors embedded in machines, cars, drones, ball bearings, pieces of equipment and even hospitals. As part of a global investment designed to bring Watson cognitive computing to IoT, IBM has allocated more than $200 million USD to its global Watson IoT headquarters in Munich. The investment, one of the company’s largest ever in Europe, is in response to escalating demand from customers who are looking to transform their operations using a combination of IoT and Artificial Intelligence technologies. Currently, IBM has 6,000 clients globally who are tapping Watson IoT solutions and services, up from 4,000 just 8 months ago.

 

 

cognitiveapproachhr-oct2016

 
These VR apps are designed to replace your office and daily commute — from uploadvr.com by David Matthews

Excerpt:

Eric Florenzano is a VR consultant and game designer who lives in the San Francisco Bay area. He is currently working on new game ideas with a small team spread out across the US.

So far, so normal, right? But what you don’t know is that Florenzano is one of a handful of advocates pioneering something they claim could transform work, end commuting, and even lead to a mass exodus from large cities: the virtual office.

“There’s no physical office [for us]. It’s all virtual. That’s the crazy thing,” explains Florenzano. Rather than meeting in person or arranging a conference call, his team jumps into Bigscreen, which allows users, who are represented by floating heads and controllers, to share their monitors in virtual rooms.

 

uploadvrimage-oct2016

 

Also see:

 

bigscreen_rocket_league

 

 

How to train thousands of surgeons at the same time in virtual reality — from singularity.com by Sveta McShane

Excerpt:

Recently, I wrote about how the future of surgery is going to be robotic, data-driven and artificially intelligent.

Although it’s approaching fast, that future is still in the works. In the meantime, there is a real need to train surgeons in a more scalable way, according to Dr. Shafi Ahmed, a surgeon at the Royal London and St. Bartholomew’s hospitals and cofounder of Medical Realities, a company developing a new virtual reality platform for surgical training.

In April of 2016, he live-streamed a cancer surgery in virtual reality. The procedure, a low-risk removal of a colon tumor in a man in his 70s, was filmed in 360 video and streamed live across the world. The high-def 4K camera captured the doctors’ every movement, and those watching could see everything that was happening in immersive detail.

 

 

Duke neurosurgeons test Hololens as an AR assist on tricky procedures — from techcrunch.com by Devin Coldewey

Excerpt:

“Since we can manipulate a hologram without actually touching anything, we have access to everything we need without breaking a sterile field. In the end, this is actually an improvement over the current OR system because the image is directly overlaid on the patient, without having to look to computer screens for aid,” said Cutler in a Duke news release.

 

 

OTOY Enables Groundbreaking VR Social Features — from uploadvr.com

Excerpt:

Oculus and OTOY may have achieved a breakthrough in social VR functionality.

VR headset owners should soon be able to share a variety of environments and Web-based content with one another in virtual reality. For example, friends can feel like they are together on the bridge of the Enterprise, and on the viewscreen of the ship they see a list of Star Trek episodes to watch with one another.

We have yet to test all of this functionality first-hand, but we’ve seen some of it live in the Gear VR — accessing, for example, a Star Trek environment inside OTOY’s ORBX Media Player app from within the Oculus Social Beta.

 
VR just got a lot more stylish with the Dlodlo V1 Glasses — from seriouswonder.com by B.J. Murphy

 

dlodlovr-glasses-oct2016

 

 

Microsoft CEO says mixed reality is the ‘ultimate computer’ — from engadget.com by Nicole Lee
The company’s goal is to “invent new computers and new computing.”

Excerpt:

“Whether it be HoloLens, mixed reality, or Surface, our goal is to invent new computers and new computing,” he added. This also includes investing in artificial intelligence, which is now its own group within the company.

Nadella admitted that for a long time, Microsoft was complacent. “Early success is probably the worst thing that can happen in life,” he said. But now, he wants Microsoft to be more of a “learn-it-all” culture rather than a “know-it-all” culture.

 

 

A Chinese Lens on Augmented, Virtual and Mixed Reality — from adage.com by David Berkowitz

Excerpt:

These networks keep growing. One of the hosts of the conference, ARinChina, brought me over along with a group of about a half-dozen Westerners. This media company connects a community of 60,000 developers, all of whom are invested in staying ahead of breakthrough technologies like virtual reality (VR), augmented reality (AR) and the hybrid known as mixed reality (MR). The AR track where I presented was hosted by RAVV, a new technology think tank that is pulling together subject matter experts across robotics, artificial intelligence, autonomous vehicles, VR and AR. RAVV is building an international ecosystem that includes its own approaches for startup incubation, knowledge sharing and other collaborative endeavors.

To get a sense of how global the emerging mixed reality field is, consider that, in February, China’s e-commerce giant Alibaba led the $800 million Series C round for Florida-based Magic Leap, an MR startup. As our daily reality becomes more virtual and augmented, it doesn’t matter where someone is on the map. This field is connecting far-flung practitioners, hinting at a time, soon, when AR, VR and MR will connect people in ways never before possible.

 

 


Addendum 10/25/16:

From DSC:
The other day I posted some ideas about how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”). Here is one of the graphics from that posting:

 

horticulturalapp-danielchristian

These affordances are just now starting to be uncovered as machines are increasingly able to ascertain patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS who gave me permission to highlight his solid reflections and ideas:

 

nikosandriotisidea-oct2016

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about his ideas in light of their interaction or impact with web-based learner profiles, credentialing, badging, and lifelong learning.  What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions who are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

futureproofyourself-msfuturelab-2016

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and who are not innovating enough?

 
From DSC:
Here’s an idea that came to my mind the other day as I was walking by a person who was trying to put some books back onto the shelves within our library.

 

danielchristian-books-sensors-m2m-oct2016

 

 

From DSC:
Perhaps this idea is not very timely…as many collections of books will likely continue to be digitized and made available electronically. But preservation is still a goal for many libraries out there.

 

 

Also see:

IoT and the Campus of Things — from er.educause.edu by

Excerpt:

Today, the IoT sits at the peak of Gartner’s Hype Cycle. It’s probably not surprising that industry is abuzz with the promise of streaming sensor data. The oft quoted “50 billion connected devices by 2020!” has become a rallying cry for technology analysts, chip vendors, network providers, and other proponents of a deeply connected, communicating world. What is surprising is that academia has been relatively slow to join the parade, particularly when the potential impacts are so exciting. Like most organizations that manage significant facilities, universities stand to benefit by adopting the IoT as part of their management strategy. The IoT also affords new opportunities to improve the customer experience. For universities, this means the ability to provide new student services and improve on those already offered. Perhaps most surprisingly, the IoT represents an opportunity to better engage a diverse student base in computer science and engineering, and to amplify these programs through meaningful interdisciplinary collaboration.

The potential benefits of the IoT to the academic community extend beyond facilities management to improving our students’ experience. The lowest hanging fruit can be harvested by adapting some of the smart city applications that have emerged. What student hasn’t shown up late to class after circling the parking lot looking for a space? Ask any student at a major university if it would improve their campus experience to be able to check on their smart phones which parking spots were available. The answer will be a resounding “yes!” and there’s nothing futuristic about it. IoT parking management systems are commercially available through a number of vendors. This same type of technology can be adapted to enable students to find open meeting rooms, computer facilities, or café seating. What might be really exciting for students living in campus dormitories: A guarantee that they’ll never walk down three flights of stairs balancing two loads of dirty laundry to find that none of the washing machines are available. On many campuses, the washing machines are already network-connected to support electronic payment; availability reporting is a straightforward extension.
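The "straightforward extension" the author describes, availability reporting on top of already-networked washing machines, can be sketched as a small status service. Everything below (machine IDs, building names, the API shape) is a hypothetical illustration, not any vendor's actual system:

```python
# Hypothetical sketch of campus-IoT availability reporting: each
# networked washer reports its state, and a student's phone queries
# which units are free before they haul laundry down three flights.

class AvailabilityService:
    def __init__(self):
        self._machines = {}  # machine_id -> (building, state)

    def report(self, machine_id: str, building: str, state: str) -> None:
        """Called by each machine whenever its state changes."""
        self._machines[machine_id] = (building, state)

    def free_machines(self, building: str) -> list:
        """What a student's app would query."""
        return sorted(mid for mid, (b, s) in self._machines.items()
                      if b == building and s == "free")

svc = AvailabilityService()
svc.report("washer-1", "East Hall", "busy")
svc.report("washer-2", "East Hall", "free")
svc.report("washer-3", "West Hall", "free")
print(svc.free_machines("East Hall"))  # ['washer-2']
```

The same pattern covers the parking, meeting-room, and café-seating examples; only the reported resource changes.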

 

 

Also see:

2016 Innovators Awards | A Location-Aware App for Exploring the Library — from campustechnology.com by Meg Lloyd
To help users access rich information resources on campus, the University of Oklahoma Libraries created a mobile app with location-based navigation and “hyperlocal” content.

Category: Education Futurists

Institution: University of Oklahoma

Project: OU Libraries NavApp

Project lead: Matt Cook, emerging technologies librarian

Tech lineup: Aruba, Meridian, RFIP

 

 

From DSC:
Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR).

The educational benefits, as well as the business/profit-related benefits, will certainly be significant!

For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)

 


Some use cases for such an app:


Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!

They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
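The recognition step described above can be sketched in miniature. In a real app a trained image model would do this work; here a nearest-neighbour lookup over invented feature vectors (imagine leaf-shape and colour summaries extracted from the photo) stands in, and every species, number, and description is a made-up placeholder:

```python
# Toy sketch of the app's "recognize and present info" step.
import math

PLANT_FEATURES = {
    # species: (leaf_roundness, green_intensity) -- invented values
    "oak":        (0.30, 0.55),
    "maple":      (0.45, 0.60),
    "poison ivy": (0.70, 0.80),
}

PLANT_INFO = {
    "oak": "Deciduous hardwood; lobed leaves, acorns.",
    "maple": "Deciduous; palmate leaves, winged seeds.",
    "poison ivy": "Irritant! Leaves of three, let it be.",
}

def recognize(features):
    """Return (species, info) for the species nearest the photo's features."""
    species = min(PLANT_FEATURES,
                  key=lambda s: math.dist(features, PLANT_FEATURES[s]))
    return species, PLANT_INFO[species]

print(recognize((0.72, 0.78)))  # nearest to the poison ivy entry
```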

 

girl
Above image via shutterstock.com

 

horticulturalapp-danielchristian

 

In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be a user-controlled opacity setting — allowing the learner to fade in or fade out the information about the flower, tree, plant, etc.

 

horticulturalapp2-danielchristian

 

Or let’s look at the potential uses of this type of app from some different angles.

Let’s say you live in Michigan and you want to be sure an area of the park you’re in doesn’t have any Eastern Poison Ivy, so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you. It could do this in any season, because the app can also take the current date and the GPS coordinates of the person’s location into account.
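That date-and-GPS idea can be sketched as a filter that narrows the candidate species before (or after) image recognition runs. The regions and growing months below are invented placeholders, not botanical data:

```python
# Hypothetical sketch: use where and when a photo was taken to keep
# only the species that are plausible for that place and month.

CANDIDATES = {
    "eastern poison ivy": {"regions": {"MI", "OH", "PA"},
                           "months": set(range(4, 11))},   # invented
    "western poison oak": {"regions": {"CA", "OR", "WA"},
                           "months": set(range(3, 11))},   # invented
}

def plausible_species(region: str, month: int) -> list:
    """Keep only species consistent with the photo's location and date."""
    return sorted(name for name, meta in CANDIDATES.items()
                  if region in meta["regions"] and month in meta["months"])

print(plausible_species("MI", 7))  # ['eastern poison ivy']
print(plausible_species("MI", 1))  # []
```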

 

easternpoisonivy

 

 

Or consider another use of such an app:

  • A homeowner who wants to get rid of a certain kind of weed. The homeowner goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
  • Assuming you allowed the app to do so, it could launch a relevant chatbot that could be used to answer any questions about the application of the weed-killing product that you might have.
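The scan-to-product step in that scenario reduces to a lookup once the weed is identified. The catalogue entries and product names below are made up for illustration; a real app would query a retailer's API rather than a hard-coded table:

```python
# Hypothetical sketch: map an identified weed to treatment products.

PRODUCT_CATALOGUE = {
    "crabgrass": ["Crabgrass Pre-Emergent 5 lb", "Lawn Spot Spray 32 oz"],
    "dandelion": ["Broadleaf Weed Killer 1 gal"],
}

def recommend_products(weed: str) -> list:
    """Return products that treat the identified weed."""
    return PRODUCT_CATALOGUE.get(weed.lower(), ["No matching product found"])

print(recommend_products("Dandelion"))  # ['Broadleaf Weed Killer 1 gal']
```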

 

Or consider another use of such an app:

  • A homeowner has a diseased tree, and they want to know what to do about it. The machine learning portion of the app could identify what the disease was and bring up information on how to eradicate it.
  • Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.

 

Or consider other/similar apps along these lines:

  • Skin ML (for detecting any issues re: acne, skin cancers, etc.)
  • Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
  • Fish ML
  • Etc.

fish-ml-gettyimages

Image from gettyimages.com

 

So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.

 


*  From Wikipedia:

Horticulture involves nine areas of study, which can be grouped into two broad sections, ornamentals and edibles:

  1. Arboriculture is the study of, and the selection, planting, care, and removal of, individual trees, shrubs, vines, and other perennial woody plants.
  2. Turf management includes all aspects of the production and maintenance of turf grass for sports, leisure use or amenity use.
  3. Floriculture includes the production and marketing of floral crops.
  4. Landscape horticulture includes the production, marketing and maintenance of landscape plants.
  5. Olericulture includes the production and marketing of vegetables.
  6. Pomology includes the production and marketing of pome fruits.
  7. Viticulture includes the production and marketing of grapes.
  8. Oenology includes all aspects of wine and winemaking.
  9. Postharvest physiology involves maintaining the quality of and preventing the spoilage of plants and animals.

 
accenture-futuregrowthaisept2016

accenture-futurechannelsgrowthaisept2016

 

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.

 

 


Amazon is winning the race to the future — from bizjournals.com by

Excerpt:

This is the week when artificially intelligent assistants start getting serious.

On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.

But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.

The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.

 

amazon-newdot-oct2016

 

 

100 bot people to watch #BotWatch #1 — from chatbotsmagazine.com

Excerpt:

100 people to watch in the bot space, in no order.

I’ll publish a new list once a month. This one is #1 October 2016.

This is my personal top 100 for people to watch in the bot space.

 

 

Should We Give Chatbots Their Own Personalities? — from re-work.com by Sophie Curtis

Excerpt:

Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.

Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to gain trust in the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.

Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.

 
How I built and launched my first chatbot in hours — from chatbotsmagazine.com by Max Pelzner
From idea to MVB (Minimum Viable Bot), and launched in 24 hours!

 
Developing a Chatbot? Do Not Make These Mistakes! — from chatbotsmagazine.com by Hira Saeed

 
This is what an A.I.-powered future looks like — from venturebeat.com by Grayson Brulte

Excerpt:

Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.

A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live in the home prior to them taking an action.

Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.

 

 

Artificial intelligence created to fold laundry for you — from geek.com by Matthew Humphries

Excerpt:

So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.

Laundry folding is actually a five-stage process:

Grabbing
Spreading
Recognizing
Folding
Sorting/Storing

The grabbing and spreading seem pretty easy, but then the machine needs to understand what type of clothing it needs to fold. That recognizing stage requires both image recognition and AI. The image recognition classifies the type of clothing, then the AI figures out which processes to use in order to start folding.
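From DSC: the classify-then-dispatch flow described above can be sketched in a few lines of Python. This is only an illustration; the function names, garment labels, and folding steps below are hypothetical stand-ins, not Seven Dreamers' actual software.

```python
# Hypothetical sketch of the recognize-then-fold pipeline described above.

def classify_garment(image):
    # In the real machine this would be an image-recognition model;
    # here we simply read a label off a stand-in "image" dict.
    return image["label"]

def fold_shirt():
    return ["spread", "fold sleeves", "fold in half"]

def fold_towel():
    return ["spread", "fold in half", "fold in half"]

# The AI's job, per the article: map the recognized garment type
# to the folding routine it should run.
FOLDING_ROUTINES = {
    "shirt": fold_shirt,
    "towel": fold_towel,
}

def process_item(image):
    """Recognize the garment, then pick and run its folding routine."""
    garment_type = classify_garment(image)       # recognizing stage
    steps = FOLDING_ROUTINES[garment_type]()     # folding stage
    return garment_type, steps                   # sorting happens downstream

print(process_item({"label": "shirt"}))
```

The point of the sketch is simply that recognition and folding are separate stages: the classifier's output selects which folding procedure runs.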

2 days of global chatbot experts at Talkabot in 12 minutes — from chatbotsmagazine.com by Alec Lazarescu

Excerpt:

During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.

As a participant both writing about and building chatbots, I’m excited to share a mix of valuable actionable insights and strategic vision directions picked up from speakers and attendees as well as behind the scenes discussions with the organizers from Howdy.

In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.


Addendum:


 

alexaprize-2016

The Alexa Prize (emphasis DSC)

The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.
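From DSC: the conversational-AI components listed above (natural language understanding, context modeling, dialog planning, natural language generation) are typically organized as a turn-by-turn loop. The sketch below is purely illustrative; every function name is a hypothetical stand-in, not part of Amazon's Alexa Skills Kit.

```python
# Illustrative socialbot turn loop; each function is a toy stand-in
# for a real NLU / dialog-planning / NLG component.

def understand(utterance):
    # Natural language understanding: crude keyword "intent" detection.
    return "greet" if "hello" in utterance.lower() else "chat"

def plan(intent, context):
    # Dialog planning: choose a response strategy given the intent,
    # while context modeling keeps the conversation history.
    context.append(intent)
    return "greeting" if intent == "greet" else "topic_followup"

def generate(strategy):
    # Natural language generation: template-based surface realization.
    templates = {
        "greeting": "Hi there! What would you like to talk about?",
        "topic_followup": "Interesting. Tell me more.",
    }
    return templates[strategy]

def socialbot_turn(utterance, context):
    return generate(plan(understand(utterance), context))

history = []
print(socialbot_turn("Hello!", history))
```

The competing teams' real systems will of course replace each of these toy stages with learned models, but the understand-plan-generate decomposition is the common backbone.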

Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.

vrinclassroom-usnews-oct2016

 

Virtual Reality in the Classroom — from usnews.com by Charles Sahm
Using virtual reality as an educational tool could transform the American high school experience.

Excerpt:

Listening to Andrew describe the potential of virtual reality tools to improve education is thrilling. He talks about the evolution of a student reading about France in a textbook, to watching a YouTube video about France, to, via virtual reality, being able to walk the streets of Paris. He imagines students not only being able to read about the Constitutional Convention, but to actually be in “the room where it happens.” (Andrew, like many, is enamored of the musical “Hamilton.”)

Andrew acknowledges, however, that virtual reality as an educational tool is still in the very early stages. Washington Leadership Academy intends to develop a number of programs and then share them with other schools. It is exciting to consider what could be accomplished if the power of virtual reality were harnessed for education rather than gaming; if developers turned their resources away from creating games that teach children how to steal cars and kill people and toward allowing them to explore history, science, art and other subjects in innovative new ways.

 

Google welcomes the future of mobile VR with its $79 Daydream View VR headset — from techcrunch.com by Lucas Matney

Excerpt:

Today at its October hardware/software/everything event, the company showed off its latest VR initiatives including a Daydream headset. The $79 Daydream View VR headset looks quite a bit different than other headsets on the market with its fabric exterior.

Clay Bavor, head of VR, said the design is meant to be more comfortable and friendly. It’s unclear whether the cloth aesthetic is a recommendation for the headset reference design as Xiaomi’s Daydream headset is similarly soft and decidedly design-centric.

The headset and the Google Daydream platform will launch in November.

Here’s the Google Pixel — from techcrunch.com by Brian Heater

Excerpt:

While the event is positioned as hardware first, this is Google we’re talking about here, and as such, the real focus is software. The company led the event with talk about its forthcoming Google Assistant AI, and as such, the Pixel will be the first handset to ship with the friendly voice helper. As the company puts it, “we’re building hardware with the Google Assistant at its core.”

Google Home will go on sale today for $129, shipping November 4 — from techcrunch.com by Frederic Lardinois

Excerpt:

Google Home, the company’s answer to Amazon’s Echo, made its official debut at the Google I/O developer conference earlier this year. Since then, we’ve heard very little about Google’s voice-activated personal assistant. Today, at Google’s annual hardware event, the company finally provided us with more details.

Google Home will cost $129 (with a free six-month trial of YouTube Red) and go on sale on Google’s online store today. It will ship on November 4.

Google’s Mario Queiroz today argued that our homes are different from other environments. So like the Echo, Google Home combines a wireless speaker with a set of microphones that listen for your voice commands. There is a mute button on the Home and four LEDs on top of the device so you know when it’s listening to you; otherwise, you won’t find any other physical buttons on it.

Google Working with Netflix, HBO & Hulu for Daydream Content — from vrfocus.com by Kevin Joyce
#madebygoogle reveals services ready and on the way to support Google Daydream

Excerpt:

Google’s #madebygoogle press conference today revealed some significant details about the company’s forthcoming plans for virtual reality (VR). Daydream is set to launch later this year, and along with the reveal of the first ‘Daydream Ready’ smartphone handset, Pixel, and Google’s own version of the head-mounted display (HMD), Daydream View, the company revealed some of the partners that will be bringing content to the device.

Google officially unveils $649 Pixel phone with unlimited storage; $129 Google Home — from cnbc.com by Anita Balakrishnan

Google Unveils ‘Home,’ Embraces Aggressive Shift To Hardware — from forbes.com by Matt Drange

Excerpt:

You can add to the seemingly never-ending list of things that Google is deeply involved in: hardware production.

On Tuesday, Google made clear that hardware is more than just a side business, aggressively expanding its offerings across a number of different categories. Headlined by the much-anticipated Google Home and a lineup of smartphones, dubbed Pixel, the announcements mark a major shift in Google’s approach to supplementing its massively profitable advertising sales business and extensive history in software development.

Aimed squarely at Amazon’s Echo, Home is powered by more than 70 billion facts collected by Google’s knowledge graph, the company says. By saying “OK, Google,” Home quickly pulls information from other websites, such as Wikipedia, and gives contextualized answers akin to searching Google manually and clicking on a couple of links. Of course, Home is integrated with Google’s other devices, so items added to your shopping list, for example, are easily pulled up via Pixel. Home can also be programmed to read back information in your calendar, traffic updates and the weather. “If the president can get a daily briefing, why shouldn’t you?” Google’s Rishi Chandra asked when he introduced Home on Tuesday.


A comment from DC:
More and more, people are speaking to a device and expect that device to do something for them. How much longer, especially with the advent of chatbots, before people expect this of learning-related applications?

Natural language processing, cognitive computing, and artificial intelligence continue their march forward.


 

Addendums:

 

trojanhorse4ai-googleoct2016

googleassistanteverywhere-oct2016

9 Best Augmented Reality Smart Glasses 2016 — from appcessories.co.uk

Excerpt:

2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung and Sony all come out with VR products of their own. But another closely related industry has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too far ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up from where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest augmented reality smart glasses around.

Some of them are already out while others are in development.

The holy grail of Virtual Reality: A complete suspension of disbelief — from labster.com by Marian Reed

Excerpt:

It’s no secret that we here at Labster are pretty excited about VR. However, if we are to successfully introduce VR into education and training, we need to know how to create VR simulations that unlock these great new ways of learning.

Computer science researchers create augmented reality education tool — from ucalgary.ca by Erin Guiltenane

Excerpt (emphasis DSC):

Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.

 

holocell-sept2016

Upload, Google, HTC and Udacity join forces for new VR education program — from uploadvr.com

Excerpt:

Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.

According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take between 6 and 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.

The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.

Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities

Excerpt:

German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.

Also related to this:

Auschwitz war criminals targeted with help of virtual reality — from jpost.com

Excerpt:

German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.

Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.

How the UN thinks virtual reality could not only build empathy, but catalyze change, too — from yahoo.com by Lulu Chang

Excerpt:

Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.

Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films.  First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.

Occipital Wants to Turn iPhones into Mixed Virtual Reality Headsets — from next.reality.news by Adam Dachis

Excerpt:

If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.

 

occipital-10-2-16

‘The Body VR’ Brings Educational Tour Of The Human Body To HTC Vive Today — from uploadvr.com by Jamie Feltham on October 3rd, 2016

Excerpt:

The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.

The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.

Virtual Reality Dazzles Harvard University — from universityherald.com

Excerpt:

For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and saw unique underwater animals, others tried their hand in hockey, while others screamed as they got into a racecar and sped on a virtual speedway. All of them, getting a taste of what virtual and augmented reality looks like.

All of this, of course, was not just about fun, but about how augmented and virtual reality in particular can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks, with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.

Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”

“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.

Addendum on 10/6/16:

partnershiponai-sept2016

 

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

 

GOALS

Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.

Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.

Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.

ngls-2017-conference

 

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.

© 2025 | Daniel Christian