From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as part of the “connected/smart TVs” of the future? Hmm….

 

 

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
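NASA’s actual skill code isn’t public, but a minimal sketch of what a custom Alexa skill’s AWS Lambda handler generally looks like is below; the intent name “MarsFactIntent” and the sample fact are invented for illustration.

```python
# Minimal sketch of an AWS Lambda handler for a custom Alexa skill.
# The intent name "MarsFactIntent" is hypothetical; this only illustrates
# the general shape of the Alexa Skills Kit request/response JSON.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Lambda invokes with the JSON request Alexa sends."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me a question about Mars.", end_session=False)
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "MarsFactIntent":
        return build_response("Olympus Mons on Mars is the tallest volcano in the solar system.")
    return build_response("Sorry, I didn't catch that.")
```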

 

 

nasa-alexa-11-29-16

 

 


Also see:


 

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

 

side-by-side2

 

 

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
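As a rough sketch of that “single API call” idea, the snippet below uses the boto3 SDK to ask Amazon Polly to turn a sentence into speech; it assumes AWS credentials are already configured, and the voice, sample text, and output file name are arbitrary choices.

```python
# Hedged sketch: one API call to Amazon Polly via boto3, assuming AWS
# credentials are configured in the environment.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Welcome to today's lesson on the water cycle.",
    OutputFormat="mp3",
    VoiceId="Joanna",   # one of Polly's built-in voices
)

# The audio comes back as a streaming body; save it to disk.
with open("lesson_intro.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```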

 

 

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

 

 

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai
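A comparable hedged sketch for image analysis with Amazon Rekognition, again using boto3; the S3 bucket and object names below are invented, and the image is assumed to already sit in S3.

```python
# Hedged sketch: label detection with Amazon Rekognition via boto3.
# Bucket and key names are made up for illustration.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-course-media", "Name": "field-trip/leaf.jpg"}},
    MaxLabels=5,
    MinConfidence=80,
)

for label in result["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```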

 

 

 

 

 

learningthemes2016-elliottmasie

Learning Themes
Curated Content from Learning 2016
Open Source eBook – No Cost
http://www.masie.com/eBookL16

 

From an email from Elliott Masie and the Masie Center:

This 35-page eBook is packed with content, context, conversations, video links, and curated resources that include:

  • Learning Perspectives from Anderson Cooper, Scott Kelly, Tiffany Shlain, George Takei, Richard Culatta, Karl Kapp, Nancy DeViney, and other Learning 2016 Keynotes
  • Graphic Illustrations from Deirdre Crowley, Crowley & Co.
  • Video Links for Content Segments
  • Learning Perspectives from Elliott Masie
  • Segments focusing on:
    • Brain & Cognitive Science
    • Gamification & Gaming
    • Micro-Learning
    • Visual Storytelling
    • Connected & Flipped Classrooms
    • Compliance & Learning
    • Engagement in Virtual Learning
    • Video & Learning
    • Virtual Reality & Learning
  • And much more!

We have created this as an open source, shareable resource that will extend the learning from Learning 2016 to our colleagues around the world. We are using the Open Creative Commons license, so feel free to share!

We believe that CURATION, focusing on extending and organizing follow-up content, is a growing and critical dimension of any learning event. We hope that you find your eBook of value!

 

 

 

Explosive IoT growth could produce skills shortage — from rtinsights.com by Joe McKendrick

Excerpts:

CIO’s Sharon Florentine took a look at data from global freelance marketplace Upwork, based on annual job posting growth and skills demand. The following are the leading IoT skills Florentine identified that will be in demand as the IoT proliferates, along with the level of growth seen over a one-year period:

Circuit design (231% growth): Builds miniaturized circuit boards for sensors and devices.

Microcontroller programming (225% growth): Writes code that provides intelligence to microcontrollers, the embedded chips within IoT devices.

AutoCAD (216% growth): Designs the devices.

Machine learning (199% growth): Writes the algorithms that recognize data patterns within devices.

Security infrastructure (194% growth): Identifies and integrates the standards, protocols and technologies that protect devices, as well as the data inside.

Big data (183% growth): Data scientists and engineers “who can collect, organize, analyze and architect disparate sources of data.” Hadoop and Apache Spark are two areas with particularly strong demand.

 

Some brief reflections from DSC:

Data on in-demand skills like these will likely be used by colleges, universities, bootcamps, MOOCs, and others to feed web-based learner profiles, which will then be queried by people and/or organizations who are looking for freelancers and/or employees to fill their project and/or job-related needs.

As of the end of 2016, Microsoft — with its purchase of LinkedIn — is strongly positioned as a major player in this new landscape. But it might turn out to be an open-source solution/database.

Data mining, algorithm development, and Artificial Intelligence (AI) will likely have roles to play here as well. The systems will likely be able to tell us where we need to grow our skillsets, and provide us with modules/courses to take. This is where the Learning from the Living [Class] Room vision becomes highly relevant, on a global scale. We will be forced to continually improve our skillsets as long as we are in the workforce. Lifelong learning is now a must. AI-based recommendation engines should be helpful here — as they will be able to analyze the needs, trends, developments, etc. and present us with some possible choices (based on our learner profiles, interests, and passions).
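As a purely illustrative sketch of that kind of matching (the profile, demand list, and module catalog below are all made up), one simple approach is to compare a learner profile’s skills against in-demand skills and suggest modules for the gaps.

```python
# Illustrative sketch only: match a learner profile against in-demand
# skills and recommend modules for the gaps. All data here is invented.
learner_profile = {"name": "Pat", "skills": {"autocad", "circuit design"}}

in_demand_skills = {"machine learning", "security infrastructure", "circuit design"}

module_catalog = {
    "machine learning": "Intro to Machine Learning (6 weeks)",
    "security infrastructure": "IoT Security Fundamentals (4 weeks)",
    "circuit design": "Advanced PCB Design (8 weeks)",
}

def recommend(profile, demand, catalog):
    """Suggest modules for in-demand skills the learner doesn't yet have."""
    gaps = demand - profile["skills"]
    return [catalog[skill] for skill in sorted(gaps) if skill in catalog]

print(recommend(learner_profile, in_demand_skills, module_catalog))
# ['Intro to Machine Learning (6 weeks)', 'IoT Security Fundamentals (4 weeks)']
```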

 

 

Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

 

But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

 

 

Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors),” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”

 

gartner-toptechtrends-2017

 

 

 

 

aiexperiments-google-nov2016

 

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.

 

Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

 

Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
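Google also exposes its translation service to developers through the Cloud Translation API (whether a particular language pair runs on the neural model is handled behind the scenes). A small hedged sketch using the google-cloud-translate client library, assuming Google Cloud credentials are configured in the environment, might look like this; the sample sentence and target language are arbitrary.

```python
# Hedged sketch: calling Google's translation service with the
# google-cloud-translate client library. Requires Google Cloud credentials.
from google.cloud import translate_v2 as translate

client = translate.Client()

result = client.translate(
    "Learning never exhausts the mind.",
    target_language="de",
)

print(result["translatedText"])
print("Detected source language:", result["detectedSourceLanguage"])
```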

 

 

‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”

 

 

What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.
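To make the loop concrete, here is a toy sketch (entirely invented, not from the article) that walks a single hospital-style reading through the seven stages; the thresholds and messages are placeholders.

```python
# Toy sketch of the seven-step AI loop, applied to one made-up reading.
def perception(reading):            # 1. describe what's happening now
    return {"heart_rate": reading, "elevated": reading > 100}

def notification(state):            # 2. tell you what you asked to know
    return "Heart rate is elevated." if state["elevated"] else None

def suggestion(state):              # 3. recommend action
    return "Suggest a nurse check-in." if state["elevated"] else None

def automation(state):              # 4. repeat what you always want
    return "Logged reading to the patient chart."

def prediction(state):              # 5. inform you of what to expect
    return "Likely to stabilize within the hour." if state["elevated"] else None

def prevention(state):              # 6. help you avoid bad outcomes
    return "Flagging for early intervention." if state["heart_rate"] > 120 else None

def situational_awareness(state):   # 7. what you need to know right now
    return f"Current heart rate: {state['heart_rate']} bpm."

state = perception(112)
for step in (notification, suggestion, automation, prediction, prevention, situational_awareness):
    message = step(state)
    if message:
        print(message)
```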

 

 

Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach their goal of passing the test by March 2022, so the project is being brought to an end.

 

 

“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference

 

Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.

 

 

 

The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that; more on my favorite place below.

 

 

 

How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. A five-year-old online clothing retailer, its success in this area reveals how AI and people can work together, with each side focused on its unique strengths.

 

 

 

 

he-thinkaboutai-washpost-oc2016

 

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.

 

As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.

 

Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.

 

 

From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

Higher education needs to rethink our offerings…or someone else will.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

Some reflections/resources on today’s announcements from Apple

tv-app-apple-10-27-16

 

tv-app2-apple-10-27-16

From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the module(s) or course(s) we take being sent to one or more of the following (see the sketch after this list):

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
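As a very loose, hypothetical sketch of what one of those credential records might contain (no real ledger or credentialing API is used here), the snippet below builds a record and computes the kind of tamper-evident digest that could be anchored to a shared ledger or credentials database.

```python
# Hypothetical sketch: a completed-module record plus a SHA-256 digest
# that could serve as a verifiable fingerprint on some ledger.
import hashlib
import json
from datetime import datetime, timezone

record = {
    "learner_id": "learner-12345",            # made-up identifier
    "module": "Intro to Data Visualization",  # made-up module title
    "score": 92,
    "issuer": "Example University",
    "issued_at": datetime.now(timezone.utc).isoformat(),
}

# A digest of the canonical JSON lets a third party verify the record later.
digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
print(digest)
```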

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROIs from having thousands of participants each paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 

bigscreen_rocket_league

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 


Other items on today’s announcements:


 

 

macbookpro-10-27-16

 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 

From DSC:
The other day I posted some ideas in regard to how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”). Here is one of the graphics from that posting:

 

horticulturalapp-danielchristian

These affordances are just now starting to be uncovered as machines are increasingly able to ascertain patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS, who gave me permission to share his solid reflections and ideas:

 

nikosandriotisidea-oct2016

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about how his ideas might interact with, and impact, web-based learner profiles, credentialing, badging, and lifelong learning. What’s especially noteworthy here is that the innovations that impact learning continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions that are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

futureproofyourself-msfuturelab-2016

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and who are not innovating enough?

 

 

 

Coppell ISD becomes first district to use IBM, Apple format — from bizjournals.com by Shawn Shinneman

Excerpt:

Teachers at Coppell Independent School District have become the first to use a new IBM and Apple technology platform built to aid personalized learning.

IBM Watson Element for Educators pairs IBM analytics and data tools such as cognitive computing with Apple design. It integrates student grades, interests, participation, and trends to help educators determine how a student learns best, the company says.

It also recommends learning content personalized to each student. The platform might suggest a reading assignment on astronomy for a young student who has shown an interest in space.

 

From DSC:
Technologies involved with systems like IBM’s Watson will likely bring some serious impact to the worlds of education and training & development. Such systems — and the affordances that they should be able to offer us — should not be underestimated.  The potential for powerful, customized, personalized learning could easily become a reality in K-20 as well as in the corporate training space. This is an area to keep an eye on for sure, especially with the growing influence of cognitive computing and artificial intelligence.

These kinds of technology should prove helpful in suggesting modules and courses (i.e., digital learning playlists), but I think the more powerful systems will be able to drill down far more minutely than that. I think these types of systems will be able to assist with all kinds of math problems and equations, as well as analyze writing samples, correct language mispronunciations, and more (perhaps this is already here…apologies if so). In other words, the systems will “learn” where students tend to go wrong on a certain kind of math equation and then, when they spot a mistake, suggest the steps (or offer hints) needed to correct it.
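As a minimal, hypothetical sketch of that idea (not any vendor’s actual implementation), the snippet below uses the sympy library to flag the first step in a student’s written work that breaks equivalence with the original equation; the worked example and the deliberate error are invented.

```python
# Sketch: flag the first step of a student's work that no longer holds
# at the true solution of the original equation. Example data is invented.
from sympy import Eq, solve, sympify, symbols

x = symbols("x")

def first_wrong_step(steps):
    """Return the index of the first step that breaks equivalence, or None."""
    left, right = steps[0].split("=")
    solution = solve(Eq(sympify(left), sympify(right)), x)[0]
    for i, step in enumerate(steps[1:], start=1):
        left, right = step.split("=")
        if sympify(left).subs(x, solution) != sympify(right).subs(x, solution):
            return i
    return None

student_work = [
    "2*x + 3 = 11",   # original problem
    "2*x = 8",        # correct: subtracted 3 from both sides
    "x = 3",          # incorrect: should be x = 4
]

print(first_wrong_step(student_work))  # -> 2
```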

This road takes us down to places where we have:

  • Web-based learner profiles — including learner’s preferences, passions, interests, skills
  • Microlearning/badging/credentialing — likely using blockchain
  • Learning agents/bots to “contact” for assistance
  • Guidance for lifelong learning
  • More choice, more control

 

ibmwatson-oct2016

 

 

Also see:

  • First IBM Watson Education App for iPad Delivers Personalized Learning for K-12 Teachers and Students — from prnewswire.com
    Educators at Coppell Independent School District in Texas first to use new iPad app to tailor learning experiences to student’s interests and aptitudes
    Excerpts:
    With increasing demands on educators, teachers need tools that will enable them to better identify the individual needs of all students while designing learning experiences that engage and hold the students’ interest as they master the content. This is especially critical given that approximately one third of American students require remedial education when they enter college today, and current college attainment rates are not keeping pace with the country’s projected workforce needs1.  A view of academic and day-to-day updates in real time can help teachers provide personalized support when students need it.

    IBM Watson Element provides teachers with a holistic view of each student through a fun, easy-to-use and intuitive mobile experience that is a natural extension of their work. Teachers can get to know their students beyond their academic performance, including information about personal interests and important milestones students choose to share. For example, teachers can input notes when a student’s highly anticipated soccer match is scheduled, when another has just been named president for the school’s World Affairs club, and when another has recently excelled following a science project that sparked a renewed interest in chemistry.

    The unique “spotlight” feature in Watson Element provides advanced analytics that enables deeper levels of communication between teachers about their students’ accomplishments and progress. For example, if a student is excelling academically, teachers can spotlight that student, praising their accomplishments across the school district. Or, if a student received a top award in the district art show, a teacher can spotlight the student so their other teachers know about it.
 

accenture-futuregrowthaisept2016

accenture-futurechannelsgrowthaisept2016

 

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.

 

 

Also see:

 

 

 

9 Best Augmented Reality Smart Glasses 2016 — from appcessories.co.uk

Excerpt:

2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung, and Sony all come out with VR products of their own. But another closely related technology has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too far ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up from where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest Augmented Reality smart glasses around.

Some of them are already out while others are in development.

 

 

The holy grail of Virtual Reality: A complete suspension of disbelief — from labster.com by Marian Reed

Excerpt:

It’s no secret that we here at Labster are pretty excited about VR.  However, if we are to successfully introduce VR into education and training we need to know how to create VR simulations that unlock these new great ways of learning.

 

 

 

 

Computer science researchers create augmented reality education tool — from ucalgary.ca by Erin Guiltenane

Excerpt (emphasis DSC):

Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.

 

holocell-sept2016

 

 

 

Upload, Google, HTC and Udacity join forces for new VR education program — from  uploadvr.com

Excerpt:

Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.

According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take six to 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.

The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.

 

 

 

Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities

Excerpt:

German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.

Also related to this:

Auschwitz war criminals targeted with help of virtual reality — from jpost.com by

Excerpt:

German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.

Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.

 

 

 

How the UN thinks virtual reality could not only build empathy, but catalyze change, too — from yahoo.com by Lulu Chang

Excerpt:

Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.

Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films.  First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.

 

 

 

Occipital Wants to Turn iPhones into Mixed Virtual Reality Headsets — from next.reality.news by Adam Dachis

Excerpt:

If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.

 

occipital-10-2-16

 

 

‘The Body VR’ Brings Educational Tour Of The Human Body To HTC Vive Today — from uploadvr.com by Jamie Feltham on October 3rd, 2016

 Excerpt:

The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.

The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.

 

 

 

 

Virtual Reality Dazzles Harvard University — from universityherald.com

Excerpt:

For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and seeing unique underwater animals, others tried their hand at hockey, while others screamed as they got into a racecar and sped around a virtual speedway. All of them got a taste of what virtual and augmented reality look like.

All of this, of course, was not just about fun but about how augmented and virtual reality, in particular, can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks, with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.

Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”

“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.

 

 


 

Addendum on 10/6/16:

 

 

 

Top 200 Tools for Learning 2016: Overview — from c4lpt.co.uk by Jane Hart

Also see Jane’s:

  1. TOP 100 TOOLS FOR PERSONAL & PROFESSIONAL LEARNING (for formal/informal learning and personal productivity)
  2. TOP 100 TOOLS FOR WORKPLACE LEARNING (for training, e-learning, performance support and social collaboration)
  3. TOP 100 TOOLS FOR EDUCATION (for use in primary and secondary (K12) schools, colleges, universities and adult education.)

 

top200tools-2016-jane-hart

 

Also see Jane’s “Best of Breed 2016” where she breaks things down into:

  1. Instructional tools
  2. Content development tools
  3. Social tools
  4. Personal tools

 

 

 

 

From chatbots to Einstein, artificial intelligence as a service — from infoworld.com by Yves de Montcheuil

Excerpt:

The recent announcement of Salesforce Einstein — dubbed “artificial intelligence for everyone” — sheds new light on the new and pervasive usage of artificial intelligence in every aspect of businesses.

 

Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customized for every single customer, and it will learn, self-tune, and get smarter with every interaction and additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.

 


Chatbots, or conversational bots, are the “other” trending topic in the field of artificial intelligence. At the juncture of consumer and business, they provide the ability for an AI-based system to interact with users through a headless interface. It does not matter whether a messaging app is used, or a speech-to-text system, or even another app — the chatbot is front-end agnostic.

Since the user does not have the ability to provide context around the discussion, he just asks questions in natural language to an AI-driven backend that is tasked with figuring out this context and finding the right answer.
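As an illustration of that front-end-agnostic idea (a toy sketch, not how Einstein or any production bot actually works), here is a minimal backend that maps a natural-language utterance to an intent with simple keyword matching; the intents and canned replies are invented.

```python
# Toy sketch of a front-end-agnostic chatbot backend: whatever channel
# delivers the text (Slack, SMS, speech-to-text), the same function maps
# the utterance to an intent and returns an answer. Data is invented.
INTENTS = {
    "office_hours": (["hours", "open", "closed"], "The office is open 9am to 5pm, Monday to Friday."),
    "reset_password": (["password", "reset", "locked"], "You can reset your password at the account portal."),
}

def handle_message(utterance: str) -> str:
    """Pick the intent whose keywords best match the utterance."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for name, (keywords, _) in INTENTS.items():
        score = len(words & set(keywords))
        if score > best_score:
            best_intent, best_score = name, score
    if best_intent is None:
        return "Sorry, I don't know how to help with that yet."
    return INTENTS[best_intent][1]

print(handle_message("I'm locked out and need to reset my password"))
```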

 

 

IBM is launching a much-awaited ‘Watson’ recruiting tool — from eremedia.com by Todd Raphael

Excerpt:

For many months IBM has gone to recruiting-industry conferences to say that the famous Watson will be at some point used for talent-acquisition, but that it hasn’t happened quite yet.

It’s here.

IBM is first using Watson for its RPO customers, and then rolling it out as a product for the larger community, perhaps next spring. One of my IBM contacts, Recruitment Innovation Global Leader Yates Baker, tells me that the current version is a work in progress like the first iPhone (or perhaps like that Siri-for-recruiting tool).

There are three parts: recruiting, marketing, and sourcing.

 

watsonrecruitingtool-sept2016

 

 

Apple’s Siri: A Lot Smarter, but Still Kind of Dumb — from wsj.com by Joanna Stern
With the new MacOS and Apple’s AirPods, Siri’s more powerful than ever, but still not as good as some competitors

Excerpt:

With the new iOS 10, Siri can control third-party apps, like Uber and WhatsApp. With the release of MacOS Sierra on Tuesday, Siri finally lands on the desktop, where it can take care of basic operating system tasks, send emails and more. With WatchOS 3 and the new Apple Watch, Siri is finally faster on the wrist. And with Apple’s Q-tip-looking AirPods arriving in October, Siri can whisper sweet nothings in your inner ear with unprecedented wireless freedom. Think Joaquin Phoenix’s earpiece in the movie “Her.”

The groundwork is laid for an AI assistant to stake a major claim in your life, and finally save you time by doing menial tasks. But the smarter Siri becomes in some places, the dumber it seems in others—specifically compared with Google’s and Amazon’s voice assistants. If I hear “I’m sorry, Joanna, I’m afraid I can’t answer that” one more time…

 

 

 

IBM Research and MIT Collaborate to Advance Frontiers of Artificial Intelligence in Real-World Audio-Visual Comprehension Technologies — from prnewswire.com
Cross-disciplinary research approach will use insights from brain and cognitive science to advance machine understanding

Excerpt:

YORKTOWN HEIGHTS, N.Y., Sept. 20, 2016 /PRNewswire/ — IBM Research (NYSE: IBM) today announced a multi-year collaboration with the Department of Brain & Cognitive Sciences at MIT to advance the scientific field of machine vision, a core aspect of artificial intelligence. The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension’s (BM3C) goal will be to develop cognitive computing systems that emulate the human ability to understand and integrate inputs from multiple sources of audio and visual information into a detailed computer representation of the world that can be used in a variety of computer applications in industries such as healthcare, education, and entertainment.

The BM3C will address technical challenges around both pattern recognition and prediction methods in the field of machine vision that are currently impossible for machines alone to accomplish. For instance, humans watching a short video of a real-world event can easily recognize and produce a verbal description of what happened in the clip as well as assess and predict the likelihood of a variety of subsequent events, but for a machine, this ability is currently impossible.

 

 

Satya Nadella on Microsoft’s new age of intelligence — from fastcompany.com by Harry McCracken
How the software giant aims to tie everything from Cortana to Office to HoloLens to Azure servers into one AI experience.

Excerpt:

“Microsoft was born to do a certain set of things. We’re about empowering people in organizations all over the world to achieve more. In today’s world, we want to use AI to achieve that.”

That’s Microsoft CEO Satya Nadella, crisply explaining the company’s artificial-intelligence vision to me this afternoon shortly after he hosted a keynote at Microsoft’s Ignite conference for IT pros in Atlanta. But even if Microsoft only pursues AI opportunities that it considers to be core to its mission, it has a remarkably broad tapestry to work with. And the examples that were part of the keynote made that clear.

 

 

 

 

IBM Foundation collaborates with AFT and education leaders to use Watson to help teachers — from finance.yahoo.com

Excerpt:

ARMONK, N.Y., Sept. 28, 2016 /PRNewswire/ — Teachers will have access to a new, first-of-its-kind, free tool using IBM’s innovative Watson cognitive technology that has been trained by teachers and designed to strengthen teachers’ instruction and improve student achievement, the IBM Foundation and the American Federation of Teachers announced today.

Hundreds of elementary school teachers across the United States are piloting Teacher Advisor with Watson – an innovative tool by the IBM Foundation that provides teachers with a complete, personalized online resource. Teacher Advisor enables teachers to deepen their knowledge of key math concepts, access high-quality vetted math lessons and acclaimed teaching strategies and gives teachers the unique ability to tailor those lessons to meet their individual classroom needs.

Litow said there are plans to make Teacher Advisor available to all elementary school teachers across the U.S. before the end of the year.

 

 

In this first phase, Teacher Advisor offers hundreds of high-quality vetted lesson plans, instructional resources, and teaching techniques, which are customized to meet the needs of individual teachers and the particular needs of their students.

 

 

Also see:

teacheradvisor-sept282016

 

Educators can also access high-quality videos on teaching techniques to master key skills and bring a lesson or teaching strategy to life into their classroom.

 

 

From DSC:
Today’s announcement involved personalization and giving customized directions, and it caused my mind to go in a slightly different direction. (IBM, Google, Microsoft, Apple, Amazon, and others like Smart Sparrow are likely also thinking about this type of direction as well. Perhaps they’re already there…I’m not sure.)

But given the advancements in machine learning/cognitive computing (where example applications include optical character recognition (OCR) and computer vision), how much longer will it be before software can remotely or locally “see” what a third grader wrote down for a given math problem and check that work? If the answer was incorrect, the algorithms will likely know where the student went wrong. The software will be able to ascertain what the student did wrong and then show them how the problem should be solved (either via hints or by walking through the entire solution — per the teacher’s instructions/admin settings). Perhaps, via natural language processing, this process could be verbalized as well.
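As a very rough sketch of the OCR piece of this idea (purely hypothetical, and far simpler than what real handwriting recognition would require), the snippet below uses the pytesseract OCR wrapper to read a scanned worksheet and check the final answer; the file name, the expected answer, and the assumption that answers are written as “x = <number>” are all invented for illustration.

```python
# Hypothetical sketch: read a scanned worksheet with OCR and check the
# student's final answer. Requires the Tesseract binary to be installed.
import re

import pytesseract
from PIL import Image

def check_written_answer(image_path: str, expected: int) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"x\s*=\s*(-?\d+)", text)
    if not match:
        return "Couldn't find an answer of the form 'x = <number>' on the page."
    answer = int(match.group(1))
    if answer == expected:
        return "Answer is correct."
    return f"Answer {answer} is incorrect; hint: re-check the last step."

print(check_written_answer("worksheet_scan.png", expected=4))
```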

Further questions/thoughts/reflections then came to my mind:

  • Will we have bots that teachers can use to teach different subjects? (“Watson may even ask the teacher additional questions to refine its response, honing in on what the teacher needs to address certain challenges.”)
  • Will we have bots that students can use to get the basics of a given subject/topic/equation?
  • Will instructional designers — and/or trainers in the corporate world — need to modify their skillsets to develop these types of bots?
  • Will teachers — as well as schools of education in universities and colleges — need to modify their toolboxes and their knowledgebases to take advantage of these sorts of developments?
  • How might the corporate world take advantage of these trends and technologies?
  • Will MOOCs begin to incorporate these sorts of technologies to aid in personalized learning?
  • What sorts of delivery mechanisms could be involved? Will we be tapping into learning-related bots from our living rooms or via our smartphones?

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

Also see:

 

 

 

LinkedIn announced several things yesterday (9/22/16). Below are some links to these announcements:


Introducing LinkedIn Learning, a Better Way to Develop Skills and Talent — from learning.linkedin.com

Excerpt (emphasis DSC):

Today, we are thrilled to announce the launch of LinkedIn Learning, an online learning platform enabling individuals and organizations to achieve their objectives and aspirations. Our goal is to help people discover and develop the skills they need through a personalized, data-driven learning experience.

LinkedIn Learning combines the industry-leading content from Lynda.com with LinkedIn’s professional data and network. With more than 450 million member profiles and billions of engagements, we have a unique view of how jobs, industries, organizations and skills evolve over time. From this, we can identify the skills you need and deliver expert-led courses to help you obtain those skills. We’re taking the guesswork out of learning.

The pressure on individuals and organizations to adapt to change has never been greater. The skills that got you to where you are today are not the skills to prepare you for tomorrow. In fact, the shelf-life of skills is less than five years, and many of today’s fastest growing job categories didn’t even exist five years ago.

To tackle these challenges, LinkedIn Learning is built on three core pillars:

Data-driven personalization: We get the right course in front of you at the right time. Using the intelligence that comes with our network, LinkedIn Learning creates personalized recommendations, so learners can efficiently discover which courses are most relevant to their goals or job function. Organizations can use LinkedIn insights to customize multi-course Learning Paths to meet their specific needs. We also provide robust analytics and reporting to help you measure learning effectiveness.

 

linkedinlearning-announced-9-22-16

 

 

LinkedIn’s first big move since the $26.2 billion Microsoft acquisition is basically a ‘school’ for getting a better job — from finance.yahoo.com

Excerpt:

Today, LinkedIn has launched LinkedIn Learning — its first major product launch since the news last June that Microsoft would be snapping up the social network for $26.2 billion in a deal that has yet to close.

LinkedIn Learning takes the online skills training classes the company got in its 2015 acquisition of Lynda.com for $1.5 billion.

The idea, says LinkedIn CEO Jeff Weiner, is to help its 433 million-plus members get the skills they need to stay relevant in a world that’s increasingly reliant on digital skills.

 

 

 

LinkedIn’s New Learning Platform to Recommend Lynda Courses for Professionals — from edsurge.com by Marguerite McNeal

Excerpt:

Companies will also be able to create their own “learning paths”—bundles of courses around a particular topic—to train employees. A chief learning officer, for instance, might compile a package of courses in product management and ask 10 employees to complete the assignments over the course of a few months.

LinkedIn is also targeting higher-education institutions with the new offering. It is marketing the solution as a professional development tool that can help faculty learn how to use classroom tools such as Moodle, Adobe Captivate and learning management systems.

 

“Increasingly predictions of tech displacing workers are coming to fruition,” he added. “The idea that you can study a skill once and have a job for the rest of your life—those days are over.”

 

 

 

LinkedIn Learning for higher education

 

 

 

Accelerating LinkedIn’s Vision Through Innovation — from slideshare.net

linkeinlearning-sept2016

 

linkeinlearning2-sept2016

 

 

LinkedIn adding new training features, news feeds and ‘bots’ — from finance.yahoo.com

Excerpt:

LinkedIn is also adding more personalized features to its news feed, where members can see articles and announcements posted by their professional contacts. A new “Interest Feed” will offer a collection of articles, posts and opinion pieces on major news events or current issues.

 

 

 

 

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

exponentialpaceofchange-danielchristiansep2016

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

 

 

 