Introducing Deep Learning and Neural Networks — Deep Learning for Rookies — from medium.com by Nahua Kang

Excerpts:

Here’s a short list of general tasks that deep learning can perform in real situations:

  1. Identify faces (or more generally image categorization)
  2. Read handwritten digits and texts
  3. Recognize speech (no more transcribing interviews yourself)
  4. Translate languages
  5. Play computer games
  6. Control self-driving cars (and other types of robots)

And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!

There are already many great courses, tutorials, and books on the internet covering this topic, such as (not exhaustive or in any specific order):

  1. Michael Nielsen’s Neural Networks and Deep Learning
  2. Geoffrey Hinton’s Neural Networks for Machine Learning
  3. Goodfellow, Bengio, & Courville’s Deep Learning
  4. Andrew Trask’s Grokking Deep Learning
  5. Francois Chollet’s Deep Learning with Python
  6. Udacity’s Deep Learning Nanodegree (not free but high quality)
  7. Udemy’s Deep Learning A-Z ($10–$15)
  8. Stanford’s CS231n and CS224n
  9. Siraj Raval’s YouTube channel

The list goes on and on. David Venturi has a post for freeCodeCamp that lists many more resources. Check it out here.


What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
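The Blockchain-based tracking idea in the list above can be sketched, in miniature, as a hash-chained ledger of completed modules: each record stores the hash of the record before it, so any later tampering with an earlier entry is detectable. This is an illustrative sketch only (the function names and record fields are invented here); a production system would use an actual distributed ledger.

```python
import hashlib
import json

def add_record(chain, module, learner):
    """Append a completed module to a learner's hash-chained profile.
    Each record stores the hash of the previous one, so any later
    edit to an earlier record breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"learner": learner, "module": module, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; return False if any record was altered."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("learner", "module", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

profile = []
add_record(profile, "Intro to Python", "jane@example.com")
add_record(profile, "Data Structures", "jane@example.com")
print(verify(profile))  # True
profile[0]["module"] = "Faked Course"
print(verify(profile))  # False
```

A learner's web-based profile would then simply render the verified chain; any record that fails verification could be flagged rather than displayed.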

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
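The built-in RSS aggregator mentioned above is straightforward to prototype with Python's standard library. Here is a minimal sketch that pulls (title, link) pairs out of an RSS 2.0 feed; the sample feed and its URLs are made up for illustration.

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 feed --
    the kind of 'stream of content' the platform would aggregate."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

sample = """<rss version="2.0"><channel>
  <title>Learning Stream</title>
  <item><title>Intro to NLP</title><link>https://example.com/nlp</link></item>
  <item><title>Voice UIs</title><link>https://example.com/voice</link></item>
</channel></rss>"""

for title, link in parse_rss(sample):
    print(title, "->", link)
```

A real aggregator would fetch feeds over HTTP on a schedule and merge the items into the learner's dashboard; the parsing step, though, is no more complicated than this.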

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 
 

Australian start-up taps IBM Watson to launch language translation earpiece — from prnewswire.com
World’s first available independent translation earpiece, powered by AI to be in the hands of consumers by July

Excerpts:

SYDNEY, June 12, 2017 /PRNewswire/ — Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds, being the first of its kind to hit global markets next month.

Unveiled at last week’s United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland, the Translate One2One earpiece supports translations across English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German and Chinese. Available to purchase today for delivery in July, the earpiece carries a price tag of $179 USD, and is the first independent translation device that doesn’t rely on Bluetooth or Wi-Fi connectivity.


From DSC:
How much longer before this sort of technology gets integrated into videoconferencing and transcription tools that are used in online-based courses — enabling global learning at a scale never seen before? (Or perhaps NLP-based tools are already being integrated into global MOOCs and the like…not sure.) It would surely allow for us to learn from each other in a variety of societies throughout the globe.

 

 

 

From DSC:
In reviewing the item below, I wondered:

How should students — as well as Career Services Groups/Departments within institutions of higher education — respond to the growing use of artificial intelligence (AI) in peoples’ job searches?

My take on it? Each student needs to have a solid online-based footprint — such as offering one’s own streams of content via a WordPress-based blog, one’s Twitter account, and one’s LinkedIn account. That is, each student has to be out there digitally, not just physically. (Though I suspect having face-to-face conversations and interactions will always be an incredibly powerful means of obtaining jobs as well. But if this trend picks up steam, one’s online-based footprint becomes all the more important to finding work.)

 




How AI is changing your job hunt
 — by Jennifer Alsever

Excerpt (emphasis DSC):

The solution appeared in the form of artificial intelligence software from a young company called Interviewed. It speeds the vetting process by providing online simulations of what applicants might do on their first day as an employee. The software does much more than grade multiple-choice questions. It can capture not only so-called book knowledge but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using “please” and “thank you,” for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” claims Freedman, whose company now employs 45 people. “Forty-eight hours later, we’ve hired someone.” It’s not perfect, he says, but it’s faster and better than the human way.

It isn’t just startups using such software; corporate behemoths are implementing it too. Artificial intelligence has come to hiring.

Predictive algorithms and machine learning are fast emerging as tools to identify the best candidates.
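The excerpt above describes scoring candidates partly on word choice (e.g., a penchant for "please" and "thank you" as an empathy signal). As a deliberately toy illustration (Interviewed's actual features and models are not public, and the marker list below is invented), such a signal could be as simple as a courtesy-marker frequency:

```python
import re

# Hypothetical courtesy markers; the real feature set is not public.
COURTESY_MARKERS = {"please", "thank you", "thanks", "appreciate"}

def courtesy_score(answer):
    """Courtesy-marker occurrences per 100 words -- a crude stand-in
    for the 'which words he or she favors' signal in the excerpt."""
    text = answer.lower()
    words = re.findall(r"[a-z']+", text)
    if not words:
        return 0.0
    hits = sum(text.count(marker) for marker in COURTESY_MARKERS)
    return 100.0 * hits / len(words)

print(courtesy_score("Thanks, and please send details."))  # 40.0
```

A real system would feed many such features, not one, into a trained model; the point here is only that "measuring word choice" is mechanically simple, which is part of why this kind of screening scales to thousands of candidates.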

 

 



Addendum on 6/7/17:

 

 

 



Addendum on 6/15/17:

  • Want a job? It may be time to have a chat with a bot — from sfchronicle.com by Nicholas Cheng
    Excerpt:
    “The future is AI-based recruitment,” Mya CEO Eyal Grayevsky said. Candidates who were being interviewed through a chat couldn’t tell that they were talking to a bot, he added — even though the company isn’t trying to pass its bot off as human.

    A 2015 study by the National Bureau of Economic Research surveyed 300,000 people and found that those who were hired by a machine, using algorithms to match them to a job, stayed in their jobs 15 percent longer than those who were hired by human recruiters.

    A report by the McKinsey Global Institute estimates that more than half of human resources jobs may be lost to automation, though it did not give a time period for that shift.

    “Recruiting jobs will definitely go away,” said John Sullivan, who teaches management at San Francisco State University.

 

 

From DSC:
There are now more than 12,000 skills on Amazon’s new platform — Alexa. I continue to wonder…what will this new platform mean/deliver to societies throughout the globe?


 

From this Alexa Skills Kit page:

What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.

What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

You can build and host most skills for free using Amazon Web Services (AWS).
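Under the hood, an Alexa skill receives a JSON request and returns a JSON response envelope; the official ASK SDKs wrap this exchange. A minimal hand-rolled handler, sketched here for illustration (the intent name and speech text are invented; field names follow Alexa's public request/response format), looks like:

```python
def handle_request(event):
    """Minimal Alexa-style request handler: route on request type
    and return a plain-text speech response in the Alexa JSON envelope."""
    req_type = event.get("request", {}).get("type", "")
    if req_type == "LaunchRequest":
        speech = "Welcome to the demo skill. Ask me for a fact."
    elif req_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        speech = f"You invoked the {intent} intent."
    else:
        speech = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": req_type not in ("LaunchRequest", "IntentRequest"),
        },
    }

resp = handle_request({"request": {"type": "LaunchRequest"}})
print(resp["response"]["outputSpeech"]["text"])
```

In practice this function would be deployed as an AWS Lambda handler and the intents would be declared in the skill's interaction model; the request/response shape is the same either way.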


Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more


From DSC:

Now we’re seeing a major competition among the heavy hitters to own one’s living room, kitchen, and more: voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

 

 

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

 

 

 


 

 



 

Addendum on 5/10/17:

 



 

 

Microsoft Cortana-Powered Speaker Challenges Amazon’s Echo With Skype Calls — from foxbusiness.com by Jay Greene

Excerpt:

Microsoft Corp. is hoping to challenge Amazon.com Inc.’s Echo smart speaker for a spot on the kitchen counter with a device from Samsung Electronics Co. that can make phone calls. The Invoke, which will debut this fall, comes more than two years after the release of the Echo, which has sold more than 11 million units through late last year, according to estimates by Morgan Stanley. It also will compete with Alphabet Inc.’s Google Home, which was released last fall. The voice-controlled Invoke, made by Samsung’s Harman Kardon unit, will use Microsoft’s Cortana digital assistant to take commands.

 

 

Microsoft Screams ‘Me Too’ With Cortana-Powered Rival to Amazon Echo and Google Home — from gizmodo.com by Alex Cranz

Excerpt:

With Microsoft’s Build developer conference just two days away, the company has revealed one of the most anticipated announcements from the event: A new Cortana-powered speaker made by German audio giant Harman Kardon.

Now, it’s fair to see this speaker for what it is: An answer to the Google Home and Amazon Echo. Both assistant-powered speakers are already in homes across our great nation, listening to your noises, noting your habits, and in general invading your lives under the guise of smart home helpfulness. The new Microsoft speaker, dubbed “Invoke,” will presumably do the good stuff, like giving you updates on the weather and letting you turn on some soothing jazz for your dog with just a spoken command. Microsoft is also hoping that partnering with Harman Kardon means its speaker can avoid one of the bigger problems with these devices—their tendency to sound cheap and tinny.

 

 

 

 

Harman Kardon’s Invoke speaker is a Cortana-powered take on an Amazon Echo — from theverge.com by Chaim Gartenberg

Excerpt:

As teased earlier, the Invoke speaker will offer 360-degree speakers, Skype calling, and smart home control all through voice commands. Design-wise, the Invoke strongly resembles the Amazon Echo that it’s meant to compete with: both offer a similar cylindrical aluminum shape, light ring, and a seven-microphone array. That said, Harman Kardon seems to be taking the “speaker” portion of its functionality more seriously than Amazon does, with the Invoke offering three woofers and three tweeters (compared to the Echo, which offers just one of each). Microsoft is also highlighting the Invoke’s ability to make and receive Skype calls to other Skype devices as well as cellphones and landlines, which is an interesting addition to a home assistant.

 

 

From DSC:
Here we see yet another example of the increasing use of voice as a means of communicating with our computing-related devices. AI-based applications continue to develop.

 

 

 

 

 

Google Home’s assistant can now recognize different voices — from cnbc.com

Excerpt:

SAN FRANCISCO (AP) — Google’s voice-activated assistant can now recognize who’s talking to it on Google’s Home speaker.

An update released Thursday enables Home’s built-in assistant to learn the different voices of up to six people, although they can’t all be talking to the internet-connected speaker at the same time.

Distinguishing voices will allow Home to be more personal in some of its responses, depending on who triggers the assistant with the phrase, “OK Google” or “Hey Google.”

For instance, once Home is trained to recognize a user named Joe, the assistant will automatically be able to tell him what traffic is like on his commute, list events on his daily calendar or even play his favorite songs. Then another user named Jane could get similar information from Home, but customized for her.
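The per-voice personalization described above boils down to dispatching on a recognized speaker profile. A toy sketch follows (all names and data are made up; this is not Google's implementation, and the hard part, actually recognizing the voice, is assumed away):

```python
# Toy per-voice personalization: map a recognized speaker to their data.
PROFILES = {
    "joe": {"commute": "20 minutes via I-96", "favorite": "jazz"},
    "jane": {"commute": "35 minutes via US-131", "favorite": "folk"},
}

def respond(recognized_voice, query):
    """Return a personalized answer for the recognized speaker."""
    profile = PROFILES.get(recognized_voice)
    if profile is None:
        return "I don't recognize that voice yet."
    if query == "commute":
        return f"Your commute is {profile['commute']}."
    if query == "music":
        return f"Playing your favorite {profile['favorite']} station."
    return "Sorry, I can't help with that."

print(respond("joe", "commute"))  # Your commute is 20 minutes via I-96.
```

The interesting engineering is in the voice-identification step itself; once a speaker is identified, personalization is ordinary profile lookup.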


The Best Amazon Alexa Skills — from in.pcmag.com by Eric Griffith

Example skills:

 

WebMD

 

 

5 Alexa skills to try this week — from venturebeat.com by Khari Johnson

Excerpt:

Below are five noteworthy Amazon Alexa skills worth trying, chosen from New, Most Enabled Skills, Food and Drink, and Customer Favorites categories in the Alexa Skills Marketplace.

 

From DSC:
I’d like to see how the Verse of the Day skill performs.

 

 

 


Also see:


 

 


From DSC:
This topic reminds me of a slide from
my NGLS 2017 Conference presentation:

 
© 2016 Learning Ecosystems