Amazon Opening Store That Will Eliminate Checkout — and Lines — from bloomberg.com by Jing Cao
At Amazon Seattle location items get charged to Prime account | New technology combines artificial intelligence and sensors

Excerpt:

Amazon.com Inc. unveiled technology that will let shoppers grab groceries without having to scan and pay for them — in one stroke eliminating the checkout line.

The company is testing the new system at what it’s calling an Amazon Go store in Seattle, which will open to the public early next year. Customers will be able to scan their phones at the entrance using a new Amazon Go mobile app. Then the technology will track what items they pick up or even return to the shelves and add them to a virtual shopping cart in real time, according to a video Amazon posted on YouTube. Once the customers exit the store, they’ll be charged on their Amazon account automatically.
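To make the flow concrete, here is a toy sketch of the kind of per-shopper “virtual cart” the video describes: sensor events add or remove items, and the cart is totaled when the shopper exits. (This is purely illustrative: Amazon hasn’t published its implementation, and the event format, item names, and prices here are invented.)

```python
from collections import Counter

PRICES = {"milk": 2.49, "bread": 1.99, "salad": 4.29}  # invented catalog

def run_visit(events):
    """Apply pick-up / put-back events to a virtual cart, then total it."""
    cart = Counter()
    for action, item in events:
        if action == "pick_up":
            cart[item] += 1
        elif action == "put_back" and cart[item] > 0:
            cart[item] -= 1
    total = sum(PRICES[item] * count for item, count in cart.items())
    return cart, round(total, 2)

# One shopper's visit: picks up milk and a salad, then puts the salad back.
events = [("pick_up", "milk"), ("pick_up", "salad"), ("put_back", "salad")]
cart, total = run_visit(events)
print(dict(cart), total)   # {'milk': 1, 'salad': 0} 2.49
```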


Amazon Introduces ‘Amazon Go’ Retail Stores, No Checkout, No Lines — from investors.com

Excerpt:

Online retail king Amazon.com (AMZN) is taking dead aim at the physical-store world Monday, introducing Amazon Go, a retail convenience store format it is developing that will use computer vision and deep-learning algorithms to let shoppers just pick up what they want and exit the store without any checkout procedure.

Shoppers will merely need to tap the Amazon Go app on their smartphones, and their virtual shopping carts will automatically tabulate what they owe, deduct that amount from their Amazon accounts, and send them a receipt. It’s what the company has deemed “just walk out technology,” which it said is based on the same technology used in self-driving cars. It’s certain to up the ante in the company’s competition with Wal-Mart (WMT), Target (TGT) and the other retail leaders.


Google DeepMind Makes AI Training Platform Publicly Available — from bloomberg.com by Jeremy Kahn
Company is increasingly embracing open-source initiatives | Move comes after rival Musk’s OpenAI made its robot gym public

Excerpt:

Alphabet Inc.’s artificial intelligence division Google DeepMind is making the maze-like game platform it uses for many of its experiments available to other researchers and the general public.

DeepMind is putting the entire source code for its training environment — which it previously called Labyrinth and has now renamed DeepMind Lab — on the open-source repository GitHub, the company said Monday. Anyone will be able to download the code and customize it to help train their own artificial intelligence systems. They will also be able to create new game levels for DeepMind Lab and upload these to GitHub.
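For a sense of what training an agent against such an environment involves, here is a minimal random-agent loop. The Env class below is a self-contained stand-in so the sketch runs on its own; the real DeepMind Lab Python API (documented in the GitHub repo) follows a similar reset/step pattern but returns pixel observations.

```python
import random

class Env:
    """Stand-in for a game level: rewards arrive as the agent acts."""
    def reset(self):
        self.steps_left = 100

    def step(self, action):
        self.steps_left -= 1
        reward = 1.0 if random.random() < 0.05 else 0.0  # e.g., found an apple
        return reward, self.steps_left == 0               # (reward, done)

env = Env()
env.reset()
episode_return, done = 0.0, False
while not done:
    # A learning agent would pick actions from observations; we act randomly.
    action = random.choice(["forward", "backward", "look_left", "look_right"])
    reward, done = env.step(action)
    episode_return += reward
print(f"episode return: {episode_return}")
```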


Related:
Alphabet DeepMind is inviting developers into the digital world where its AI learns to explore — from qz.com by Dave Gershgorn


After Retail Stumble, Beacons Shine From Banks to Sports Arenas — from bloomberg.com by Olga Kharif
Shipments of the devices expected to grow to 500 million

Excerpt (emphasis DSC):

Beacon technology, which was practically left for dead after failing to deliver on its promise to revolutionize the retail industry, is making a comeback.

Beacons are puck-size gadgets that can send helpful tips, coupons and other information to people’s smartphones through Bluetooth. They’re now being used in everything from bank branches and sports arenas to resorts, airports and fast-food restaurants. In the latest sign of the resurgence, Mobile Majority, an advertising startup, said on Monday that it was buying Gimbal Inc., a beacon maker it bills as the largest independent source of location data other than Google and Apple Inc.

Several recent developments have sparked the latest boom. Companies like Google parent Alphabet Inc. are making it possible for people to use the feature without downloading any apps, which had been a major barrier to adoption, said Patrick Connolly, an analyst at ABI. Introduced this year, Google Nearby Notifications lets developers tie an app or a website to a beacon to send messages to consumers even when they have no app installed.

In June, meanwhile, Cupertino, California-based Mist Systems began shipping a software-based product that simplifies beacon deployment. Instead of placing 10 beacons on walls and ceilings, for example, managers using Mist can install one device every 2,000 feet (610 meters) and then designate various points on a digital floor plan as virtual beacons, which can be moved with a click of a mouse.
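A note on how the no-app experience works under the hood: Nearby Notifications is built on Google’s open Eddystone beacon format, whose Eddystone-URL frame broadcasts a compressed URL that the phone’s operating system can surface directly. Here is a minimal sketch of that frame layout, following the public spec (the optional URL-suffix compression codes are omitted for brevity):

```python
SCHEMES = {"http://www.": 0x00, "https://www.": 0x01,
           "http://": 0x02, "https://": 0x03}

def eddystone_url_frame(url: str, tx_power: int = -20) -> bytes:
    """Build the service-data payload for an Eddystone-URL advertisement."""
    # Match the longest scheme prefix first ("https://www." before "https://").
    for prefix, code in sorted(SCHEMES.items(), key=lambda s: -len(s[0])):
        if url.startswith(prefix):
            body = url[len(prefix):].encode("ascii")
            # 0x10 = URL frame type, then calibrated TX power, scheme, URL.
            frame = bytes([0x10, tx_power & 0xFF, code]) + body
            if len(frame) > 20:   # spec limit for a single frame
                raise ValueError("URL too long; use a shortener")
            return frame
    raise ValueError("unsupported URL scheme")

print(eddystone_url_frame("https://example.org").hex())
```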


Google’s Hand-Fed AI Now Gives Answers, Not Just Search Results — from wired.com by Cade Metz

Excerpt:

Ask the Google search app “What is the fastest bird on Earth?,” and it will tell you.

“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”

That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.

Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot — the information you’re looking for.
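At a greatly simplified level, the extraction step amounts to scoring the candidate sentences on a page against the question and keeping the best one. Real systems use trained deep nets; this toy sketch substitutes TF-IDF-weighted word overlap just to make the idea concrete:

```python
import math
import re
from collections import Counter

def sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def answer(question, page):
    """Return the sentence that best matches the question's rarer words."""
    sents = sentences(page)
    df = Counter(w for s in sents for w in tokens(s))
    idf = {w: math.log(len(sents) / df[w]) for w in df}
    q = tokens(question)
    return max(sents, key=lambda s: sum(idf[w] for w in q & tokens(s)))

page = ("Many birds migrate over long distances. "
        "The ostrich is the fastest runner among birds. "
        "The peregrine falcon, the fastest bird on Earth, "
        "has a recorded dive speed of 389 km/h.")
print(answer("What is the fastest bird on Earth?", page))  # falcon sentence
```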


Deep Learning in Production at Facebook — from re-work.co by Katie Pollitt

Excerpt:

Facebook is powered by machine learning and AI. From advertising relevance, news feed and search ranking to computer vision, face recognition, and speech recognition, it runs ML models at massive scale, computing trillions of predictions every day.

At the 2016 Deep Learning Summit in Boston, Andrew Tulloch, Research Engineer at Facebook, talked about some of the tools and tricks Facebook uses for scaling both the training and deployment of some of its deep learning models. He also covered some useful libraries the company has open-sourced for production-oriented deep learning applications. Tulloch’s session can be watched in full below.


The Artificial Intelligence Gold Rush — from foresightr.com by Mark Vickers
Big companies, venture capital firms and governments are all banking on AI

Excerpt:

Let’s start with some of the brand-name organizations laying down big bucks on artificial intelligence.

  • Amazon: Sells the successful Echo home speaker, which comes with the personal assistant Alexa.
  • Alphabet (Google): Uses deep learning technology to power Internet searches and developed AlphaGo, an AI that beat the world champion in the game of Go.
  • Apple: Developed the popular virtual assistant Siri and is working on other phone-related AI applications, such as facial recognition.
  • Baidu: Wants to use AI to improve search, recognize images of objects and respond to natural language queries.
  • Boeing: Works with Carnegie Mellon University to develop machine learning capable of helping it design and build planes more efficiently.
  • Facebook: Wants to create the “best AI lab in the world.” Has its personal assistant, M, and focuses heavily on facial recognition.
  • IBM: Created the Jeopardy-winning Watson AI and is leveraging its data analysis and natural language capabilities in the healthcare industry.
  • Intel: Has made acquisitions to help it build specialized chips and software to handle deep learning.
  • Microsoft: Works on chatbot technology and acquired SwiftKey, which predicts what users will type next.
  • Nokia: Has introduced various machine learning capabilities to its portfolio of customer-experience software.
  • Nvidia: Builds computer chips customized for deep learning.
  • Salesforce: Took first place at the Stanford Question Answering Dataset, a test of machine learning and comprehension, and has developed the Einstein model that learns from data.
  • Shell: Launched a virtual assistant to answer customer questions.
  • Tesla Motors: Continues to work on self-driving automobile technologies.
  • Twitter: Created an AI-development team called Cortex and acquired several AI startups.


IBM Watson and Education in the Cognitive Era — from i-programmer.info by Nikos Vaggalis

Excerpt:

IBM’s seemingly ubiquitous Watson is now infiltrating education, through AI-powered software that ‘reads’ the needs of individual students in order to engage them through tailored learning approaches.

This is not to be taken lightly, as it opens the door to a new breed of technologies that will spearhead the education or re-education of the workforce of the future.

As outlined in the 2030 report, although robots and AI will displace a big chunk of the workforce, they will also play a major role in creating job opportunities as never before. In such a competitive landscape, workers of all kinds, white collar or blue collar to begin with, should come readied with new, versatile and contemporary skills.

The point is that the very AI that leaves someone jobless will also help that person re-adapt to a new job’s requirements. It will also prepare the new generations, through optimal methodologies, in a way that could once more give meaning to an aging and counterproductive schooling system, one that leaves students’ skills disengaged from the needs of industry and that still segregates students into ‘good’ and ‘bad’. Might it be that ‘bad’ students become just that due to the system’s inability to stimulate their interest?


From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as part of the “connected/smart TVs” of the future? Hmm….


NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
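Skills like this are, at bottom, small web services: Alexa sends a JSON request naming the intent it heard, and the skill returns JSON describing what to say. A hedged sketch of a minimal 2016-era custom-skill handler (the intent name and wording below are invented, not NASA’s):

```python
def lambda_handler(event, context):
    """Minimal Alexa Skills Kit handler (2016-era JSON in, JSON out)."""
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "MarsFactIntent":                 # hypothetical intent name
        text = "Mars is the fourth planet from the sun."
    else:
        text = "Try asking me for a fact about Mars."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

# Local smoke test with a minimal request shape:
fake_request = {"request": {"type": "IntentRequest",
                            "intent": {"name": "MarsFactIntent"}}}
print(lambda_handler(fake_request, None)["response"]["outputSpeech"]["text"])
```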

[Image: NASA’s Mars skill for Amazon Alexa, announced 11/29/16]


Also see:


What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser


Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, Zendesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.


Amazon announces three new AI services, including a text-to-speech service, Amazon Polly — by D.B. Hebbard


AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai
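
In practice, “simply use an API call” looks roughly like the boto3 sketch below. These are standard client calls; the region, bucket, and file names are placeholders, and AWS credentials must already be configured.

```python
import boto3

# Text-to-speech with Amazon Polly:
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(Text="Hello from Amazon Polly!",
                                 OutputFormat="mp3",
                                 VoiceId="Joanna")
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Image analysis with Amazon Rekognition (photo.jpg sitting in an S3 bucket):
rekognition = boto3.client("rekognition", region_name="us-east-1")
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
    MaxLabels=5,
    MinConfidence=80.0,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```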


Some brief reflections from DSC:

Technologies and services like the ones above will likely be used by colleges, universities, bootcamps, MOOCs, and others to feed web-based learner profiles, which will then be queried by people and/or organizations who are looking for freelancers and/or employees to fill their project and/or job-related needs.

As of the end of 2016, Microsoft — with their purchase of LinkedIn — is strongly positioned as being a major player in this new landscape. But it might turn out to be an open-sourced solution/database.

Data mining, algorithm development, and Artificial Intelligence (AI) will likely have roles to play here as well. The systems will likely be able to tell us where we need to grow our skillsets, and provide us with modules/courses to take. This is where the Learning from the Living [Class] Room vision becomes highly relevant, on a global scale. We will be forced to continually improve our skillsets as long as we are in the workforce. Lifelong learning is now a must. AI-based recommendation engines should be helpful here — as they will be able to analyze the needs, trends, developments, etc. and present us with some possible choices (based on our learner profiles, interests, and passions).
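As a toy illustration of that matchmaking idea, imagine learner profiles and position requirements stored as weighted skill vectors and compared with cosine similarity (every skill name and number below is invented):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse skill vectors (dicts)."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

learner = {"python": 0.9, "statistics": 0.6, "sql": 0.4}   # a learner profile
openings = {
    "data analyst":  {"sql": 0.9, "statistics": 0.8, "excel": 0.5},
    "web developer": {"javascript": 0.9, "css": 0.7, "python": 0.3},
}

for title, needs in sorted(openings.items(),
                           key=lambda item: cosine(learner, item[1]),
                           reverse=True):
    gaps = sorted(set(needs) - set(learner))   # skills to go learn next
    print(f"{title}: match {cosine(learner, needs):.2f}, gaps: {gaps}")
```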


Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.


But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.


Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors),” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”

[Image: Gartner’s Top 10 Strategic Technology Trends for 2017]

[Image: Google’s A.I. Experiments site, November 2016]

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.


Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.


Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
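Under the hood, “whole sentences at a time” means an encoder network reads the entire source sentence into a vector before a decoder starts emitting target words. The toy numpy sketch below shows only that untrained forward pass, with no attention and random weights, so its output is meaningless; it is just the skeleton of the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, HID = 1000, 32, 64

# Randomly initialized parameters; a real system learns these end to end.
E_src = rng.normal(0, 0.1, (VOCAB, EMB))
E_tgt = rng.normal(0, 0.1, (VOCAB, EMB))
W_enc = rng.normal(0, 0.1, (HID, HID + EMB))
W_dec = rng.normal(0, 0.1, (HID, HID + EMB))
W_out = rng.normal(0, 0.1, (VOCAB, HID))

def rnn_step(W, h, x):
    return np.tanh(W @ np.concatenate([h, x]))

def encode(src_ids):
    """Fold the whole source sentence into one context vector."""
    h = np.zeros(HID)
    for i in src_ids:
        h = rnn_step(W_enc, h, E_src[i])
    return h

def decode(h, bos=1, eos=2, max_len=10):
    """Greedily emit target tokens, conditioned on full-sentence context."""
    out, prev = [], bos
    for _ in range(max_len):
        h = rnn_step(W_dec, h, E_tgt[prev])
        prev = int(np.argmax(W_out @ h))
        if prev == eos:
            break
        out.append(prev)
    return out

print(decode(encode([5, 42, 7])))   # arbitrary ids in, arbitrary ids out
```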


‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”


What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.


Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach their goal of passing the test by March 2022, so the project is being brought to an end.


“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference


Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.


The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that, more on my favorite place below.


How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. The five-year-old online clothing retailer’s success in this area reveals how AI and people can work together, with each side focused on its unique strengths.


[Image: higher education and AI, washingtonpost.com, October 2016]

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.


As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.


Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.


From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

We in higher education need to rethink our offerings…or someone else will.


The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Some reflections/resources on today’s announcements from Apple

[Images: Apple’s new TV app, announced 10/27/16]

From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROI’s from having thousands of participants paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

[Image: Rocket League being played in Bigscreen VR]

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Other items on today’s announcements:


[Image: the new MacBook Pro, 10/27/16]

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar


Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino


How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil


MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell


Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader


Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett


Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett


Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com


From DSC:
The other day I had posted some ideas in regards to how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”) Here is one of the graphics from that posting:

[Image: Horticultural App (ML) concept graphic, by Daniel Christian]

These affordances are just now starting to be uncovered as machines are increasingly able to ascertain patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS who gave me permission to highlight his solid reflections and ideas:

[Image: Nikos Andriotis’ comment, October 2016]

https://www.talentlms.com/blog/author/nikos-andriotis


From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about his ideas in light of their interaction or impact with web-based learner profiles, credentialing, badging, and lifelong learning.  What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions who are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

[Image: “Future Proof Yourself,” Microsoft & The Future Laboratory, 2016]

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and who are not innovating enough?


From DSC:
Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR).

The educational benefits — as well as the business/profit-related benefits — will certainly be significant!

For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)



Some use cases for such an app:


Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!

They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
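The recognition step is a standard image-classification problem. As a hedged sketch, the snippet below runs a network pre-trained on ImageNet via torchvision; a real version of this app would instead be fine-tuned on a labeled plant dataset, and “leaf.jpg” is just a placeholder file name:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained ImageNet classifier as a stand-in for a plant-specific model.
model = models.resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("leaf.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    probs = torch.softmax(model(image)[0], dim=0)

top = torch.topk(probs, k=3)
for p, idx in zip(top.values, top.indices):
    # idx maps into the 1,000 ImageNet class labels (several are plants)
    print(f"class {int(idx)}: probability {float(p):.2f}")
```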

[Image: girl using a smartphone, via shutterstock.com]

[Image: Horticultural App (ML) concept graphic, by Daniel Christian]


In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be a user-controlled opacity setting, allowing the learner to fade the information about the flower, tree, plant, etc. in or out.

[Image: mock-up of the app’s augmented reality overlay, by Daniel Christian]


Or let’s look at the potential uses of this type of app from some different angles.

Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of the season, as the app would be able to ascertain the current date and the current GPS coordinates of the person’s location as well, taking those criteria into account).

[Image: Eastern Poison Ivy]


Or consider another use of such an app:

  • A homeowner wants to get rid of a certain kind of weed. She goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
  • Assuming you allowed the app to do so, it could launch a relevant chatbot to answer any questions you might have about applying the weed-killing product.


Or consider another use of such an app:

  • A homeowner has a diseased tree, and they want to know what to do about it. The machine learning portion of the app could identify what the disease was and bring up information on how to eradicate it.
  • Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.


Or consider other/similar apps along these lines:

  • Skin ML (for detecting any issues re: acne, skin cancers, etc.)
  • Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
  • Fish ML
  • Etc.

[Image: fish, via gettyimages.com]


So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.



*  From Wikipedia:

Horticulture involves nine areas of study, which can be grouped into two broad sections: ornamentals and edibles:

  1. Arboriculture is the study of, and the selection, planting, care, and removal of, individual trees, shrubs, vines, and other perennial woody plants.
  2. Turf management includes all aspects of the production and maintenance of turf grass for sports, leisure use or amenity use.
  3. Floriculture includes the production and marketing of floral crops.
  4. Landscape horticulture includes the production, marketing and maintenance of landscape plants.
  5. Olericulture includes the production and marketing of vegetables.
  6. Pomology includes the production and marketing of pome fruits.
  7. Viticulture includes the production and marketing of grapes.
  8. Oenology includes all aspects of wine and winemaking.
  9. Postharvest physiology involves maintaining the quality of and preventing the spoilage of plants and animals.


[Images: charts from Accenture’s “Why Artificial Intelligence is the Future of Growth,” September 2016]

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.


Also see:


[Image: Next Generation Learning Spaces 2017 conference]

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.


IBM Foundation collaborates with AFT and education leaders to use Watson to help teachers — from finance.yahoo.com

Excerpt:

ARMONK, N.Y., Sept. 28, 2016 /PRNewswire/ — Teachers will have access to a new, first-of-its-kind, free tool using IBM’s innovative Watson cognitive technology that has been trained by teachers and designed to strengthen teachers’ instruction and improve student achievement, the IBM Foundation and the American Federation of Teachers announced today.

Hundreds of elementary school teachers across the United States are piloting Teacher Advisor with Watson – an innovative tool by the IBM Foundation that provides teachers with a complete, personalized online resource. Teacher Advisor enables teachers to deepen their knowledge of key math concepts, access high-quality vetted math lessons and acclaimed teaching strategies and gives teachers the unique ability to tailor those lessons to meet their individual classroom needs.

Litow said there are plans to make Teacher Advisor available to all elementary school teachers across the U.S. before the end of the year.


In this first phase, Teacher Advisor offers hundreds of high-quality vetted lesson plans, instructional resources, and teaching techniques, which are customized to meet the needs of individual teachers and the particular needs of their students.


Also see:

[Image: Teacher Advisor with Watson, announced 9/28/16]

 
Educators can also access high-quality videos on teaching techniques to master key skills and bring a lesson or teaching strategy to life into their classroom.


From DSC:
Today’s announcement involved personalization and giving customized directions, and it caused my mind to go in a slightly different direction. (IBM, Google, Microsoft, Apple, Amazon, and others like Smart Sparrow are likely also thinking about this type of direction. Perhaps they’re already there…I’m not sure.)

But given the advancements in machine learning/cognitive computing (where example applications include optical character recognition (OCR) and computer vision), how much longer will it be before software is able to remotely or locally “see” what a third grader wrote down for a given math problem (via character and symbol recognition) and check over the student’s work? If the answer is incorrect, the algorithms will likely know where the student went wrong: the software will be able to ascertain what the student did wrong and then show them how the problem should be solved (either via hints or by showing the entire solution to the student, per the teacher’s instructions/admin settings). Perhaps, via natural language processing, this feedback could be verbalized as well.
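To make the “check the student’s work” step concrete, suppose the handwriting recognition has already turned the written steps into strings. A toy checker can then evaluate both sides of each step and flag the first one that breaks; the recognition itself, which is the hard part, is not shown:

```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def evaluate(node):
    """Safely evaluate a small arithmetic expression tree (+ - * / only)."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

def first_error(steps):
    """Return the first step whose left and right sides disagree, if any."""
    for step in steps:
        left, right = step.split("=")
        if evaluate(ast.parse(left, mode="eval")) != \
           evaluate(ast.parse(right, mode="eval")):
            return step
    return None

# Steps as they might come back from the recognizer (hypothetical input):
work = ["(12 + 8) / 4 = 20 / 4", "20 / 4 = 6"]
print(first_error(work))   # -> '20 / 4 = 6', so feedback targets the division
```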

Further questions/thoughts/reflections then came to my mind:

  • Will we have bots that teachers can use to teach different subjects? (“Watson may even ask the teacher additional questions to refine its response, honing in on what the teacher needs to address certain challenges.”)
  • Will we have bots that students can use to get the basics of a given subject/topic/equation?
  • Will instructional designers — and/or trainers in the corporate world — need to modify their skillsets to develop these types of bots?
  • Will teachers — as well as schools of education in universities and colleges — need to modify their toolboxes and their knowledgebases to take advantage of these sorts of developments?
  • How might the corporate world take advantage of these trends and technologies?
  • Will MOOCs begin to incorporate these sorts of technologies to aid in personalized learning?
  • What sorts of delivery mechanisms could be involved? Will we be tapping into learning-related bots from our living rooms or via our smartphones?

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV



If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

[Image: “exponential pace of change” PDF by Daniel Christian, September 2016]


From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 
PDF file here


One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.


The first truly awesome chatbot is a talking T. Rex — from fastcodesign.com by John Brownlee
National Geographic uses a virtual Tyrannosaur to teach kids about dinosaurs—and succeeds where other chatbots fail.


Excerpt:

As some have declared chatbots to be the “next webpage,” brands have scrambled to develop their own talkative bots, letting you do everything from order a pizza to rewrite your resume. The truth is, though, that a lot of these chatbots are actually quite stupid, and tend to have a hard time understanding natural human language. Sooner or later, users get frustrated bashing their heads up against the wall of a dim-witted bot’s AI.

So how do you design around a chatbot’s walnut-sized brain? If you’re National Geographic Kids UK, you set your chatbot to the task of pretending to be a Tyrannosaurus rex, a Cretaceous-era apex predator that really had a walnut-sized brain (at least comparatively speaking).


She’s called Tina the T. rex, and by making it fun to learn about dinosaurs, she suggests that education — rather than advertising or shopping — might be the real calling of chatbots.
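In code terms, designing around that walnut-sized brain can be as simple as pattern-matching a narrow set of topics the bot knows it can answer well, with an in-character fallback for everything else. A purely hypothetical sketch (not National Geographic’s implementation):

```python
import re

INTENTS = [
    (re.compile(r"\b(eat|food|diet|hunt)\b", re.I),
     "I mostly ate other dinosaurs. Even a walnut-sized brain knows lunch!"),
    (re.compile(r"\b(big|size|tall|long|weigh)\b", re.I),
     "I was about 12 metres long and weighed several tonnes."),
    (re.compile(r"\b(live|when|period)\b", re.I),
     "I lived in the Late Cretaceous, roughly 68 to 66 million years ago."),
]
FALLBACK = "RAAAWR! I only know about T. rex things. Ask me what I ate!"

def reply(message: str) -> str:
    for pattern, answer in INTENTS:
        if pattern.search(message):
            return answer
    return FALLBACK   # stay in character instead of failing awkwardly

print(reply("How big were you?"))
```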

Also relevant/see:

[Image: Honeybot, August 2016]

 
Why every college campus needs a chatbot — from venturebeat.com by John Brandon

Excerpts:

Dropping a child off at college is a stressful experience. I should know — I dropped off one last week and another today. It’s confusing because everything is so new, your child (who is actually a young adult, how did that happen?) is anxious, and you usually have to settle up on your finances.

This situation happens to be ideal for a chatbot, because the administrative staff is way too busy to handle questions in person or by phone. There might be someone directing you in the parking lot, but not everyone standing around in the student center knows how to submit FAFSA data.

One of the main reasons for thinking of this is that I would have used one myself today. It’s a situation where you want immediate, quick answers without having to explain all of the background information. You just need the campus map or the schedule for the day — that’s it. You don’t want any extra frills.


From DSC:
My question is:

Are Instructional Designers, Technical Communicators, e-Learning Designers, Trainers (and people in other positions as well) going to have to know how to build chatbots in the future? Our job descriptions could be changing soon. Or will this kind of thing require more programming-related skills? Perhaps more firms like the one below could impact that situation…


[Image: Chatfuel, a chatbot-building platform, August 2016]


The next battleground: The 4th Era of Personal Computing — from stevebrownfuturist.com by Steve Brown

Excerpt:

I believe we are moving into the fourth era of personal computing. The first era was characterized by the emergence of the PC. The second by the web and the browser, and the third by mobile and apps.


The fourth personal computing platform will be a combination of IOT, wearable and AR-based clients using speech and gesture, connected over 4G/5G networks to PA, CaaS and social networking platforms that draw upon a new class of cloud-based AI to deliver highly personalized access to information and services.


So what does the fourth era of personal computing look like? It’s a world of smart objects, smart spaces, voice control, augmented reality, and artificial intelligence.


IBM made a ‘crash course’ for the White House, and it’ll teach you all the AI basics — from futurism.com by Ramon Perez

Summary:

With the current AI revolution comes a flock of skeptics. Alarmed by what AI could become in the near future, the White House released a Notice of Request for Information (RFI) on it. In response, IBM created what amounts to an AI 101, giving a good sense of the current state, future, and risks of AI.

 
Also see:

[Image: the federal government’s Request for Information on artificial intelligence, June 2016]

© 2024 | Daniel Christian