IBM to Train 25 Million Africans for Free to Build Workforce — by Loni Prinsloo
* Tech giant seeking to bring, keep digital jobs in Africa
* Africa to have world’s largest workforce by 2040, IBM projects

Excerpt:

International Business Machines Corp. is ramping up its digital-skills training program to accommodate as many as 25 million Africans in the next five years, looking toward building a future workforce on the continent. The U.S. tech giant plans to make an initial investment of 945 million rand ($70 million) to roll out the training initiative in South Africa…

 

Also see:

IBM Unveils IT Learning Platform for African Youth — from investopedia.com by Tim Brugger

Excerpt (emphasis DSC):

Responding to concerns that artificial intelligence (A.I.) in the workplace will lead to companies laying off employees and shrinking their work forces, IBM (NYSE: IBM) CEO Ginni Rometty said in an interview with CNBC last month that A.I. wouldn’t replace humans, but rather open the door to “new collar” employment opportunities.

IBM describes new collar jobs as “careers that do not always require a four-year college degree but rather sought-after skills in cybersecurity, data science, artificial intelligence, cloud, and much more.”

In keeping with IBM’s promise to devote time and resources to preparing tomorrow’s new collar workers for those careers, it has announced a new “Digital-Nation Africa” initiative. IBM has committed $70 million to its cloud-based learning platform that will provide free skills development to as many as 25 million young people in Africa over the next five years.

The platform will include online learning opportunities for everything from basic IT skills to advanced training in social engagement, digital privacy, and cyber protection. IBM added that its A.I. computing wonder Watson will be used to analyze data from the online platform, adapt it, and help direct students to appropriate courses, as well as refine the curriculum to better suit specific needs.

 

 

From DSC:
That last part, about Watson being used to personalize learning and direct students to appropriate courses, is one of the elements that I see in the Learning from the Living [Class]Room vision that I’ve been pulse-checking for the last several years. AI/cognitive computing will most assuredly be a part of our learning ecosystems in the future. Amazon is currently building its own platform that adds about 100 skills each day — and has 1,000 people working on creating skills for Alexa. This type of thing isn’t going away any time soon. Rather, I’d say that we haven’t seen anything yet!
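To make the part about directing students to appropriate courses a bit more concrete, here is a toy, content-based sketch in Python. It is not IBM's actual Watson pipeline; the course names, tags, and scoring rule are invented purely for illustration.

```python
# Toy content-based recommender (not IBM Watson's actual method).
# Courses, tags, and the learner's data below are invented for illustration.
COURSES = {
    "Intro to IT": {"it-basics"},
    "Cybersecurity Fundamentals": {"it-basics", "security"},
    "Data Science 101": {"it-basics", "data"},
    "Digital Privacy": {"security", "privacy"},
}

def recommend(completed: set, interests: set, top_n: int = 2) -> list:
    """Rank not-yet-taken courses by how well their tags match the learner."""
    scored = []
    for course, tags in COURSES.items():
        if course in completed:
            continue
        scored.append((len(tags & interests), course))
    scored.sort(reverse=True)
    return [course for score, course in scored[:top_n] if score > 0]

print(recommend(completed={"Intro to IT"}, interests={"security", "privacy"}))
# -> ['Digital Privacy', 'Cybersecurity Fundamentals']
```

A production system would fold in far richer signals (assessment results, time on task, peer comparisons), but the basic shape of profile in, ranked suggestions out, is the same.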

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

And Amazon has doubled down on developing Alexa’s “skills,” which are discrete voice-based applications that allow the system to carry out specific tasks (like ordering a pizza). At launch, Alexa had just 20 skills; that number has reportedly jumped to 5,200 today, with the company adding about 100 skills per day.

In fact, Bezos has said, “We’ve been working behind the scenes for the last four years, we have more than 1,000 people working on Alexa and the Echo ecosystem … It’s just the tip of the iceberg.” Just last week, the company launched a new website to help brands and developers create more skills for Alexa.

Source
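For readers curious what one of these “skills” actually is on the backend, here is a minimal, hypothetical sketch written as an AWS Lambda handler. The "OrderPizzaIntent" name and the replies are invented; the return value follows the Alexa Skills Kit JSON response format.

```python
# Hypothetical backend handler for a single Alexa skill.
def lambda_handler(event, context):
    request = event.get("request", {})
    intent = ""
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name", "")

    if intent == "OrderPizzaIntent":          # invented intent name
        speech = "Okay, ordering your usual pizza."
    else:
        speech = "Sorry, I did not catch that."

    # Alexa Skills Kit response envelope (version 1.0)
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Quick local check with a fake request payload:
print(lambda_handler(
    {"request": {"type": "IntentRequest", "intent": {"name": "OrderPizzaIntent"}}},
    None,
))
```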

 

 

Also see:

 

“We are trying to make education more personalised and cognitive through this partnership by creating a technology-driven personalised learning and tutoring,” Lula Mohanty, Vice President, Services at IBM, told ET. IBM will also use its cognitive technology platform, IBM Watson, as part of the partnership.

“We will use the IBM Watson data cloud as part of the deal, and access Watson education insight services, Watson library, student information insights — these are big data sets that have been created through collaboration and inputs with many universities. On top of this, we apply big data analytics,” Mohanty added.

Source

 

 


 

Also see:

  • Most People in Education are Just Looking for Faster Horses, But the Automobile is Coming — from etale.org by Bernard Bull
    Excerpt:
    Most people in education are looking for faster horses. It is too challenging, troubling, or beyond people’s sense of what is possible to really imagine a completely different way in which education happens in the world. That doesn’t mean, however, that the educational equivalent of the automobile is not on its way. I am confident that it is very much on its way. It might even arrive earlier than even the futurists expect. Consider the following prediction.

 


 

 

 

Your Next Personal Robot Could Be Professor Einstein

 

 

 

From DSC:
By the way, I’m not posting this to suggest that professors/teachers/trainers/etc. are going away due to AI-based technologies.  Humans like to learn with other humans (and we are decades away from a general AI anyway).

That said, I do think there’s a place for technologies to be used as beneficial tools. In this case, such an AI-backed robot could help with some of the heavy lifting of learning about a new subject or topic. This interesting piece — currently out on Kickstarter — is a good example of combining a variety of technologies, such as AI, speech recognition, natural language processing (NLP), and robotics.

Notice that you can download more interactive apps from the cloud with Professor Einstein. In other words, this is like a platform. (Along these lines, developers gave Alexa 4,000 new skills last quarter; Amazon is creating a platform as well.)

Bottom line: AI needs to be on our radars.

 

 

 

Excerpt from Amazon fumbles earnings amidst high expectations (emphasis DSC):

Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season and the company is reporting that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product to gain capabilities faster than its competition. Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter.

 

 

 

 

 

Alexa got 4,000 new skills in just the last quarter!

From DSC:
What are the teaching & learning ramifications of this?

By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I would like to suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies to see whether they can help us do some of the heavy lifting for students learning about new topics.

 

SEEK is using artificial intelligence to find your next job — from afr.com by Max Mason

Excerpt:

Employment services business SEEK has begun using machine learning and artificial intelligence to send its users more relevant job advertisements and alerts.

Across its network, which includes Australia, Brazil, Malaysia, Mexico and China among others, SEEK’s machine learning algorithm has made more than 2.5 billion recommendations to jobseekers.

 

From DSC:
With Microsoft investing heavily in AI and with its purchase of LinkedIn (who had already purchased Lynda.com the year before), I’m wondering what Microsoft will be offering along these lines. With AI, #blockchain and other new forms of credentialing, finding work could be very different in the future.

 

 

From DSC:
The following article reminded me of a vision that I’ve had for the last few years…

  • How to Build a Production Studio for Online Courses — from campustechnology.com by Dian Schaffhauser
    At the College of Business at the University of Illinois, video operations don’t come in one size. Here’s how the institution is handling studio setup for MOOCs, online courses, guest speakers and more.

Though I’m a huge fan of online learning, why build a production studio that’s meant to support online courses only? Let’s take it a step further and design a space that can address content development for online learning as well as for blended learning — which can include the flipped-classroom type of approach.

To do so, colleges and universities need to build something akin to what the National University of Singapore has done. I would like to see institutions create facilities large enough to house multiple types of recording studios. Each facility would feature:

  • One room that has a lightboard and a mobile whiteboard in it — let the faculty member choose which surface they want to use

  • A recording booth with a powerful, large-screen iMac running ScreenFlow. The booth would also include a professional microphone, a pop filter, sound-absorbing acoustic panels, and more. Blackboard Collaborate could be used here as well…especially with the Application Sharing feature turned on and/or just showing one’s PowerPoint slides — with or without video of the faculty member…whichever they prefer.

  • Another recording booth with a PC running Adobe Captivate, Camtasia Studio, Screencast-O-Matic, or similar tools. This booth would be outfitted the same way: a professional microphone, a pop filter, sound-absorbing acoustic panels, and the option to use Blackboard Collaborate as described above.

  • Another recording booth with an iPad loaded with apps such as Explain Everything.

  • A large recording studio similar to what’s described in the article — a room that incorporates a full-width green screen, with video monitors, a tablet, a podium, several cameras, high-end mics, and more. Or, if the budget allows for it, a really high-end broadcasting/recording studio like the one Harvard Business School is using.

A piece of this facility could look and act like the Sound Lab at the Museum of Pop Culture (MoPOP)

Sydney – The Opera House has joined forces with Samsung to open a new digital lounge that encourages engagement with the space. — from lsnglobal.com by Rhiannon McGregor

 

The Lounge, enabled by Samsung on November 8, 2016 in Sydney, Australia. (Photo by Anna Kucera)


Also see:

The Lounge enabled by Samsung
Open day and night, The Lounge enabled by Samsung is a new place in the heart of the Opera House where people can sit and enjoy art and culture through the latest technology. The most recent in a series of future-facing projects enabled by Sydney Opera House’s Principal Partner, Samsung, the new visitor lounge features stylish, comfortable seating, as well as interactive displays and exclusive digital content, including:

  • The Sails – a virtual-reality experience of what it’s like to stand atop the sails of Australia’s most famous building, brought to you via Samsung Gear VR;
  • Digital artwork – a specially commissioned video exploration of the Opera House and its stories, produced by creative director Sam Doust. The artwork has been themed to match the time of day and is the first deployment of Samsung’s latest Smart LED Display panel technology in Australia; and
  • Google Cultural Institute – available to view on Samsung Galaxy View and Galaxy Tab S2 tablets, the digital collection features 50 online exhibits that tell the story of the Opera House’s past, present and future through rare archival photography, celebrated performances, early architectural drawings and other historical documents, little-known interviews and Street View imagery.

 

 

 

Don’t discount the game-changing power of the morphing “TV” when coupled with AI, NLP, and blockchain-based technologies! [Christian]

From DSC:

Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!

When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into and what functionalities they will be able to provide to us in the not-too-distant future…?

For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that will not only be able to access Alexa — a personal assistant from Amazon that uses artificial intelligence — but will also be able to provide access to over 7,000 apps and games via the Amazon Fire TV Store.

Some of the questions that come to my mind:

  • Why can’t there be more educationally-related games and apps available on this type of platform?
  • Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
  • When will potential employers start asking for access to such web-based learner profiles?
  • Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
  • Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
  • Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
  • Will virtual tutoring be one of the available apps/channels?
  • Will the microphone and the wide-angle, HD camera on the “TV” be able to be disconnected from the Internet for security reasons (i.e., to be sure no hacker is eavesdropping on people’s private lives)?

 

Forget a streaming stick: These 4K TVs come with Amazon Fire TV inside — from techradar.com by Nick Pino

Excerpt:

The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 


Addendums


 

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.


  • Once thought to be a fad, MOOCs showed staying power in 2016 — from educationdive.com
    Dive Brief:

    • EdSurge profiles the growth of massive open online courses in 2016, which attracted more than 58 million students across over 700 colleges and universities last year.
    • The top three MOOC providers — Coursera, Udacity and EdX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall-guarded materials.
    • Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in industrial marketplaces.
 

Alexa, Tell Me Where You’re Going Next — from backchannel.com by Steven Levy
Amazon’s VP of Alexa talks about machine learning, chatbots, and whether industry is strip-mining AI talent from academia.

Excerpt:

Today Prasad is giving an Alexa “State of the Union” address at the Amazon Web Services conference in Las Vegas, announcing an improved version of the Alexa Skills Kit, which helps developers create the equivalent of apps for the platform; a beefed-up Alexa Voice Service, which will make it easier to transform third-party devices like refrigerators and cars into Alexa bots; a partnership with Intel; and the Alexa Accelerator that, with the startup incubator Techstars, will run a 13-week program to help newcomers build Alexa skills. Prasad and Amazon haven’t revealed sales numbers, but industry experts have estimated that Amazon has sold over five million Echo devices so far.

Prasad, who joined Amazon in 2013, spent some time with Backchannel before his talk today to illuminate the direction of Alexa and discuss how he’s recruiting for Jeff Bezos’s arsenal without drying up the AI pipeline.

 

 

What DeepMind brings to Alphabet — from economist.com
The AI firm’s main value to Alphabet is as a new kind of algorithm factory

Excerpt:

DeepMind’s horizons stretch far beyond talent capture and public attention, however. Demis Hassabis, its CEO and one of its co-founders, describes the company as a new kind of research organisation, combining the long-term outlook of academia with “the energy and focus of a technology startup”—to say nothing of Alphabet’s cash.

Were he to succeed in creating a general-purpose AI, that would obviously be enormously valuable to Alphabet. It would in effect give the firm a digital employee that could be copied over and over again in service of multiple problems. Yet DeepMind’s research agenda is not—or not yet—the same thing as a business model. And its time frames are extremely long.

 

 

Artificial Intelligence: Silicon Valley’s Next Frontier — from toptechnews.com by Ethan Baron

Excerpt:

Silicon Valley needs its next big thing, a focus for the concentrated brain power and innovation infrastructure that have made this region the world leader in transformative technology. Just as the valley’s mobile era is peaking, the next frontier of growth and innovation has arrived: It’s Siri in an Apple iPhone, Alexa in an Amazon Echo, the software brain in Google’s self-driving cars, Amazon’s product recommendations and, someday, maybe the robot surgeon that saves your life.

It’s artificial intelligence, software that can “learn” and “think,” the latest revolution in tech.

“It’s going to be embedded in everything,” said startup guru Steve Blank, an adjunct professor at Stanford. “We’ve been talking about artificial intelligence for 30 years, maybe longer, in Silicon Valley. It’s only in the last five years, or maybe even the last two years, that this stuff has become useful.”

 

 

 

What Is The Difference Between Artificial Intelligence And Machine Learning? — from forbes.com by Bernard Marr

Excerpt:

Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.

In short, the best answer is that:
Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”.
And,
Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.
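A small, invented example makes the distinction concrete: the first function below is a rule a human wrote; the second hands the same examples to scikit-learn and lets the machine infer the rule from data.

```python
# Hand-written rule versus a rule learned from (made-up) data.
from sklearn.tree import DecisionTreeClassifier

def rule_based_spam_check(num_links: int) -> bool:
    return num_links > 3            # a human chose this threshold

# "Give machines access to data and let them learn for themselves":
X = [[0], [1], [2], [4], [5], [7]]  # number of links in each message
y = [0, 0, 0, 1, 1, 1]              # 0 = not spam, 1 = spam (human labels)
model = DecisionTreeClassifier().fit(X, y)

print(rule_based_spam_check(5))     # True, because we hard-coded the rule
print(model.predict([[5]])[0])      # 1, because the model inferred a rule
```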

 

 

Why we are still light years away from full artificial intelligence — from techcrunch.com by Clara Lu

Excerpt:

Yet, the truth is, we are far from achieving true AI — something that is as reactive, dynamic, self-improving and powerful as human intelligence.

Full AI, or superintelligence, should possess the full range of human cognitive abilities. This includes self-awareness, sentience and consciousness, as these are all features of human cognition.

 

 

Udacity adds 14 hiring partners as AI, VR and self-driving talent wars heat up — from techcrunch.com by Darrell Etherington

Excerpt:

Udacity is positioned perfectly to benefit from the rush on talent in a number of growing areas of interest among tech companies and startups. The online education platform has added 14 new hiring partners across its Artificial Intelligence Engineer, Self-Driving Car Engineer and Virtual Reality Developer Nanodegree programs, as well as in its Predictive Analytics Nanodegree, including standouts like Bosch, Harma, Slack, Intel, Amazon Alexa and Samsung.

That brings the total number of hiring partners for Udacity to over 30, which means a lot of potential soft landings for graduates of its nanodegree programs. The nanodegree offered by Udacity is its own original form of accreditation, which is based on a truncated field of study that spans months, rather than years, and allows students to direct the pace of their own learning. It also all takes place online, so students can potentially learn from anywhere.

 

 

 

 

The Ethics of Artificial Intelligence – from livestream.com

 

 

 

 

The Great A.I. Awakening — from nytimes.com by Gideon Lewis-Kraus
How Google used artificial intelligence to transform Google Translate, one of its more popular services — and how machine learning is poised to reinvent computing itself.

Excerpt:

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

 

 

 

Microsoft bets on AI — from businessinsider.com

Excerpt:

On [December 12th, 2016], Microsoft announced a new Microsoft Ventures fund dedicated to artificial intelligence (AI) investments, according to TechCrunch. The fund, part of the company’s investment arm that launched in May, will back startups developing AI technology and includes Element AI, a Montreal-based incubator that helps other companies embrace AI. The fund further supports Microsoft’s focus on AI. The company has been steadily announcing major initiatives in support of the technology. For example, in September, it announced a major restructuring and formed a new group dedicated to AI products. And in mid-November, it partnered with OpenAI, an AI research nonprofit backed by Elon Musk, to further its AI research and development efforts.

 

 

The Growth of Artificial Intelligence in E-commerce — from redstagfulfillment.com by Jake Rheude

Excerpt:

Whether Artificial Intelligence (AI) is something you’ve just come across or it’s something you’ve been monitoring for a while, there’s no denying that it’s starting to influence many industries. And one place that it’s really starting to change things is e-commerce. Below you’ll find some interesting stats and facts about how AI is growing in e-commerce and how it’s changing the way we do things. From personalizing the shopping experience for customers to creating personal buying assistants, AI is something retailers can’t ignore. We’ll also take a look at some examples of how leading online stores have used AI to enrich the customer buying experience.

 

 

Will AI built by a ‘sea of dudes’ understand women? AI’s inclusivity problem — from digitaltrends.com by Dyllan Furness

Excerpt:

Only 26 percent of computer professionals were women in 2013, according to a recent review by the American Association of University Women. That figure has dropped 9 percent since 1990.

Explanations abound. Some say the industry is masculine by design. Others claim computer culture is unwelcoming — even hostile — to women. So, while STEM fields like biology, chemistry, and engineering see an increase in diversity, computing does not. Regardless, it’s a serious problem.

Artificial intelligence is still in its infancy, but it’s poised to become the most disruptive technology since the Internet. AI will be everywhere — in your phone, in your fridge, in your Ford. Intelligent algorithms already track your online activity, find your face in Facebook photos, and help you with your finances. Within the next few decades they’ll completely control your car and monitor your heart health. An AI may one day even be your favorite artist.

The programs written today will inform the systems built tomorrow. And if designers all have one worldview, we can expect equally narrow-minded machines.

 

 

 

 

From DSC:
After seeing the sharp interface out at Adobe (see image below), I’ve often thought that there should exist a similar interface and a similar database for educators, trainers, and learners to use — but the database would address a far greater breadth of topics to teach and/or learn about.  You could even select beginner, intermediate, or advanced levels (grade levels might work here as well).

Perhaps this is where artificial intelligence will come in…not sure.

 

 

 

 

From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TV’s” of the future? Hmm….

 

 

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:invent conference in Las Vegas tonight to make the announcement.

 

 


 

 


Also see:


 

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

 


 

 

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services

At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
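As a rough illustration of that "simple API call" point, here is a minimal sketch that asks Amazon Polly, via the boto3 SDK, to turn a line of text into speech. It assumes AWS credentials are already configured; the text and voice are arbitrary examples.

```python
# One API call to Amazon Polly via boto3 (AWS credentials assumed).
import boto3

polly = boto3.client("polly")
result = polly.synthesize_speech(
    Text="Hello from your learning platform.",   # arbitrary example text
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("hello.mp3", "wb") as f:
    f.write(result["AudioStream"].read())        # save the synthesized audio
```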

 

 

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

 

 

AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today

Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages

Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition

Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai
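In the same spirit, here is a minimal sketch of calling Amazon Rekognition to label an image with boto3; the local file name and thresholds are assumptions for illustration.

```python
# Label a local image with Amazon Rekognition via boto3
# (AWS credentials and a file named photo.jpg are assumed).
import boto3

rekognition = boto3.client("rekognition")
with open("photo.jpg", "rb") as f:
    response = rekognition.detect_labels(
        Image={"Bytes": f.read()}, MaxLabels=5, MinConfidence=80
    )
for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```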

 

 

 

 

 

Google Earth lets you explore the planet in virtual reality — from vrscout.com by Eric Chevalier

 

 

 

How virtual reality could change the way students experience education — from edtechmagazine.com by  by Andrew Koke and Anthony Guest-Scott
High-impact learning experiences may become the norm, expanding access for all students.

Excerpt:

The headlines for Pokémon GO were initially shocking, but by now they’re familiar: as many as 21 million active daily users, 700,000 downloads per day, $5.7 million in-app purchases per day, $200 million earned as of August. Analysts anticipate the game will garner several billion dollars in ad revenue over the next year. By almost any measure, Pokémon GO is huge.

The technologies behind the game, augmented and virtual reality (AVR), are huge too. Many financial analysts expect the technology to generate $150 billion over the next three years, outpacing even smartphones with unprecedented growth, much of it in entertainment. But AVR is not only about entertainment. In August 2015, Teegan Lexcen was born in Florida with only half a heart and needed surgery. With current cardiac imaging software insufficient to assist with such a delicate operation on an infant, surgeons at Nicklaus Children’s Hospital in Miami turned to 3D imaging software and a $20 Google Cardboard VR set. They used a cellphone to peer into the baby’s heart, saw exactly how to improve her situation and performed the successful surgery in December 2015.

“I could see the whole heart. I could see the chest wall,” Dr. Redmond Burke told Today. “I could see all the things I was worried about in creating an operation.”

 

 

 

Visionary: How 4 institutions are venturing into a new mixed reality — from ecampusnews.com by Laura Devaney
Mixed reality combines virtual and augmented realities for enhanced learning experiences–and institutions are already implementing it.

Excerpt:

Texas Tech University Health Sciences Center in Lubbock and San Diego State University are both part of a Pearson mixed reality pilot aimed at leveraging mixed reality to solve challenges in nursing education.

At Bryn Mawr College, a women’s liberal arts college in Pennsylvania, faculty, students, and staff are exploring various educational applications for the HoloLens mixed reality devices. They are testing Skype for HoloLens to connect students with tutors in Pearson’s 24/7 online tutoring service, Smarthinking.

At Canberra Grammar School in Australia, Pearson is working with teachers in a variety of disciplines to develop holograms for use in their classrooms. The University of Canberra is partnering with Pearson to provide support for the project and evaluate the impact these holograms have on teaching and learning.

 

 

 

ZapBox brings room-scale mixed reality to the masses — from slashgear.com by JC Torres

Excerpt:

As fantastic as technologies like augmented and mixed reality may be, experiencing them, much less creating them, requires a sizable investment, financially speaking. It is just beyond the reach of consumers as well as your garage-type indie developer. AR and VR startup Zappar, however, wants to smash that perception. With ZapBox, you can grab a kit for less than a triple-A video game to start your journey towards mixed reality fun and fame. It’s Magic Leap meets Google Cardboard. Or as Zappar itself says, making Magic Leap, magic cheap!

 

 

 

 

Shakespeare’s Tempest gets mixed reality makeover — from bbc.com by Jane Wakefield

 

Intel’s flying whale was the inspiration for the technology in The Tempest

 

 

 


 

 

 

Excerpts from the 9/23/16 School Library Journal Webcast:

(Image: slides on VR in education, THE Journal, September 2016)

 

 

 

 

 

(Image: The eLearning Guild’s fall 2016 publication on augmented and virtual reality)

 

Table of Contents

  • Introduction
  • New Technologies: Do They Really Change Learning Strategies? — by Joe Ganci and Sherry Larson
  • Enhanced Realities: An Opportunity to Avoid the Mistakes of the Past — by David Kelly
  • Let the Use Case Drive What Gets Augmented—Not the Other Way Around — by Chad Udell
  • Augmented Reality: An Augmented Perspective — by Alexander Salas
  • Virtual Reality Will Be the Perfect Immersive Learning Environment — by Koreen Pagano
  • Will VR Succeed? Viewpoint from Within a Large Corporation — by John O’Hare
  • Will VR Succeed? Viewpoint from Running a VR Start-up — by Ishai Albert Jacob

 

 

 

From DSC:
I think Technical Communicators have a new pathway to pursue…check out this piece from Scope AR and Caterpillar.

 


 

 

 

Some brief reflections from DSC:

Credentials and skills data will likely be used by colleges, universities, bootcamps, MOOCs, and others to feed web-based learner profiles, which will then be queried by people and/or organizations looking for freelancers and/or employees to fill their project- and/or job-related needs.

As of the end of 2016, Microsoft — with its purchase of LinkedIn (which had already purchased Lynda.com the year before) — is strongly positioned to be a major player in this new landscape. But the winning approach might turn out to be an open-source solution/database.

Data mining, algorithm development, and Artificial Intelligence (AI) will likely have roles to play here as well. The systems will likely be able to tell us where we need to grow our skillsets, and provide us with modules/courses to take. This is where the Learning from the Living [Class] Room vision becomes highly relevant, on a global scale. We will be forced to continually improve our skillsets as long as we are in the workforce. Lifelong learning is now a must. AI-based recommendation engines should be helpful here — as they will be able to analyze the needs, trends, developments, etc. and present us with some possible choices (based on our learner profiles, interests, and passions).
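As a thought experiment only (no particular vendor or blockchain is implied), here is a minimal sketch of how a web-based learner profile could accumulate tamper-evident credential records: each entry stores a hash of the previous one, so quietly editing an old record breaks the chain. All field names and values are invented.

```python
# Hypothetical tamper-evident learner profile as a hash-chained list of records.
import hashlib
import json
from datetime import datetime, timezone

def add_credential(profile: list, issuer: str, skill: str, score: float) -> list:
    prev_hash = profile[-1]["hash"] if profile else "0" * 64
    record = {
        "issuer": issuer,
        "skill": skill,
        "score": score,
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    profile.append(record)
    return profile

def verify(profile: list) -> bool:
    """Recompute every hash; any edited record invalidates the chain."""
    prev = "0" * 64
    for record in profile:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

profile = add_credential([], "Example MOOC", "Data analysis basics", 0.92)
print(verify(profile))  # True until any record is altered after the fact
```

A real credentialing network would add identity, digital signatures, and shared storage on top of this, but the core promise (records an employer can verify without trusting any single party's edits) is the piece worth noting here.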

 

 

Google, Facebook, and Microsoft are remaking themselves around AI — from wired.com by Cade Metz

Excerpt (emphasis DSC):

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

 

But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

 

 

Intelligence everywhere! Gartner’s Top 10 Strategic Technology Trends for 2017 — from which-50.com

Excerpt (emphasis DSC):

AI and Advanced Machine Learning
Artificial intelligence (AI) and advanced machine learning (ML) are composed of many technologies and techniques (e.g., deep learning, neural networks, natural-language processing [NLP]). The more advanced techniques move beyond traditional rule-based algorithms to create systems that understand, learn, predict, adapt and potentially operate autonomously. This is what makes smart machines appear “intelligent.”

“Applied AI and advanced machine learning give rise to a spectrum of intelligent implementations, including physical devices (robots, autonomous vehicles, consumer electronics) as well as apps and services (virtual personal assistants [VPAs], smart advisors), ” said David Cearley, vice president and Gartner Fellow. “These implementations will be delivered as a new class of obviously intelligent apps and things as well as provide embedded intelligence for a wide range of mesh devices and existing software and service solutions.”

 


 

 

 

 


 

Google’s new website lets you play with its experimental AI projects — from mashable.com by Karissa Bell

Excerpt:

Google is letting users peek into some of its most experimental artificial intelligence projects.

The company unveiled a new website Tuesday called A.I. Experiments that showcases Google’s artificial intelligence research through web apps that anyone can test out. The projects include a game that guesses what you’re drawing, a camera app that recognizes objects you put in front of it and a music app that plays “duets” with you.

 

Google unveils a slew of new and improved machine learning APIs — from digitaltrends.com by Kyle Wiggers

Excerpt:

On Tuesday, Google Cloud chief Diane Greene announced the formation of a new team, the Google Cloud Machine Learning group, that will manage the Mountain View, California-based company’s cloud intelligence efforts going forward.

 

Found in translation: More accurate, fluent sentences in Google Translate — from blog.google by Barak Turovsky

Excerpt:

In 10 years, Google Translate has gone from supporting just a few languages to 103, connecting strangers, reaching across language barriers and even helping people find love. At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation.

Neural Machine Translation has been generating exciting research results for a few years and in September, our researchers announced Google’s version of this technique. At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. And this is all possible because of an end-to-end learning system built on Neural Machine Translation, which basically means that the system learns over time to create better, more natural translations.
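For a feel of what "translating whole sentences at a time" means in code, here is a toy, untrained sketch; it is not Google's production system, it omits attention, and the vocabulary sizes and token ids are invented. The point is only the shape of the computation: a recurrent encoder summarizes the entire source sentence before a decoder emits the translation one token at a time.

```python
# Toy whole-sentence encoder-decoder (untrained, illustrative only).
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, HIDDEN = 1000, 1000, 128
SOS, EOS = 1, 2  # assumed special token ids

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, HIDDEN)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def translate(self, src_ids, max_len=20):
        # Encode the *entire* source sentence; the final hidden state
        # summarizes the whole sentence rather than isolated phrases.
        _, context = self.encoder(self.src_emb(src_ids))
        token, hidden, result = torch.tensor([[SOS]]), context, []
        for _ in range(max_len):  # greedy decoding, one target token at a time
            step, hidden = self.decoder(self.tgt_emb(token), hidden)
            token = self.out(step[:, -1]).argmax(dim=-1, keepdim=True)
            if token.item() == EOS:
                break
            result.append(token.item())
        return result

model = TinySeq2Seq()
print(model.translate(torch.tensor([[5, 42, 7, 99]])))  # random ids; untrained
```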

 

 

‘Augmented Intelligence’ for Higher Ed — from insidehighered.com by Carl Straumsheim
IBM picks Blackboard and Pearson to bring the technology behind the Watson computer to colleges and universities.

Excerpts:

[IBM] is partnering with a small number of hardware and software providers to bring the same technology that won a special edition of the game show back in 2011 to K-12 institutions, colleges and continuing education providers. The partnerships and the products that might emerge from them are still in the planning stage, but the company is investing in the idea that cognitive computing — natural language processing, informational retrieval and other functions similar to the ones performed by the human brain — can help students succeed in and outside the classroom.

Chalapathy Neti, vice president of education innovation at IBM Watson, said education is undergoing the same “digital transformation” seen in the finance and health care sectors, in which more and more content is being delivered digitally.

IBM is steering clear of referring to its technology as “artificial intelligence,” however, as some may interpret it as replacing what humans already do.

“This is about augmenting human intelligence,” Neti said. “We never want to see these data-based systems as primary decision makers, but we want to provide them as decision assistance for a human decision maker that is an expert in conducting that process.”

 

 

What a Visit to an AI-Enabled Hospital Might Look Like — from hbr.org by R “Ray” Wang

Excerpt (emphasis DSC):

The combination of machine learning, deep learning, natural language processing, and cognitive computing will soon change the ways that we interact with our environments. AI-driven smart services will sense what we’re doing, know what our preferences are from our past behavior, and subtly guide us through our daily lives in ways that will feel truly seamless.

Perhaps the best way to explore how such systems might work is by looking at an example: a visit to a hospital.

The AI loop includes seven steps:

  1. Perception describes what’s happening now.
  2. Notification tells you what you asked to know.
  3. Suggestion recommends action.
  4. Automation repeats what you always want.
  5. Prediction informs you of what to expect.
  6. Prevention helps you avoid bad outcomes.
  7. Situational awareness tells you what you need to know right now.
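One way to picture that seven-step loop is as a small pipeline of functions over incoming readings. The sketch below is purely hypothetical; the step names mirror the article, while the hospital-style data and thresholds are invented.

```python
# Hypothetical sketch of the seven-step AI loop described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Patient:
    heart_rate: int
    history: List[int] = field(default_factory=list)

def perceive(p):                  # 1. Perception: what is happening now
    p.history.append(p.heart_rate)
    return {"heart_rate": p.heart_rate}

def notify(state):                # 2. Notification: what you asked to know
    return f"Heart rate is {state['heart_rate']} bpm"

def suggest(state):               # 3. Suggestion: recommend an action
    return "Schedule a check-up" if state["heart_rate"] > 100 else "No action"

def automate(state):              # 4. Automation: repeat what you always want
    return "Logged reading to patient chart"

def predict(p):                   # 5. Prediction: what to expect next
    return sum(p.history) / len(p.history)  # naive forecast: running average

def prevent(forecast):            # 6. Prevention: avoid bad outcomes
    return "Alert care team" if forecast > 100 else "No intervention"

def situational_awareness(msgs):  # 7. What you need to know right now
    return " | ".join(msgs)

patient = Patient(heart_rate=112)
state = perceive(patient)
forecast = predict(patient)
print(situational_awareness([notify(state), suggest(state),
                             automate(state), prevent(forecast)]))
```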

 

 

Japanese artificial intelligence gives up on University of Tokyo admissions exam — from digitaltrends.com by Brad Jones

Excerpt:

Since 2011, Japan’s National Institute of Informatics has been working on an AI, with the end goal of having it pass the entrance exam for the University of Tokyo, according to a report from Engadget. This endeavor, dubbed the Todai Robot Project in reference to a local nickname for the school, has been abandoned.

It turns out that the AI simply cannot meet the exact requirements of the University of Tokyo. The team does not expect to reach their goal of passing the test by March 2022, so the project is being brought to an end.

 

 

“We are building not just Azure to have rich compute capability, but we are, in fact, building the world’s first AI supercomputer,” he said.

— from Microsoft CEO Satya Nadella spruiks power of machine learning,
smart bots and mixed reality at Sydney developers conference

 

Why it’s so hard to create unbiased artificial intelligence — from techcrunch.com by Ben Dickson

Excerpt:

As artificial intelligence and machine learning mature and manifest their potential to take on complicated tasks, we’ve become somewhat expectant that robots can succeed where humans have failed — namely, in putting aside personal biases when making decisions. But as recent cases have shown, like all disruptive technologies, machine learning introduces its own set of unexpected challenges and sometimes yields results that are wrong, unsavory, offensive and not aligned with the moral and ethical standards of human society.

While some of these stories might sound amusing, they do lead us to ponder the implications of a future where robots and artificial intelligence take on more critical responsibilities and will have to be held responsible for the possibly wrong decisions they make.

 

 

 

The Non-Technical Guide to Machine Learning & Artificial Intelligence — from medium.com by Sam DeBrule

Excerpt:

This list is a primer for non-technical people who want to understand what machine learning makes possible.

To develop a deep understanding of the space, reading won’t be enough. You need to: have an understanding of the entire landscape, spot and use ML-enabled products in your daily life (Spotify recommendations), discuss artificial intelligence more regularly, and make friends with people who know more than you do about AI and ML.

News: For starters, I’ve included a link to a weekly artificial intelligence email that Avi Eisenberger and I curate (machinelearnings.co). Start here if you want to develop a better understanding of the space, but don’t have the time to actively hunt for machine learning and artificial intelligence news.

Startups: It’s nice to see what startups are doing, and not only hear about the money they are raising. I’ve included links to the websites and apps of 307+ machine intelligence companies and tools.

People: Here’s a good place to jump into the conversation. I’ve provided links to Twitter accounts (and LinkedIn profiles and personal websites in their absence) of the founders, investors, writers, operators and researchers who work in and around the machine learning space.

Events: If you enjoy getting out from behind your computer, and want to meet awesome people who are interested in artificial intelligence in real life, there is one place that’s best to do that, more on my favorite place below.

 

 

 

How one clothing company blends AI and human expertise — from hbr.org by H. James Wilson, Paul Daugherty, & Prashant Shukla

Excerpt:

When we think about artificial intelligence, we often imagine robots performing tasks on the warehouse or factory floor that were once exclusively the work of people. This conjures up the specter of lost jobs and upheaval for many workers. Yet, it can also seem a bit remote — something that will happen in “the future.” But the future is a lot closer than many realize. It also looks more promising than many have predicted.

Stitch Fix provides a glimpse of how some businesses are already making use of AI-based machine learning to partner with employees for more-effective solutions. A five-year-old online clothing retailer, its success in this area reveals how AI and people can work together, with each side focused on its unique strengths.

 

 

 

 

(Image link: a washingtonpost.com piece on higher education and artificial intelligence, October 2016)

 

Excerpt (emphasis DSC):

As the White House report rightly observes, the implications of an AI-suffused world are enormous — especially for the people who work at jobs that soon will be outsourced to artificially-intelligent machines. Although the report predicts that AI ultimately will expand the U.S. economy, it also notes that “Because AI has the potential to eliminate or drive down wages of some jobs … AI-driven automation will increase the wage gap between less-educated and more-educated workers, potentially increasing economic inequality.”

Accordingly, the ability of people to access higher education continuously throughout their working lives will become increasingly important as the AI revolution takes hold. To be sure, college has always helped safeguard people from economic dislocations caused by technological change. But this time is different. First, the quality of AI is improving rapidly. On a widely-used image recognition test, for instance, the best AI result went from a 26 percent error rate in 2011 to a 3.5 percent error rate in 2015 — even better than the 5 percent human error rate.

Moreover, as the administration’s report documents, AI has already found new applications in so-called “knowledge economy” fields, such as medical diagnosis, education and scientific research. Consequently, as artificially intelligent systems come to be used in more white-collar, professional domains, even people who are highly educated by today’s standards may find their livelihoods continuously at risk by an ever-expanding cybernetic workforce.

 

As a result, it’s time to stop thinking of higher education as an experience that people take part in once during their young lives — or even several times as they advance up the professional ladder — and begin thinking of it as a platform for lifelong learning.

 

Colleges and universities need to be doing more to move beyond the array of two-year, four-year, and graduate degrees that most offer, and toward a more customizable system that enables learners to access the learning they need when they need it. This will be critical as more people seek to return to higher education repeatedly during their careers, compelled by the imperative to stay ahead of relentless technological change.

 

 

From DSC:
That last bolded paragraph is why I think the vision of easily accessible learning — using the devices that will likely be found in one’s apartment or home — will be enormously powerful and widespread in a few years. Given the exponential pace of change that we are experiencing — and will likely continue to experience for some time — people will need to reinvent themselves quickly.

Higher education needs to rethink its offerings…or someone else will.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

From DSC:
If we had more beacons on our campus (a Christian liberal arts college), I could see how we could offer a variety of things in new ways:

For example, we might place beacons along the main walkways of our campus so that, as we approach them, pieces of advice or teaching could appear in an app on our mobile devices. Examples could include:

  • Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
  • Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
  • Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
  • Pieces of scripture, with links to BibleGateway.com or other sites
  • Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
  • Etc.

A person could turn the app’s notifications on or off at any time.  The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
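On the back end, the beacon idea could be as simple as a lookup service that the campus app queries with the identifier of the nearest beacon. The sketch below is hypothetical; the beacon IDs, locations, and tips are all invented.

```python
# Hypothetical content lookup for campus beacons.
from datetime import date

BEACON_CONTENT = {
    "beacon-chapel-01": {
        "location": "Chapel walkway",
        "tips": ["Micro-tip on prayer (rotates weekly)"],
    },
    "beacon-finearts-01": {
        "location": "Covenant Fine Arts Center",
        "tips": ["This week's performance schedule", "Current Art Gallery show"],
    },
}

def content_for_beacon(beacon_id: str, notifications_on: bool = True) -> list:
    """Return what the app should show when a user nears a given beacon."""
    if not notifications_on:          # users can silence tips at any time
        return []
    entry = BEACON_CONTENT.get(beacon_id)
    if entry is None:
        return []
    week = date.today().isocalendar()[1]
    tip = entry["tips"][week % len(entry["tips"])]   # rotate content by week
    return [f"{entry['location']}: {tip}"]

print(content_for_beacon("beacon-finearts-01"))
```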

 

 

 

Some reflections/resources on today’s announcements from Apple


 


From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROI’s from having thousands of participants paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 


Other items on today’s announcements:


 

 


 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 