From DSC:
After reading the item below, I wondered:

Should technical communicators, trainers, and help desk personnel be trained to design and develop “workbots”?

Forget chatbots — you should create a workbot instead — from venturebeat.com by Oren Ariel; with thanks to Thomas Frey for his tweet on this

Excerpts (emphasis DSC):

But what about employee-to-company interaction through bots? Chatbots designed for the work environment, or workbots, could become the next step function in work productivity.

Workbots could be the cure for what’s often called “app fatigue.”

They work within the corporate messenger environment (such as Jabber, Skype for Business, Slack, and others) and respond to commands and questions in natural language, whether typed or dictated. They have access to all the corporate information needed to get the job done and can perform complex tasks across multiple systems. The workbot knows what tasks are executed in which back-end system, so the user doesn’t have to know. Because bots rely on natural language processing (NLP) — the ability of humans to interact with computers using free-form language — workbots can help an employee get to the starting point quickly and without any training, in the same way a search engine would, and then help guide the user through the task in a step-by-step fashion.

Chat is no longer just about communication; it’s about bringing the user information.
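From DSC: a minimal sketch of the routing idea described above – free-form text is mapped to an intent, and the bot (not the user) knows which back-end system handles each task. This is my own toy illustration, not anything from the VentureBeat piece: the intents, keywords, and system names are invented, and a real workbot would use a proper NLP service rather than simple keyword overlap.

```python
# Hypothetical workbot command router. All intents, keywords, and
# back-end names below are illustrative, not a real product's API.

INTENT_KEYWORDS = {
    "book_time_off": {"vacation", "time", "off", "pto"},
    "expense_report": {"expense", "reimburse", "receipt"},
    "it_ticket": {"laptop", "password", "vpn", "wifi"},
}

# The workbot knows which back-end system executes each task,
# so the user doesn't have to.
INTENT_BACKEND = {
    "book_time_off": "HR system",
    "expense_report": "Finance system",
    "it_ticket": "IT service desk",
}

def route(message: str) -> str:
    """Pick the intent whose keywords best match the message, then name the back end."""
    words = set(message.lower().split())
    intent = max(INTENT_KEYWORDS, key=lambda i: len(words & INTENT_KEYWORDS[i]))
    if not words & INTENT_KEYWORDS[intent]:
        return "Sorry, I didn't understand. Try 'request time off' or 'file an expense'."
    return f"Routing '{intent}' to the {INTENT_BACKEND[intent]}."

print(route("I want to take some vacation time"))
# → Routing 'book_time_off' to the HR system.
```

As the excerpt notes, the point is that the employee reaches the starting point without training – the mapping from free-form language to back-end system lives in the bot.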

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
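From DSC: to make the Blockchain-based learner profile idea above a bit more concrete, here is a toy sketch (my own illustration, not a spec) of hash chaining using Python’s standard library: each completed module record embeds the hash of the previous record, so any tampering with a learner’s history becomes detectable.

```python
# Toy tamper-evident learner profile: a hash chain of completed modules.
# This illustrates the chaining idea only; a real system would add
# signatures, distributed consensus, identity, and so on.

import hashlib
import json

def add_record(chain, module, score):
    """Append a completed-module record linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"module": module, "score": score, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash and check each link back to its predecessor."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

profile = []
add_record(profile, "Intro to Statistics", 92)
add_record(profile, "Data Visualization", 88)
print(verify(profile))     # chain is intact: True
profile[0]["score"] = 100  # tamper with history...
print(verify(profile))     # ...and verification fails: False
```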

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while the smaller screens – smartphones, laptops, and/or desktop workstations – will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, or others)
    • The ability to annotate on top of the application(s) and point to things within the app(s)
    • Media being used to support the presentation, such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, and more
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using technologies similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use with chatbots, messaging, and inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology, and also so that anyone can stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
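From DSC: the built-in RSS aggregator mentioned above could start as small as this sketch, which parses a feed with Python’s standard library and surfaces a “stream of content.” The sample feed, titles, and URLs are made up for illustration; a real aggregator would fetch many feeds over HTTP on a schedule.

```python
# Minimal RSS "stream of content" reader using only the standard library.
# The feed below is a made-up sample; real feeds would be fetched over HTTP.

import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Learning Ecosystems</title>
  <item><title>Workbots at work</title><link>https://example.com/workbots</link></item>
  <item><title>Alexa adds a screen</title><link>https://example.com/echo-show</link></item>
</channel></rss>"""

def stream_of_content(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in stream_of_content(SAMPLE_FEED):
    print(f"{title} -> {link}")
```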

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

An Artificial Intelligence Developed Its Own Non-Human Language — from theatlantic.com by Adrienne LaFrance
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

Excerpt:

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.

From DSC:
There are now more than 12,000 skills on Amazon’s new platform – Alexa. I continue to wonder…what will this new platform mean for – and deliver to – societies throughout the globe?

From this Alexa Skills Kit page:

What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.

What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

You can build and host most skills for free using Amazon Web Services (AWS).
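From DSC: for readers curious about what building a skill actually involves, here is a minimal sketch of a custom skill handler (e.g., run as an AWS Lambda function) answering in the Alexa Skills Kit JSON request/response format. The skill’s name and reply text are my own made-up examples; see the ASK documentation for the full interface.

```python
# Minimal custom Alexa skill handler sketch. The envelope shape follows
# the Alexa Skills Kit JSON interface; the reply text is a made-up example.

def lambda_handler(event, context=None):
    """Handle an incoming Alexa request and return a spoken response."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        text = "Welcome to the demo skill. Ask me for a study tip."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        text = f"You invoked the {intent} intent."
    else:
        text = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

reply = lambda_handler({"request": {"type": "LaunchRequest"}})
print(reply["response"]["outputSpeech"]["text"])
```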

7 things you should know about artificial intelligence in teaching and learning — from Educause Learning Initiative (ELI)

Abstract:

The term artificial intelligence (AI) refers to computer systems that undertake tasks usually thought to require human cognitive processes and decision-making capabilities. To exhibit intelligence, computers apply algorithms to find patterns in large amounts of data—a process called machine learning, which plays a key role in a number of AI applications. AI learning agents have the potential to function like adaptive learning but at a much more sophisticated and nuanced level, potentially giving every student a computer-simulated personal mentor. Many colleges and universities are developing AI projects that aid teaching and learning.

7 things you should know about the evolution of teaching and learning professions — from Educause Learning Initiative (ELI)

Abstract

For this issue of the 7 Things, we asked a set of seven community leaders—who come from different walks of life in the community—to offer a short meditation on the evolution of the profession. In this issue you will find comments from professionals such as an instructional designer, a CIO, an accessibility expert, and a librarian. We hope that this issue and the spotlight it casts on the evolution of our profession will encourage us to begin further conversations about where we are headed and how we can help one another to achieve our professional goals.

 

Chief information officers are fast becoming chief innovation officers. It is increasingly critical for the CIO to be an advocate and leader of transformational change on campus rather than a director and manager of IT operations.

A key “big picture” area is the mission of teaching and learning. How do the systems we select today enable improved learning opportunities over the next three years? Will this solution empower students and faculty for years to come or merely meet a tactical need today?

There are increasing opportunities for librarians to work as partners with faculty to develop challenging assignments that encourage students to create a project with an output of a video, podcast, website, data visualization, blog, or other format.

“Support” connotes a hierarchy that doesn’t recognize that staff are valuable assets who play an important role in postsecondary education. We need to find a new language that promotes the ethos of service and servant leadership, within the context of describing ourselves as non-faculty educators and alternative academics.

Once, we thought the faculty role was expanding such that instructors would become learning designers and proto-technologists. Instead, an increasingly competitive and austere landscape is putting competing pressures on faculty, either around research expectations or expanded teaching responsibilities, preventing most from expanding their roles.
 

Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

From DSC:

Now we’re seeing a major competition among the heavy-hitters to own one’s living room, kitchen, and more. Voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

…with the company adding about 100 skills per day. (source)

Addendum on 5/10/17:

The 2017 Dean’s List: EdTech’s 50 Must-Read Higher Ed Blogs — from edtechmagazine.com by Meghan Bogardus Cortez
These administrative all-stars, IT gurus, teachers and community experts understand how the latest technology is changing the nature of education.

Excerpt:

With summer break almost here, we’ve got an idea for how you can use some of your spare time. Take a look at the Dean’s List, our compilation of the must-read blogs that seek to make sense of higher education in today’s digital world.

Follow these education trailblazers for not-to-be-missed analyses of the trends, challenges and opportunities that technology can provide.

If you’d like to check out the Must-Read IT blogs from previous years, view our lists from 2016, 2015, 2014 and 2013.

 
From DSC:
I would like to thank Tara Buck, Meghan Bogardus Cortez, D. Frank Smith, Meg Conlan, and Jimmy Daly and the rest of the staff at EdTech Magazine for their support of this Learning Ecosystems blog through the years — I really appreciate it. 

Thanks all for your encouragement through the years!

What to look for when hiring an entry-level data scientist? — from datasciencecentral.com

Excerpt:

What I look for the most is some signal that the junior data scientist:

  1. Has the drive and determination to be a self-directed learner
  2. Understands the fundamentals of “enough” programming
  3. Understands how to analyze data when the goals and metrics are not explicit or time-boxed

Let’s put aside the need for some level of formal training; that is a non-negotiable baseline. You have to have enough understanding of mathematics and statistics to know when you are getting yourself into trouble, you have to understand data management practice enough to understand how to access data, and you have to understand enough about machine learning to make the appropriate series of tradeoffs in model development and validation. That is table stakes; what makes one candidate stand out above the others is everything else surrounding these core concepts.

Also see:

Image from Ronald van Loon

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 
“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

 
© 2016 Learning Ecosystems