From DSC:
When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?

What does it mean for:

  • Students / learners
  • Faculty members
  • Teachers
  • Trainers
  • Instructional Designers
  • Interaction Designers
  • User Experience Designers
  • Curriculum Developers
  • …and others?

Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm…

NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota

Excerpt:

Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.

The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.

Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
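From DSC:
For readers wondering what it actually takes to add a “skill” like this, below is a bare-bones sketch of a custom-skill handler running on AWS Lambda. It is only an illustration — the intent name and reply text are hypothetical, not NASA’s actual implementation — but the request/response envelope is the one the Alexa Skills Kit expects.

```python
# Minimal sketch of an Alexa custom-skill handler on AWS Lambda (Python).
# "MarsFactIntent" and the reply text are hypothetical examples.

def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        speech = "Welcome. Ask me a question about Mars."
    elif request_type == "IntentRequest":
        intent_name = event["request"]["intent"]["name"]
        if intent_name == "MarsFactIntent":
            speech = "Olympus Mons on Mars is the tallest volcano in the solar system."
        else:
            speech = "Sorry, I don't know that one yet."
    else:  # SessionEndedRequest and anything else
        speech = "Goodbye."

    # Response envelope expected by the Alexa Skills Kit
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The interesting design work — deciding which intents a learner might voice and how the skill should respond — is exactly where instructional designers, interaction designers, and subject-matter experts come in.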

nasa-alexa-11-29-16

Also see:


 

What Is Alexa? What Is the Amazon Echo, and Should You Get One? — from thewirecutter.com by Grant Clauser

 

side-by-side2

Amazon launches new artificial intelligence services for developers: Image recognition, text-to-speech, Alexa NLP — from geekwire.com by Taylor Soper

Excerpt (emphasis DSC):

Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.

At the company’s AWS re:invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.

The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
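From DSC:
To make that “simply use an API call” claim concrete, here is a rough sketch using the AWS SDK for Python (boto3). The region, bucket, and file names are placeholder assumptions, and you would need AWS credentials with access to these services.

```python
import boto3

# Text-to-speech with Amazon Polly: turn a short passage into an MP3 file.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(
    Text="Welcome to today's lesson on the geology of Mars.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("lesson-intro.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Image analysis with Amazon Rekognition: label the contents of a photo
# stored in S3 (the bucket and key below are placeholders).
rekognition = boto3.client("rekognition", region_name="us-east-1")
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "field-trip-photo.jpg"}},
    MaxLabels=5,
)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```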

Amazon announces three new AI services, including a text-to-voice service, Amazon Polly — by D.B. Hebbard

AWS Announces Three New Amazon AI Services
  • Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
  • Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
  • Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
  • Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services

Excerpt:

SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.

To learn more about Amazon Lex, Amazon Polly, or Amazon Rekognition, visit:
https://aws.amazon.com/amazon-ai
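From DSC:
The “have conversations using voice or text” piece above is Amazon Lex. Here is a rough sketch of what that looks like from the developer’s side (again with boto3), assuming you had already built and published a bot — the bot name, alias, and question below are hypothetical.

```python
import boto3

# Send one text utterance to a (hypothetical) published Lex bot and read the reply.
lex = boto3.client("lex-runtime", region_name="us-east-1")

reply = lex.post_text(
    botName="CourseFAQBot",      # placeholder bot name
    botAlias="prod",             # placeholder alias
    userId="student-42",         # any stable id for this conversation
    inputText="When is the next assignment due?",
)

print(reply.get("intentName"))   # which intent Lex matched
print(reply.get("dialogState"))  # e.g. "Fulfilled" or "ElicitSlot"
print(reply.get("message"))      # the bot's text response
```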

LinkedIn ProFinder expands nationwide to help you hire freelancers — from blog.linkedin.com

Excerpt:

The freelance economy is on the rise. In fact, the number of freelancers on LinkedIn has grown by nearly 50% in just the past five years. As the workforce evolves, we, too, are evolving to ensure we’re creating opportunity for the expanding sector of professionals looking for independent, project-based work in place of the typical 9 to 5 profession.

Last October, we began piloting a brand new platform in support of this very endeavor and today, we’re excited to announce its nationwide availability. Introducing LinkedIn ProFinder, a LinkedIn marketplace that connects consumers and small businesses looking for professional services – think Design, Writing and Editing, Accounting, Real Estate, Career Coaching – with top quality freelance professionals best suited for the job.

Also see:

 

linkedin-profinder-aug2016

 

Also see:

 

40percentfreelancersby2020-quartz-april2013

 

Specialists central to high-quality, engaging online programming [Christian]

DanielChristian-TheEvoLLLution-TeamsSpecialists-6-20-16

 

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

 

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?



WhatIsEchoDot-June2016

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.

Or how might students learn about the myriad of technologies involved with IBM’s Watson?  What courses are out there today that address these technologies? Are more such courses in the works? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

 

WhatIsIBMWatson-June2016

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource

 

The future of interaction is multimodal.

 

Excerpts:

The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.

Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?

Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.

Forget about buttons — think actions.

Eliminate the need for a cursor as feedback, but provide an alternative.
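From DSC:
A small, entirely hypothetical sketch of the “think actions, not buttons” advice above: recognized gestures are bound directly to semantic actions, and feedback is given without any cursor (a tone, a flash, a spoken confirmation). The gesture names, threshold, and actions are made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GestureEvent:
    name: str          # e.g. "swipe_left", "palm_push", "pinch"
    confidence: float  # recognizer's confidence, 0.0 - 1.0

def next_slide():     print("Action: advance to the next slide")
def previous_slide(): print("Action: go back one slide")
def pause_video():    print("Action: pause playback")

# Gestures map to actions, not to on-screen button targets.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "swipe_left": next_slide,
    "swipe_right": previous_slide,
    "palm_push": pause_video,
}

def handle_gesture(event: GestureEvent) -> None:
    action = GESTURE_ACTIONS.get(event.name)
    if action is None or event.confidence < 0.7:
        # Cursor-free feedback: say/show what was (not) understood.
        print(f"Feedback: gesture '{event.name}' not recognized -- try again")
        return
    print(f"Feedback: '{event.name}' recognized")  # e.g. flash a border, play a tone
    action()

handle_gesture(GestureEvent("palm_push", 0.92))
```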

Creators of Siri reveal first public demo of AI assistant “Viv” — from seriouswonder.com by B.J. Murphy

Excerpts:

When it comes to AI assistants, a battle has been waged between different companies, with assistants like Siri, Cortana, and Alexa at the forefront. And now a new potential competitor enters the arena.

During a 20-minute onstage demo at Disrupt NYC, Siri creators Dag Kittlaus and Adam Cheyer revealed Viv – a new AI assistant that makes Siri look like a children’s toy.

 

“Viv is an artificial intelligence platform that enables developers to distribute their products through an intelligent, conversational interface. It’s the simplest way for the world to interact with devices, services and things everywhere. Viv is taught by the world, knows more than it is taught, and learns every day.”

 

VIV-2-May2016

From DSC:
I saw a posting at TechCrunch.com the other day — The Information Age is over; welcome to the Experience Age.  I mention that article here not so much for its content as for its title.  An interesting concept…and probably spot on, with ramifications for numerous types of positions, skillsets, and industries all over the globe.

Also see:

 

VIV-May2016

Addendum on 5/12/16:

  • New Siri sibling Viv may be next step in A.I. evolution — from computerworld.com by Sharon Gaudin
    Excerpt:
    With the creators of Siri offering up a new personal assistant that won’t just tell you what pizza is but can order one for you, artificial intelligence is showing a huge leap forward. Viv is an artificial intelligence (AI) platform built by Dag Kittlaus and Adam Cheyer, the creators of the AI behind Apple’s Siri, the most well-known digital assistant in the world. Siri is known for answering questions, like how old Harrison Ford is, and reminding you to buy milk on the way home. Viv, though, promises to go well past that.

A New Morning — by Magic Leap; posted on 4/19/16
Welcome to a new way to start your day. Shot directly through Magic Leap technology on April 8, 2016, without the use of special effects or compositing.

 

MagicLeap-ANewMorning-April2016

 

Also see:

How Google is reimagining books — from fastcodesign.com by Meg Miller
“Editions at Play” sees designers and authors working simultaneously to build a new type of e-book from the ground up.

Excerpt:

The first sign that Reif Larsen’s Entrances & Exits is not a typical e-book comes at the table of contents, which is just a list of chapters titled “Location Unknown.” Click on one of them, and you’ll be transported to a location (unknown) inside Google Street View, facing a door. Choose to enter the house and that’s where the narrative, a sort of choose-your-own-adventure string of vignettes, begins. As the book’s description reads, it’s a “Borgesian love story” that “seamlessly spans the globe,” and it represents a fresh approach to the book publishing industry.

Larsen’s book is one of the inaugural titles from Editions at Play, a joint e-books publishing venture between Google Creative Lab Sydney and the design-driven publishing house Visual Editions, which launched this week. With the mission of reimagining what an e-book can be, Editions at Play brings together the author, developers, and designers to work simultaneously on building a story from the ground up. They are the opposite of the usual physical-turned-digital-books; rather, they’re books that “cannot be printed.”

 

From DSC:
Interesting to note the use of teams of specialists here…

Winner revealed for Microsoft’s HoloLens App Competition — from vrfocus.com by Peter Graham

Excerpt:

Out of the thousands of ideas entered, Airquarium, Grab the Idol and Galaxy Explorer were the three that made it through. Out of those, the eventual winner was Galaxy Explorer, with a total of 58 per cent of the votes. The app aims to give users the ability to wander the Milky Way and learn about our galaxy, navigating through the stars and landing on the myriad planets that are out there.

Also see:

Virtual Reality in 2016: What to expect from Google, Facebook, HTC and others in 2016 — from tech.firstpost.com by Naina Khedekar

Excerpt:

Many companies have made their intentions for virtual reality clear. Let’s see what they are up to in 2016.

Also see:

 

Somewhat related, but in the AR space:

Excerpt:

Augmented reality (AR) has continued to gain momentum in the educational landscape over the past couple of years. The educators featured below have dived in headfirst, using AR in their classrooms and schools. They continue to share excellent resources to help educators see how augmented reality can engage students and deepen understanding.

Intel launches x-ray-like glasses that allow wearers to ‘see inside’ objects — from theguardian.com
Smart augmented reality helmet allows wearers to overlay maps, schematics and thermal images to effectively see through walls, pipes and other solid objects

Excerpt:

Unlike devices such as HoloLens or Google Glass, which have been marketed as consumer devices, the Daqri Smart Helmet is designed with industrial use in mind. It will allow the wearer to effectively peer into the workings of objects using real-time overlay of information, such as wiring diagrams, schematics and problem areas that need fixing.

Top 20 User Experience Blogs and Resources of 2015 — from usabilitygeek.com by Matt Ellis

Example resources from that posting:

 

ux-design-feb2016

ux-design2-feb2016

ux-design3-feb2016

BONUS: 15 Honorable Mentions

The following is a list of other excellent user experience blogs that did not make it in this list but are so good that they are still worth a mention. So please be sure to check them out as well!

 

From DSC:
Though I’m sure this list is missing many talented folks and firms, it’s a great place to start learning about user experience design, interface design, interaction design, prototyping, and usability testing.

From DSC:
Currently, you can add interactivity to your digital videos; several tools already allow you to do this.

So I wonder…what might interactivity look like in the near future when we’re talking about viewing things in immersive virtual reality (VR)-based situations?  When we’re talking about videos made using cameras that can provide 360 degrees’ worth of coverage, how are we going to interact with/drive/maneuver around such videos? What types of gestures and/or input devices, hardware, and software are we going to be using to do so?

What new forms of elearning/training/education will we have at our disposal? How will such developments impact instructional design/designers? Interaction designers? User experience designers? User interface designers? Digital storytellers?

Hmmm…

The forecast?  High engagement, interesting times ahead.

Also see:

  • Interactive video is about to get disruptive — from kineo.com by James Cory-Wright
    Excerpt:
    Seamless and immersive because it all happens within the video
    We can now have embedded hotspots (motion tags) that move within the video; we can use branching within the video to change the storyline depending on the decisions you make; we can show consequences of making that decision; we can add video within video to share expert views, link directly to other rich media, or gather real-time data via social media tools – all without leaving the actual video. A seamless experience. (For one rough sketch of how such branching might be represented in code, see just after this list.)
    .
  • Endless learning: Virtual reality in the classroom — from pixelkin.org by David Jagneaux
    Excerpt:
    What if you could be part of the audience for Martin Luther King Jr.’s riveting “I Have a Dream” speech? What if you could stand in a chemistry lab and experiment without any risk of harm or danger? What if you could walk the earth millions of years ago and watch dinosaurs? With virtual reality technology, these situations could become real. Virtual reality (VR) is a hot topic in today’s game industry, but games are only one aspect of the technology. I’ve had fun putting on a headset and shooting down ships in outer space. But VR also has the potential to enhance education in classrooms.
    .
  • Matter VR
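From DSC:
Regarding the branching and motion-tag hotspots described in the Kineo piece above, here is a rough, hypothetical sketch of how such a structure might be represented in code. The segment names, URLs, and timings are made up for illustration only; real tools will have their own formats.

```python
# Hypothetical sketch of a branching interactive video: each segment has
# clickable hotspots, and each hotspot jumps the viewer to another segment.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Hotspot:
    label: str            # text shown on the motion tag
    appears_at: float     # seconds into the segment
    target_segment: str   # id of the segment to branch to

@dataclass
class Segment:
    segment_id: str
    video_url: str
    hotspots: List[Hotspot] = field(default_factory=list)

story: Dict[str, Segment] = {
    "intro": Segment(
        "intro", "https://example.com/intro.mp4",
        [Hotspot("Ask the expert", 12.0, "expert"),
         Hotspot("Skip ahead", 30.0, "decision")],
    ),
    "expert": Segment("expert", "https://example.com/expert.mp4",
                      [Hotspot("Back to the story", 45.0, "decision")]),
    "decision": Segment("decision", "https://example.com/decision.mp4"),
}

def next_segment(current: str, chosen_label: str) -> str:
    """Follow the hotspot the viewer clicked; stay put if it doesn't exist."""
    for h in story[current].hotspots:
        if h.label == chosen_label:
            return h.target_segment
    return current

print(next_segment("intro", "Ask the expert"))  # -> "expert"
```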

 

MatterVR-Jan2016

 

From DSC:
The folks who worked on the Understood.org site have done a great job! Besides the wonderful resources therein, I really appreciated the user experience their site provides. Check out the interface and functionality you can experience there (and which I’ve highlighted below).

 

We need a similar interface for matching up pedagogies with technologies.

 

DSC-NeedThisInterface4SelectingTechs4Pedagogies--July2015-flat

Also, there are some excellent accessibility features:

Understood-July2015

From DSC:
When you read the article below, you’ll see why I’m proposing the aforementioned interface/service/database — a service that would be similar to Wikipedia in terms of allowing many people to contribute to it.

 

Why ed tech is not transforming how teachers teach  — from edweek.org by Benjamin Herold
Student-centered, technology-driven instruction remains elusive for most

Excerpt:

Public schools now provide at least one computer for every five students. They spend more than $3 billion per year on digital content. And nearly three-fourths of high school students now say they regularly use a smartphone or tablet in the classroom.

But a mountain of evidence indicates that teachers have been painfully slow to transform the ways they teach, despite that massive influx of new technology into their classrooms. The student-centered, hands-on, personalized instruction envisioned by ed-tech proponents remains the exception to the rule.

 

Everything you need to know from today’s Apple WWDC Keynote — from techcrunch.com by Greg Kumparak

Excerpts:

  • The Next Version Of OS X — Apple announced OS X 10.11, or “OS X El Capitan”.
  • iOS 9
  • Siri is getting smarter
  • Split Screen iPad Apps: Perhaps the biggest feature of all, and something that has been rumored for ages: the iPad is getting split screen apps. You’ll now be able to run two apps at once, side by side. (Split screen iPad apps are limited to the latest/most powerful iPad hardware: the iPad Air 2)
  • Picture in picture video
  • CarPlay goes wireless
  • Swift 2: Apple also announced Swift 2, the second iteration of their new programming language.
  • watchOS 2
  • Apple Music: As long rumored, Apple is launching a Spotify/Rdio competitor. It’s $9.99 per month, or $14.99 on a family plan (with support for up to 6 accounts)

The Apple Watch just got a lot more useful — from fastcompany.com by John Paul Titlow
Apple’s WatchOS 2 is here and it will let developers build native, much more capable apps.

If you had your doubts about the Apple Watch, Cupertino just made it much more interesting. At the Worldwide Developers Conference this morning, Apple announced WatchOS 2, a new version of the watch’s operating system that lets developers build native apps for the device.

Watch OS 2 will also ship with a number of new features, like the ability to set photo watch faces, reply to email, take FaceTime audio calls, use Apple Pay’s new virtual loyalty cards, and get mass transit directions.

Apple introduces “News”: An old idea with big potential — from fastcompany.com by John Paul Titlow
WWDC 2015 Update: Apple just unveiled News, a Flipboard-style news aggregation app that will ship with iOS 9.

Excerpt:

Apple is now a Flipboard competitor. When iOS 9 ships in the fall, it will include a new app called News, a newspaper, magazine, and blog aggregator that will feel very familiar to anyone who’s ever used Flipboard, Pulse, or one of the many other apps of this nature. The new app was unveiled by Apple’s vice president of Product Marketing, Susan Prescott, this afternoon at the company’s Worldwide Developers Conference.

 

 

Apple Maps in iOS 9 adds public transit, local business search — from cio.com by Marco Tabini

Apple-6-8-15

 

AppleHomekit-Watch-6-8-15

 

Kevin shows some of the great things developers can do with the new version of WatchKit.

 

MicrosoftProductivityVision2015

 

Example snapshots from
Microsoft’s Productivity Future Vision

MicrosoftProductivityVision2-2015

 

MicrosoftProductivityVision3-2015

 

MicrosoftProductivityVision5-2015

 

MicrosoftProductivityVision6-2015

 

MicrosoftProductivityVision7-2015

 

MicrosoftProductivityVision8-2015

 

MicrosoftProductivityVision4-2015
