Specialists central to high-quality, engaging online programming [Christian]

DanielChristian-TheEvoLLLution-TeamsSpecialists-6-20-16

 

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

 
 

chatbots-wharton-june2016

The rise of the chatbots: Is it time to embrace them? — from knowledge.wharton.upenn.edu

Excerpt:

The tech world is all agog these days about chatbots. These are automated computer programs that simulate online conversations with people to answer questions or perform tasks. While chatbots have been around in various rudimentary forms for years — think of Clippy, Microsoft’s paper clip virtual assistant — they have been taking off lately as advances in machine learning and artificial intelligence make them more versatile than ever. Among the most well-known chatbots: Apple’s Siri.

In rapid succession over the past few months, Microsoft, Facebook and Google have each unveiled their chatbot strategies, touting the potential for this evolving technology to aid users and corporate America with its customer-service capabilities as well as business utility features like organizing a meeting. Yahoo joined the bandwagon recently, launching its first chatbots on a chat app called Kik Messenger.

 

 

 

Bill Gates says the next big thing in tech can help people learn like he does — from businessinsider.com by Matt Weinberger

Excerpt (emphasis DSC):

In a new interview with The Verge, Microsoft cofounder and richest man in the world Bill Gates explained the potential for chatbots (programs you can text with as if they're human) in education.

Gates lauds the potential for what he calls "dialogue richness," where a chatbot can really hold a conversation with a student, essentially making it into a tutor that can walk them through even the toughest, most subjective topics.

It’s actually similar to how Gates himself likes to learn, he tells The Verge…

 

 

The complete beginner’s guide to chatbots — from chatbotsmagazine.com by Matt Schlicht
Everything you need to know.

Excerpt (emphasis DSC):

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

What is a chatbot?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.
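To make the "powered by rules" part of that definition concrete, here is a minimal, hypothetical sketch of a keyword-driven bot. The rules and canned replies are invented for illustration; they are not tied to any particular chat platform or vendor API.

```python
# Minimal rule-based chatbot sketch (hypothetical rules, for illustration only).
RULES = {
    "weather": "It looks sunny and 72 degrees today.",     # stand-in for a weather skill
    "meeting": "Sure. What day works for the meeting?",    # stand-in for a scheduling skill
    "groceries": "I've added that to your grocery list.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand that yet."

if __name__ == "__main__":
    print("Rule-based bot ready. Type 'quit' to exit.")
    while True:
        user = input("> ")
        if user.strip().lower() == "quit":
            break
        print(reply(user))
```

A production bot would swap the keyword lookup for intent classification (the "sometimes artificial intelligence" part) and would sit behind a messaging platform's webhook rather than a terminal prompt.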

Examples of Chat Bots
Weather bot. Get the weather whenever you ask.
Grocery bot. Help me pick out and order groceries for the week.
News bot. Ask it to tell you whenever something interesting happens.
Life advice bot. I’ll tell it my problems and it helps me think of solutions.
Personal finance bot. It helps me manage my money better.
Scheduling bot. Get me a meeting with someone on the Messenger team at Facebook.
A bot that’s your friend. In China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.

 

 

 

Chatbots explained: Why Facebook and other tech companies think they’re the biggest thing since the iPhone — from businessinsider.com by Biz Carson

Excerpt:

Chatbots are the future, whether we’re ready for them or not.

On Tuesday (April 5, 2016), Facebook launched Bots for Messenger, a step that could define the next decade in the same way that the Apple App Store launch paved the path for companies like Uber to build a business off your phone. Its new messaging platform will help businesses build intelligent chatbots to let them communicate in Messenger.

“Today could be the beginning of a new era,” said Facebook Messenger chief David Marcus.

So what are these chatbots, and why is everyone obsessed?

 

 

 

Facebook wants to completely revolutionize the way you talk to businesses — from businessinsider.com by Jillian D’Onfro

 

 

 

Bot wars: Why big tech companies want apps to talk back to you — from fastcompany.com by Jared Newman
Can a new wave of chatbots from Facebook and Microsoft upend apps as we know them, or is that just wishful thinking?

Excerpt:

The rise of conversational “chatbots” begins with a claim you might initially dismiss as preposterous. “Bots are the new apps,” Microsoft CEO Satya Nadella declared during the company’s Build developers conference last month. “People-to-people conversations, people-to-digital assistants, people-to-bots, and even digital assistants-to-bots. That’s the world you’re going to get to see in the years to come.”

 

 

 

Microsoft CEO Nadella: ‘Bots are the new apps’

Excerpt:

SAN FRANCISCO – Microsoft CEO Satya Nadella kicked off the company’s Build developers conference with a vision of the future filled with chatbots, machine learning and artificial intelligence.

“Bots are the new apps,” said Nadella during a nearly three-hour keynote here that sketched a vision for the way humans will interact with machines. “People-to-people conversations, people-to-digital assistants, people-to-bots and even digital assistants-to-bots. That’s the world you’re going to get to see in the years to come.”

Onstage demos hammered home those ideas. One involved a smartphone conversing with digital assistant Cortana about planning a trip to Ireland, which soon found Cortana bringing in a Westin Hotels chatbot that booked a room based on the contents of the chat.


Addendums on 6/17/16:

 

HolographicStorytellingJWT-June2016

HolographicStorytellingJWT-2-June2016

 

Holographic storytelling — from jwtintelligence.com by Jade Perry

Excerpt (emphasis DSC):

The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.

‘New Dimensions in Testimony’ is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.

Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1,000 questions, generating approximately 25 hours of footage. Thanks to natural language processing from Conscience Display, viewers were able to ask Gutter's holographic image questions that triggered relevant responses.
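The article does not spell out the project's pipeline, but the core idea (map a visitor's question to the most relevant pre-recorded answer) can be sketched with ordinary text-similarity tools. The questions and clip names below are invented placeholders, not the project's data, and TF-IDF matching is just one simple stand-in for whatever the real system uses.

```python
# Sketch: pick the most relevant pre-recorded clip for a visitor's question
# using TF-IDF + cosine similarity (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the recorded question/answer pairs.
recorded_questions = [
    "Where were you born?",
    "What happened to your family during the war?",
    "What message do you have for young people today?",
]
recorded_clips = ["clip_001.mp4", "clip_002.mp4", "clip_003.mp4"]

vectorizer = TfidfVectorizer().fit(recorded_questions)
question_matrix = vectorizer.transform(recorded_questions)

def best_clip(visitor_question: str) -> str:
    """Return the clip whose recorded question is most similar to the visitor's."""
    query = vectorizer.transform([visitor_question])
    scores = cosine_similarity(query, question_matrix)[0]
    return recorded_clips[scores.argmax()]

print(best_clip("Can you tell me about where you grew up?"))  # -> clip_001.mp4
```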

 

 

From DSC:
I wonder…is this an example of a next generation, visually-based chatbot*?

With the growth of artificial intelligence (AI), intelligent systems, and new types of human computer interaction (HCI), this type of concept could offer an on-demand learning approach that’s highly engaging — and accessible from face-to-face settings as well as from online-based learning environments. (If it could be made to take in some of the context of a particular learner and where a learner is in the relevant Zone of Proximal Development (via web-based learner profiles/data), it would be even better.)

As an aside, is this how we will obtain customer service from the businesses of the future? See below.

 


 

 

*The complete beginner’s guide to chatbots — from chatbotsmagazine.com by Matt Schlicht
Everything you need to know.

Excerpt (emphasis DSC):

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

What is a chatbot?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.

Examples of chatbots
Weather bot. Get the weather whenever you ask.
Grocery bot. Help me pick out and order groceries for the week.
News bot. Ask it to tell you whenever something interesting happens.
Life advice bot. I’ll tell it my problems and it helps me think of solutions.
Personal finance bot. It helps me manage my money better.
Scheduling bot. Get me a meeting with someone on the Messenger team at Facebook.
A bot that’s your friend. In China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.

 

 

CNBCDisruptorsTop50-2016

CNBCDisruptorsTop50-2016-2

 

Meet the 2016 CNBC Disruptor 50 companies — from cnbc.com
CNBC reveals the 2016 Disruptor 50 list, identifying start-ups out ahead of big consumer and business shifts, and already worth billions.

Excerpt:

In the fourth annual Disruptor 50 list, CNBC features private companies in 15 industries — from aerospace to financial services to cybersecurity to retail — whose innovations are revolutionizing the business landscape. These forward-thinking start-ups have identified unexploited niches in the marketplace that have the potential to become billion-dollar businesses, and they rushed to fill them. Some have already passed the billion-dollar mark at a speed that is unprecedented. In the process, they are creating new ecosystems for their products and services. Unseating corporate giants is no easy feat. But we ranked those venture capital–backed companies doing the best job. In aggregate, these 50 companies have raised $41 billion in venture capital at an implied Disruptor 50 list market valuation of $242 billion, according to PitchBook data. Already it’s hard to think of the world without them. Read more about the consumer and business trends that stand out in the 2016 list ranking and the methodology used to select this year’s Disruptor companies.

 

 

 

WWDC-2016-Apple

 

Keynote address:

keynoteaddress-wwdc-2016

From Apple:

 

 

These are Apple’s big announcements from WWDC 2016 — from imore.com by Joseph Keller

Excerpt:

Apple made several interesting announcements today during its WWDC 2016 keynote. Here are the major announcements from the event.

  • iOS 10 — the big focus with this release is on Siri.
  • macOS Sierra — The next version of Apple’s operating system for desktops and laptops is dropping the ‘X’, opting instead for the new ‘macOS’ branding. With macOS, Siri makes its debut on Apple’s traditional computers for the first time.
  • watchOS
  • tvOS — The next major version of the Apple TV’s software will offer single sign-on for cable logins, along with its own dark mode. There will also be a number of Siri enhancements, as well as improvements for watching live TV.

 

 

Highlights from Apple’s WWDC 2016 Keynote — from fastcompany.com
From Messages to Music, and Siri to Apple Pay on the web, here are the most important announcements from Apple’s event today.

 

 

Apple launches Swift Playgrounds for iPad to teach kids to code — from techcrunch.com by Frederic Lardinois

Excerpt:

Apple today announced Swift Playgrounds for the iPad, a new project that aims to teach kids to code in Swift.

When you first open it, Swift Playgrounds presents you with a number of basic coding lessons, as well as challenges. The interface looks somewhat akin to Codecademy, but it’s far more graphical and playful, which makes sense, given that the target audience is kids. Most of the projects seem to involve games and fun little animations to keep kids motivated.

To make coding on the iPad a bit easier, Apple is using a special keyboard with a number of shortcuts and other features that will make it easier to enter code.

Also see:

SwiftPlaygroundsFromApple-6-13-16

 

 

What’s new in iOS 10: Siri and Maps open to developers, machine learning and more — from arc.applause.com

Excerpts:

The biggest news for Siri from the WWDC keynote: Apple’s assistant is now open to third party developers.

Apple is now opening Siri to all of those potential interactions for developers through SiriKit. Siri will be able to access messaging, photos, search, ride booking through Uber or Lyft etc., payments, health and fitness and so forth. Siri will also be incorporated into Apple CarPlay apps to make it easier to interact with the assistant while driving.

 

 

Photos is getting a machine learning boost and automatic ‘Memories’ albums — from imore.com by Dan Thorp-Lancaster

Excerpt:

Speaking at WWDC 2016, Apple announced that it is bringing the power of machine learning to the Photos app in iOS 10. With machine learning, the Photos app will include object and scene recognition thanks to what Apple calls “Advanced Computer Vision.” For example, the app will be able to automatically pick out specific animals, features and more. Facial recognition is also available, all done locally on the iPhone, with automatic people albums.
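Apple has not published the details of its on-device pipeline, so the following is only a generic sketch of the kind of object recognition described above, using an off-the-shelf pretrained ImageNet model; the file name is a made-up placeholder.

```python
# Generic photo-labeling sketch with a pretrained model (illustrative only;
# not Apple's on-device "Advanced Computer Vision" pipeline).
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")  # downloads pretrained weights on first use

def label_photo(path, top=3):
    """Return the top predicted labels for one photo, e.g. ('golden_retriever', 0.87)."""
    img = image.load_img(path, target_size=(224, 224))   # resize to the model's input size
    batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    predictions = model.predict(batch)
    return [(name, float(score)) for _, name, score in decode_predictions(predictions, top=top)[0]]

print(label_photo("vacation_photo.jpg"))  # hypothetical file path
```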

 

 

Home is a new way to control all of your HomeKit-enabled accessories — from imore.com by Jared DiPane

Excerpt:

Apple has announced its newest and easiest way to control any and all HomeKit accessories that you may have in your house: Home. With Home, you’ll be able to control all of your accessories, including air conditioners, cameras, door locks and other new categories. 3D Touch will be able to give you deeper controls at just a press, and notifications from these accessories will have 3D Touch functionality as well.

 

 

Here’s what Apple is bringing to the Apple TV — from fastcompany.com
tvOS is taking a step forward with updates to Siri, and new features such as single sign on, dark mode, and more.

 

NumbertvOS-Apps-6000

Excerpt:

Less than nine months after the first version of tvOS, there are now over 6,000 native apps for the Apple TV. Of those apps, 1,300 are for streaming video. Popular over-the-top Internet television service Sling TV arrives on the Apple TV today. Live Fox Sports Go streaming will come this summer. Speaking of apps: Apple is introducing a new Apple TV Remote app for iOS that allows people to navigate tvOS using Siri from their iPhone and iPad.

Download apps on iPhone and get them on Apple TV
Now when you download an app on your iPad or iPhone, if there is an Apple TV version of the app, it will download to your Apple TV automatically.

 

 

tvOS 10 FAQ: Everything you need to know! — from imore.com by Lory Gil

 

 

Apple iOS 10 “Memories” turns old photos into editable mini-movies — from techcrunch.com by Josh Constine
Using local, on-device facial recognition and AI detection of what’s in your images, it can combine photos and videos into themed mini-movies complete with transitions and a soundtrack.

 

 

Apple announces iOS 10 — from techcrunch.com

 

 

Apple launches iMessage Apps so third-party devs can join your convos — from techcrunch.com by Jordan Crook

 

 

Don’t brick your iPhone, iPad, Mac, or Apple Watch by installing developer betas — from imore.com by Serenity Caldwell

Excerpt:

As a reminder: You shouldn’t install developer betas on your primary devices if you want them to work.

This is our yearly reminder, folks: Unless you’re a developer with a secondary iPhone or Mac, we strongly, strongly urge you to consider not installing developer betas on your devices.

It’s not because we don’t want you to have fun: iOS 10, watchOS, tvOS, and macOS have some phenomenal features coming this Fall. But they’re beta seeds for a reason: These features are not fully baked, may crash at will, and probably will slow down or crash your third-party applications.

 

 

You already have the ultimate Apple TV remote, and it’s in your pocket — from techradar.com by Jon Porter

 

 

Apple quietly outs ‘next-generation’ file system destined for full product lineup — from imore.com by Dan Thorp-Lancaster

 

 

watchOS 3 FAQ: Everything you need to know — from imore.com by Mikah Sargent

 

 


 

Addendum on 6/15/16:

 

Bringing Outlook Mail and Calendar to Microsoft HoloLens — from blogs.office.com
and
Microsoft Outlook makes its Augmented-Reality (AR) debut on HoloLens — from eweek.com by Pedro Hernandez

MailCalendarToHoloLens-June2016

 

 

 

Introducing the New Blippar App: The power of visual discovery — from blippar.com

Excerpt:

Blipparsphere is our new proprietary knowledge graph, baked right into the Blippar app as of today. It builds on our existing computer vision and machine learning technology to capture deeper, richer information about the world around you.

 

 

 

This augmented reality app will now have an added feature — by Sneha Banerjee
Blipparsphere is live now in the Blippar app for iOS and Android.

Excerpt:

This technology startup, which specializes in augmented reality, artificial intelligence and computer vision, launched Blipparsphere. This is a new proprietary knowledge graph technology, which is live now on the Blippar app. Blipparsphere builds on the company’s existing machine learning and computer vision capabilities to deepen and personalize information about a user’s physical surroundings, providing a true visual discovery browser through the app.

 

 

 

5 top virtual reality & augmented reality technology trends for 2016 — from marxentlabs.com by Joe Bardi

Excerpt:

What are the top Virtual Reality and Augmented Reality technology trends for 2016?
2015 was a year of tantalizing promise for Virtual Reality and Augmented Reality technology, with plenty of new hardware announced and initial content forays hitting the mainstream. 2016 is shaping up as the year that promise is fulfilled, with previously drooled-over hardware finally making its way into the hands of consumers, and exciting new content providing unique experiences to a public hungry to experience this new technology. So where are Virtual Reality and Augmented Reality headed in 2016? Here are our top 5 emerging trends…

 

 

 

CASE STUDY: How Lowe’s used the PIONEERS framework to lead a successful augmented reality and virtual reality project — from marxentlabs.com by Beck Besecker

Excerpt:

PIONEERS Case Study: Lowe’s Innovation Labs
By following the PIONEERS framework, Lowe’s Innovation Labs was able to create a brand new buying experience that is wowing customers. A project of Lowe’s Innovation Labs, the Lowe’s Holoroom is powered by Marxent’s VisualCommerce™, which enables Lowe’s to fill the Holoroom’s 3D space with virtual renderings of real products stocked by the home improvement retailer. Shoppers can design their perfect kitchen and then literally walk into it, share it via YouTube 360, and then buy the products that they’ve selected and turn their virtual design into reality.

 

 

 

College students experiment with virtual reality — from edtechmagazine.com by Eli Zimmerman
Innovative course curricula at three higher ed institutions give students hands-on practice with virtual reality.

 

College-VR-Experiments-May2016

 

 

 

HoloAnatomy app previews use of augmented reality in medical schools — from medgadget.com

Excerpt:

The Cleveland Clinic has partnered with Case Western Reserve University to release a Microsoft HoloLens app that allows users to explore the human body using augmented reality technology. The HoloLens is a headset that superimposes computer generated 3D graphics onto a person’s field of view, essentially blending reality with virtual reality.

The HoloAnatomy app lets people explore a virtual human, to walk around it looking at details of different systems of the body, and to select which are showing. Even some sounds are replicated, such as that of the beating heart.


Will “class be in session” soon on tools like Prysm & Bluescape? If so, there will be some serious global interaction, collaboration, & participation here! [Christian]

From DSC:
Below are some questions and thoughts that are going through my mind:

  • Will “class be in session” soon on tools like Prysm & Bluescape?
  • Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
  • Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
  • Prysm is already available on mobile devices, and what we consider a television continues to morph
  • Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
  • Will colleges and universities innovate into such setups?  Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
  • How will training, learning and development groups leverage these tools/technologies?
  • Are there some opportunities for homeschoolers here?

Along these lines, here are some videos/images/links for you:

 

 

PrysmVisualWorkspace-June2016

 

PrysmVisualWorkspace2-June2016

 

BlueScape-2016

 

BlueScape-2015

 

 



 

 

DSC-LyndaDotComOnAppleTV-June2016

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 



 

Also see:

kitchenstories-AppleTV-May2016

 

 

 

 


 

Also see:

 


Prysm Adds Enterprise-Wide Collaboration with Microsoft Applications — from ravepubs.com by Gary Kayye

Excerpt:

To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
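As a rough illustration of the kind of call an integration like this sits on top of, here is a hedged sketch of listing a user's OneDrive for Business files via the Microsoft Graph API. OAuth token acquisition is omitted, and none of this is Prysm's actual code; only the standard Graph endpoint and response fields are assumed.

```python
# Sketch: list items in the signed-in user's OneDrive root via Microsoft Graph
# (illustrative only; assumes an OAuth access token with Files.Read permission).
import requests

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def list_drive_root(access_token):
    """Return (name, webUrl) pairs for the items in the user's OneDrive root folder."""
    response = requests.get(
        f"{GRAPH_ROOT}/me/drive/root/children",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return [(item["name"], item.get("webUrl")) for item in response.json().get("value", [])]

# Usage (token acquisition via your own OAuth flow is assumed):
# for name, url in list_drive_root(token):
#     print(name, url)
```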

 


 

 

 

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?



WhatIsEchoDot-June2016

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
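A rough sketch of the wake-word behavior described above: everything is ignored until an utterance starts with one of the configured wake words, and only then is the rest treated as a command. This is simulated gating logic for illustration, not Amazon's far-field detection.

```python
# Wake-word gating sketch (illustrative only).
WAKE_WORDS = {"alexa", "amazon", "echo"}   # the selectable wake words described above

def extract_command(transcript):
    """Return the command portion if the utterance starts with a wake word, else None."""
    words = transcript.lower().split()
    if words and words[0].strip(",.!?") in WAKE_WORDS:
        return " ".join(words[1:])
    return None   # no wake word, so the device stays idle

# Simulated utterances picked up by the microphone:
for heard in ["Alexa, what's the weather?", "turn up the music", "Echo play jazz"]:
    command = extract_command(heard)
    print(repr(heard), "->", ("command: " + command) if command else "ignored")
```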

 

 

Or how might students learn about the myriad of technologies involved with IBM’s Watson?  What courses are out there today that address this type of thing?  Are more courses in the works that will address this type of thing? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

 

WhatIsIBMWatson-June2016

 

 

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.


We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regards to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the state of their fellow mankind in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow mankind right out the window…with massive repercussions to follow. After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits. Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of
Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House hosts this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, they gained confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement them, or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are currently many industries that will likely be replaced with robots in the coming future, and with retail being one of the biggest industries across the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores, as well as direct customers to the items they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and it will start driving to the exact location. It will also conduct inventory using RFID scanners, and overall, it will make the retail process much more efficient. Check out the video below from the German robotics company Metre Labs, which is behind the retail robot.

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots.  I can understand about using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 

 

 

 

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — from wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks are only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess and Go and various other games. Indeed, its AI algorithms were developed for something far greater: To “solve intelligence” by creating general purpose AI that can be used for a host of applications and, in essence, learn on their own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.

 

Top 10 future technology jobs: VR developer, IoT specialist and AI expert — from v3.co.uk; with thanks to Norma Owen for this resource
V3 considers some of the emerging technology jobs that could soon enter your business

Top 10 jobs:

10. VR developer
9.   Blockchain engineer/developer
8.   Security engineer
7.   Internet of Things architect
6.   UX designer
5.   Data protection officer
4.   Chief digital officer
3.   AI developer
2.   DevOps engineer
1.   Data scientist

 

Amazon now lets you test drive Echo’s Alexa in your browser — by Dan Thorp-Lancaster

Excerpt:

If you’ve ever wanted to try out the Amazon Echo before shelling out for one, you can now do just that right from your browser. Amazon has launched a dedicated website where you can try out an Echo simulation and put Alexa’s myriad of skills to the test.

 

Echosimio-Amazon-EchoMay2016

 

 

From DSC:
The use of voice and gesture to communicate with a computing device or software program represents a growing class of Human Computer Interaction (HCI). With the growth of artificial intelligence (AI), personal assistants, and bots, we should expect to see more voice recognition services/capabilities baked into an increasing number of products and solutions in the future.

Given these trends, personnel working within K-12 and higher ed need to start building their knowledgebases now so that we can begin offering more courses in the near future to help students build their skillsets.  Current user experience designers, interface designers, programmers, graphic designers, and others will also need to augment their skillsets.

 

 

 

Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource

 

The future of interaction is multimodal.

 

Excerpts:

The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.

Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?

Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.

 

 

Forget about buttons — think actions.

 

 

Eliminate the need for a cursor as feedback, but provide an alternative.

 

 

 

 

Now you can build your own Amazon Echo at home—and Amazon couldn’t be happier — from qz.com by Michael Coren

Excerpt:

Amazon’s $180 Echo and the new Google Home (due out later this year) promise voice-activated assistants that order groceries, check calendars and perform sundry tasks of your everyday life. Now, with a little initiative and some online instructions, you can build the devices yourself for a fraction of the cost. And that’s just fine with the tech giants.

At this weekend’s Bay Area Maker Faire, Arduino, an open-source electronics manufacturer, announced new hardware “boards”—bundles of microprocessors, sensors, and ports—that will ship with voice and gesture capabilities, along with wifi and bluetooth connectivity. By plugging them into the free voice-recognition services offered by Google’s Cloud Speech API and Amazon’s Alexa Voice Service, anyone can access world-class natural language processing power, and tap into the benefits those companies are touting. Amazon has even released its own blueprint and code repository to build a $60 version of its Echo using Raspberry Pi, another piece of open-source hardware.
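For readers who want a taste of the DIY route described above without special hardware, here is a hedged, minimal sketch built on the widely used SpeechRecognition package for Python. It is a stand-in for illustration only, not Amazon's Alexa Voice Service client or Google's Cloud Speech integration, and the two canned "skills" are invented.

```python
# Minimal DIY voice-assistant sketch (illustrative only).
# Requires: pip install SpeechRecognition pyaudio
import datetime
import speech_recognition as sr

def handle(command):
    """Answer a couple of hard-coded requests; everything else gets a fallback."""
    command = command.lower()
    if "time" in command:
        return datetime.datetime.now().strftime("It is %H:%M.")
    if "hello" in command:
        return "Hello there!"
    return "Sorry, I can't help with that yet."

recognizer = sr.Recognizer()
with sr.Microphone() as source:              # needs a working microphone + PyAudio
    print("Say something...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # free web speech endpoint used by the library
    print("You said:", text)
    print(handle(text))
except sr.UnknownValueError:
    print("Sorry, I couldn't understand the audio.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```

The hobbyist Echo builds mentioned above wrap roughly this loop with wake-word detection, better audio handling, and a hand-off to Alexa Voice Service or a cloud speech API.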

 

From DSC:
Perhaps this type of endeavor could find its way into some project-based learning out there, as well as in:

  • Some Computer Science-related courses
  • Some Engineering-related courses
  • User Experience Design bootcamps
  • Makerspaces
  • Programs targeted at gifted students
  • Other…??

 

 

 
© 2025 | Daniel Christian