CNBCDisruptorsTop50-2016

CNBCDisruptorsTop50-2016-2

 

Meet the 2016 CNBC Disruptor 50 companies — from cnbc.com
CNBC reveals the 2016 Disruptor 50 list, identifying start-ups out ahead of big consumer and business shifts, and already worth billions.

Excerpt:

In the fourth annual Disruptor 50 list, CNBC features private companies in 15 industries — from aerospace to financial services to cybersecurity to retail — whose innovations are revolutionizing the business landscape. These forward-thinking start-ups have identified unexploited niches in the marketplace that have the potential to become billion-dollar businesses, and they rushed to fill them. Some have already passed the billion-dollar mark at unprecedented speed. In the process, they are creating new ecosystems for their products and services. Unseating corporate giants is no easy feat. But we ranked those venture capital–backed companies doing the best job. In aggregate, these 50 companies have raised $41 billion in venture capital at an implied Disruptor 50 list market valuation of $242 billion, according to PitchBook data. Already it’s hard to think of the world without them. Read more about the consumer and business trends that stand out in the 2016 list ranking and the methodology used to select this year’s Disruptor companies.

 

 

 

WWDC-2016-Apple

 

Keynote address:

keynoteaddress-wwdc-2016

From Apple:

 

 

These are Apple’s big announcements from WWDC 2016 — from imore.com by Joseph Keller

Excerpt:

Apple made several interesting announcements today during its WWDC 2016 keynote. Here are the major announcements from the event.

  • iOS 10 — the big focus with this release is on Siri.
  • macOS Sierra — The next version of Apple’s operating system for desktops and laptops is dropping the ‘X’, opting instead for the new ‘macOS’ branding. With macOS Sierra, Siri makes its debut on Apple’s traditional computers.
  • watchOS
  • tvOS — The next major version of the Apple TV’s software will offer single sign-on for cable logins, along with its own dark mode. There will also be a number of Siri enhancements, as well as improvements for watching live TV.

 

 

Highlights from Apple’s WWDC 2016 Keynote — from fastcompany.com
From Messages to Music, and Siri to Apple Pay on the web, here are the most important announcements from Apple’s event today.

 

 

Apple launches Swift Playgrounds for iPad to teach kids to code — from techcrunch.com by Frederic Lardinois

Excerpt:

Apple today announced Swift Playgrounds for the iPad, a new project that aims to teach kids to code in Swift.

When you first open it, Swift Playgrounds presents you with a number of basic coding lessons, as well as challenges. The interface looks somewhat akin to Codecademy, but it’s far more graphical and playful, which makes sense, given that the target audience is kids. Most of the projects seem to involve games and fun little animations to keep kids motivated.

To make coding on the iPad a bit easier, Apple is using a special keyboard with a number of shortcuts and other features for entering code.

Also see:

SwiftPlaygroundsFromApple-6-13-16

 

 

What’s new in iOS 10: Siri and Maps open to developers, machine learning and more — from arc.applause.com

Excerpts:

The biggest news for Siri from the WWDC keynote: Apple’s assistant is now open to third party developers.

Apple is now opening Siri to all of those potential interactions for developers through SiriKit. Siri will be able to access messaging, photos, search, ride booking (through Uber, Lyft, etc.), payments, health and fitness, and so forth. Siri will also be incorporated into Apple CarPlay apps to make it easier to interact with the assistant while driving.
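SiriKit itself is a Swift/Objective-C framework, but the underlying pattern is simple: a spoken request is resolved to a typed intent, which is then routed to whichever app registered a handler for that intent domain. Here is a rough language-neutral sketch of that pattern; the function names, domains, and parsing logic are invented for illustration and are not Apple’s actual API:

```python
# Conceptual sketch of intent-based dispatch, the model SiriKit uses:
# an utterance is resolved to a typed intent, then routed to the app
# that registered a handler for that intent's domain.
# (All names here are illustrative, not Apple's actual SiriKit API.)

def parse_intent(utterance):
    """Very naive intent resolution, for illustration only."""
    text = utterance.lower()
    if "ride" in text or "car" in text:
        return {"domain": "ride_booking", "destination": text.split("to ")[-1]}
    if "send" in text and "message" in text:
        return {"domain": "messaging", "body": text}
    return {"domain": "unknown"}

HANDLERS = {}

def register_handler(domain, handler):
    HANDLERS[domain] = handler

def handle(utterance):
    intent = parse_intent(utterance)
    handler = HANDLERS.get(intent["domain"])
    if handler is None:
        return "Sorry, no app can handle that."
    return handler(intent)

# A ride-booking app registers for the ride_booking domain.
register_handler("ride_booking",
                 lambda intent: f"Booking a ride to {intent['destination']}")

print(handle("Get me a ride to the airport"))
# prints: Booking a ride to the airport
```

The important design point is the registry: Siri stays in charge of speech recognition and intent resolution, while third-party apps plug in only at the handler level.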

 

 

Photos is getting a machine learning boost and automatic ‘Memories’ albums — from imore.com by Dan Thorp-Lancaster

Excerpt:

Speaking at WWDC 2016, Apple announced that it is bringing the power of machine learning to the Photos app in iOS 10. With machine learning, the Photos app will include object and scene recognition thanks to what Apple calls “Advanced Computer Vision.” For example, the app will be able to automatically pick out specific animals, features, and more. Facial recognition is also available, all done locally on the iPhone, with automatic people albums.
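The grouping side of this feature can be sketched in a few lines: run each photo through a classifier, then collect photos that share a label into an automatic album. The classifier below is a stub, since Apple’s actual on-device “Advanced Computer Vision” models are not public, and all photo data is invented:

```python
# Illustrative sketch: grouping scene labels from an (assumed)
# on-device classifier into automatic albums, in the spirit of the
# iOS 10 Photos feature. The classifier is stubbed out.
from collections import defaultdict

def classify(photo):
    # Stand-in for an on-device vision model returning scene labels.
    return photo["labels"]

def build_albums(photos):
    albums = defaultdict(list)
    for photo in photos:
        for label in classify(photo):
            albums[label].append(photo["name"])
    return dict(albums)

library = [
    {"name": "IMG_001.jpg", "labels": ["beach", "sunset"]},
    {"name": "IMG_002.jpg", "labels": ["dog"]},
    {"name": "IMG_003.jpg", "labels": ["beach"]},
]

print(build_albums(library)["beach"])  # ['IMG_001.jpg', 'IMG_003.jpg']
```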

 

 

Home is a new way to control all of your HomeKit-enabled accessories — from imore.com by Jared DiPane

Excerpt:

Apple has announced its newest and easiest way to control any and all HomeKit accessories that you may have in your house: Home. With Home, you’ll be able to control all of your accessories, including air conditioners, cameras, door locks and other new categories. 3D Touch will give you deeper controls at just a press, and notifications from these accessories will support 3D Touch as well.
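The core idea, one controller addressing many heterogeneous accessory types through a common interface, can be sketched as follows. This is not Apple’s HomeKit API; the class and method names are invented to illustrate the pattern:

```python
# Conceptual sketch of the idea behind the Home app: a single
# controller that drives many accessory types through one interface.
# (Not Apple's HomeKit API; names are illustrative.)

class Accessory:
    def __init__(self, name):
        self.name = name
        self.state = "off"

    def set_state(self, state):
        self.state = state
        return f"{self.name} is now {state}"

class Home:
    def __init__(self):
        self.accessories = {}

    def add(self, accessory):
        self.accessories[accessory.name] = accessory

    def control(self, name, state):
        return self.accessories[name].set_state(state)

    def scene(self, state):
        # Apply one state to every accessory, like a HomeKit "scene".
        return [a.set_state(state) for a in self.accessories.values()]

home = Home()
home.add(Accessory("door lock"))
home.add(Accessory("camera"))
print(home.control("door lock", "locked"))  # door lock is now locked
```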

 

 

Here’s what Apple is bringing to the Apple TV — from fastcompany.com
tvOS is taking a step forward with updates to Siri, and new features such as single sign-on, dark mode, and more.

 

NumbertvOS-Apps-6000

Excerpt:

Less than nine months after the first version of tvOS, there are now over 6,000 native apps for the Apple TV. Of those apps, 1,300 are for streaming video. Popular over-the-top Internet television service Sling TV arrives on the Apple TV today. Live Fox Sports Go streaming will come this summer. Speaking of apps: Apple is introducing a new Apple TV Remote app for iOS that allows people to navigate tvOS using Siri from their iPhone and iPad.

Download apps on iPhone and get them on Apple TV
Now when you download an app on your iPad or iPhone, if there is an Apple TV version of the app, it will download to your Apple TV automatically.

 

 

tvOS 10 FAQ: Everything you need to know! — from imore.com by Lory Gil

 

 

Apple iOS 10 “Memories” turns old photos into editable mini-movies — from techcrunch.com by Josh Constine
Using local, on-device facial recognition and AI detection of what’s in your images, it can combine photos and videos into themed mini-movies complete with transitions and a soundtrack.

 

 

Apple announces iOS 10 — from techcrunch.com

 

 

Apple launches iMessage Apps so third-party devs can join your convos — from techcrunch.com by Jordan Crook

 

 

Don’t brick your iPhone, iPad, Mac, or Apple Watch by installing developer betas — from imore.com by Serenity Caldwell

Excerpt:

As a reminder: You shouldn’t install developer betas on your primary devices if you want them to work.

This is our yearly reminder, folks: Unless you’re a developer with a secondary iPhone or Mac, we strongly, strongly urge you to consider not installing developer betas on your devices.

It’s not because we don’t want you to have fun: iOS 10, watchOS, tvOS, and macOS have some phenomenal features coming this Fall. But they’re beta seeds for a reason: These features are not fully baked, may crash at will, and probably will slow down or crash your third-party applications.

 

 

You already have the ultimate Apple TV remote, and it’s in your pocket — from techradar.com by Jon Porter

 

 

Apple quietly outs ‘next-generation’ file system destined for full product lineup — from imore.com by Dan Thorp-Lancaster

 

 

watchOS 3 FAQ: Everything you need to know — from imore.com by Mikah Sargent

 

 


 

Addendum on 6/15/16:

 

Bringing Outlook Mail and Calendar to Microsoft HoloLens — from blogs.office.com
and
Microsoft Outlook makes its Augmented-Reality (AR) debut on HoloLens — from eweek.com by Pedro Hernandez

MailCalendarToHoloLens-June2016

 

 

 

Introducing the New Blippar App: The power of visual discovery — from blippar.com

Excerpt:

Blipparsphere is our new proprietary knowledge graph baked right into the Blippar app as of today. It builds on our existing computer vision and machine learning technology to capture deeper and richer information about the world around you.

 

 

 

This augmented reality app will now have an added feature — by Sneha Banerjee
Blipparsphere is live now in the Blippar app for iOS and Android.

Excerpt:

This technology startup, which specializes in augmented reality, artificial intelligence and computer vision, launched Blipparsphere. This is a new proprietary knowledge graph technology, which is live now on the Blippar app. Blipparsphere builds on the company’s existing machine learning and computer vision capabilities to deepen and personalize information about a user’s physical surroundings, providing a true visual discovery browser through the app.
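The “visual discovery” pipeline both articles describe has two stages: a vision model names what the camera sees, and a knowledge graph supplies related information. A toy sketch of that flow, with the recognizer stubbed out and the graph data entirely invented (Blippar’s actual Blipparsphere graph is proprietary):

```python
# Sketch of visual discovery: recognize an object, then look it up in
# a small knowledge graph for related facts. All data is illustrative.

KNOWLEDGE_GRAPH = {
    "apple": {"is_a": "fruit", "related": ["orchard", "cider"]},
    "eiffel tower": {"is_a": "landmark", "related": ["paris", "france"]},
}

def recognize(image_bytes):
    # Stand-in for a computer-vision model.
    return "apple" if b"apple" in image_bytes else "unknown"

def discover(image_bytes):
    entity = recognize(image_bytes)
    node = KNOWLEDGE_GRAPH.get(entity)
    if node is None:
        return f"No knowledge found for '{entity}'"
    return f"{entity}: a {node['is_a']}; see also {', '.join(node['related'])}"

print(discover(b"...apple..."))
# prints: apple: a fruit; see also orchard, cider
```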

 

 

 

5 top virtual reality & augmented reality technology trends for 2016 — from marxentlabs.com by Joe Bardi

Excerpt:

What are the top Virtual Reality and Augmented Reality technology trends for 2016?
2015 was a year of tantalizing promise for Virtual Reality and Augmented Reality technology, with plenty of new hardware announced and initial content forays hitting the mainstream. 2016 is shaping up as the year that promise is fulfilled, with previously drooled-over hardware finally making its way into the hands of consumers, and exciting new content providing unique experiences to a public hungry to experience this new technology. So where are Virtual Reality and Augmented Reality headed in 2016? Here’s our top 5 emerging trends…

 

 

 

CASE STUDY: How Lowe’s used the PIONEERS framework to lead a successful augmented reality and virtual reality project — from marxentlabs.com by Beck Besecker

Excerpt:

PIONEERS Case Study: Lowe’s Innovation Labs
By following the PIONEERS framework, Lowe’s Innovation Labs was able to create a brand new buying experience that is wowing customers. A project of Lowe’s Innovation Labs, the Lowe’s Holoroom is powered by Marxent’s VisualCommerce™, which enables Lowe’s to fill the Holoroom’s 3D space with virtual renderings of real products stocked by the home improvement retailer. Shoppers can design their perfect kitchen and then literally walk into it, share it via YouTube 360, and then buy the products that they’ve selected and turn their virtual design into reality.

 

 

 

College students experiment with virtual reality — from edtechmagazine.com by Eli Zimmerman
Innovative course curricula at three higher ed institutions give students hands-on practice with virtual reality.

 

College-VR-Experiments-May2016

 

 

 

HoloAnatomy app previews use of augmented reality in medical schools — from medgadget.com

Excerpt:

The Cleveland Clinic has partnered with Case Western Reserve University to release a Microsoft HoloLens app that allows users to explore the human body using augmented reality technology. The HoloLens is a headset that superimposes computer generated 3D graphics onto a person’s field of view, essentially blending reality with virtual reality.

The HoloAnatomy app lets people explore a virtual human, to walk around it looking at details of different systems of the body, and to select which are showing. Even some sounds are replicated, such as that of the beating heart.

 

 

 
Will “class be in session” soon on tools like Prysm & Bluescape? If so, there will be some serious global interaction, collaboration, & participation here! [Christian]

From DSC:
Below are some questions and thoughts that are going through my mind:

  • Will “class be in session” soon on tools like Prysm & Bluescape?
  • Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
  • Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
  • Prysm is already available on mobile devices, and what we consider a television continues to morph.
  • Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
  • Will colleges and universities innovate into such setups?  Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
  • How will training, learning and development groups leverage these tools/technologies?
  • Are there some opportunities for homeschoolers here?

Along these lines, here are some videos/images/links for you:

 

 

PrysmVisualWorkspace-June2016

 

PrysmVisualWorkspace2-June2016

 

BlueScape-2016

 

BlueScape-2015

 

 



 

 

DSC-LyndaDotComOnAppleTV-June2016

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 



 

Also see:

kitchenstories-AppleTV-May2016

 

 

 

 


 

Also see:

 


Prysm Adds Enterprise-Wide Collaboration with Microsoft Applications — from ravepubs.com by Gary Kayye

Excerpt:

To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
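One small piece of an integration like this is the sync decision itself: given a remote file listing and a local cache, which files are new or newer and need to be pulled down? A minimal sketch of that logic follows; the function name, data shapes, and timestamps are hypothetical, and a real integration would of course call Microsoft’s OneDrive for Business API rather than compare in-memory dictionaries:

```python
# Illustrative sync-decision logic: which remote files need to be
# (re)fetched into a workspace, judged by modification timestamps.
# (Hypothetical names/shapes; a real build would use Microsoft's API.)

def files_to_sync(remote_listing, local_cache):
    """Return names of remote files that are new or newer than our copy."""
    stale = []
    for name, remote_mtime in remote_listing.items():
        local_mtime = local_cache.get(name)
        if local_mtime is None or remote_mtime > local_mtime:
            stale.append(name)
    return sorted(stale)

remote = {"deck.pptx": 200, "notes.docx": 150, "budget.xlsx": 90}
local = {"deck.pptx": 180, "budget.xlsx": 90}

print(files_to_sync(remote, local))  # ['deck.pptx', 'notes.docx']
```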

 


 

 

 

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?



WhatIsEchoDot-June2016

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.

 

 

Or how might students learn about the myriad of technologies involved with IBM’s Watson?  What courses are out there today that address this type of thing?  Are more courses in the works that will address this type of thing? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

 

WhatIsIBMWatson-June2016

 

 

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regards to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the state of their fellow mankind in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow mankind right out the window…with massive repercussions to follow.  After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits.  Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of
Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, they gained confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement it or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are currently many industries that will likely be disrupted by robots in the coming years, and with retail being one of the biggest industries across the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores, as well as direct customers to the items they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and it will drive to the exact location. It will also conduct inventory using RFID scanners; overall, it will make the retail process much more efficient. Check out the video below from the German robotics company MetraLabs, which is behind the retail robot.
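The robot’s two jobs, pointing a customer to a product’s location and comparing RFID scan counts against expected stock, are both lookup problems at heart. A toy sketch, with all store data and function names invented for illustration:

```python
# Sketch of the two retail-robot tasks described above: locating a
# product for a customer, and flagging inventory discrepancies from
# (simulated) RFID scan counts. All data here is made up.

STORE_MAP = {
    "toothpaste": "aisle 4, shelf B",
    "batteries": "aisle 9, shelf A",
}

EXPECTED_STOCK = {"toothpaste": 40, "batteries": 25}

def locate(product):
    location = STORE_MAP.get(product.lower())
    return location if location else "product not found"

def inventory_report(scanned_counts):
    """Compare RFID scan counts against expected stock levels.
    Negative numbers mean missing stock."""
    return {p: scanned_counts.get(p, 0) - expected
            for p, expected in EXPECTED_STOCK.items()}

print(locate("Toothpaste"))                      # aisle 4, shelf B
print(inventory_report({"toothpaste": 35, "batteries": 25}))
# {'toothpaste': -5, 'batteries': 0}
```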

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots.  I can understand about using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 


Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks are only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess and Go and various other games. Indeed, its AI algorithms were developed for something far greater: to “solve intelligence” by creating general purpose AI that can be used for a host of applications and, in essence, learn on its own. This, of course, raises some concerns. Namely, what do we do if the AI breaks… if it gets a virus… if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.
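The mechanical half of the “big red button” idea from the DeepMind/Oxford paper mentioned above, a human override that always wins, can be sketched as an agent loop that checks an interrupt flag before every step. (The actual paper is about the harder problem of agents that must not *learn* to resist interruption; this toy shows only the control flow, with all names invented.)

```python
# Toy sketch of an interruptible agent loop: a human "big red button"
# stops all further action. Illustration only; the research problem is
# making sure a learning agent doesn't develop incentives to resist.

class InterruptibleAgent:
    def __init__(self):
        self.interrupted = False
        self.steps_taken = 0

    def press_big_red_button(self):
        self.interrupted = True

    def run(self, max_steps):
        for _ in range(max_steps):
            if self.interrupted:
                break  # human override always wins
            self.steps_taken += 1
        return self.steps_taken

agent = InterruptibleAgent()
agent.run(3)
agent.press_big_red_button()
agent.run(5)                 # no further steps are taken
print(agent.steps_taken)     # prints: 3
```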

 

Top 10 future technology jobs: VR developer, IoT specialist and AI expert — from v3.co.uk; with thanks to Norma Owen for this resource
V3 considers some of the emerging technology jobs that could soon enter your business

Top 10 jobs:

10. VR developer
9.   Blockchain engineer/developer
8.   Security engineer
7.   Internet of Things architect
6.   UX designer
5.   Data protection officer
4.   Chief digital officer
3.   AI developer
2.   DevOps engineer
1.   Data scientist

 

Amazon now lets you test drive Echo’s Alexa in your browser — by Dan Thorp-Lancaster

Excerpt:

If you’ve ever wanted to try out the Amazon Echo before shelling out for one, you can now do just that right from your browser. Amazon has launched a dedicated website where you can try out an Echo simulation and put Alexa’s myriad of skills to the test.

 

Echosimio-Amazon-EchoMay2016

 

 

From DSC:
The use of voice and gesture to communicate with computing devices and software programs represents a growing form of Human Computer Interaction (HCI).  With the growth of artificial intelligence (AI), personal assistants, and bots, we should expect to see more voice recognition services/capabilities baked into an increasing number of products and solutions in the future.

Given these trends, personnel working within K-12 and higher education need to start building their knowledge bases now, so that more courses can be offered in the near future to help students build their skill sets.  Current user experience designers, interface designers, programmers, graphic designers, and others will also need to augment their skill sets.

 

 

 

Beyond touch: designing effective gestural interactions — from blog.invisionapp.com by Yanna Vogiazou; with thanks to Mark Pomeroy for the resource

 

The future of interaction is multimodal.

 

Excerpts:

The future of interaction is multimodal. But combining touch with air gestures (and potentially voice input) isn’t a typical UI design task.

Gestures are often perceived as a natural way of interacting with screens and objects, whether we’re talking about pinching a mobile screen to zoom in on a map, or waving your hand in front of your TV to switch to the next movie. But how natural are those gestures, really?

Try not to translate touch gestures directly to air gestures even though they might feel familiar and easy. Gestural interaction requires a fresh approach—one that might start as unfamiliar, but in the long run will enable users to feel more in control and will take UX design further.

 

 

Forget about buttons — think actions.

 

 

Eliminate the need for a cursor as feedback, but provide an alternative.

 

 

 

 

Now you can build your own Amazon Echo at home—and Amazon couldn’t be happier — from qz.com by Michael Coren

Excerpt:

Amazon’s $180 Echo and the new Google Home (due out later this year) promise voice-activated assistants that order groceries, check calendars and perform sundry tasks of your everyday life. Now, with a little initiative and some online instructions, you can build the devices yourself for a fraction of the cost. And that’s just fine with the tech giants.

At this weekend’s Bay Area Maker Faire, Arduino, an open-source electronics manufacturer, announced new hardware “boards”—bundles of microprocessors, sensors, and ports—that will ship with voice and gesture capabilities, along with Wi-Fi and Bluetooth connectivity. By plugging them into the free voice-recognition services offered by Google’s Cloud Speech API and Amazon’s Alexa Voice Service, anyone can access world-class natural language processing power, and tap into the benefits those companies are touting. Amazon has even released its own blueprint and code repository to build a $60 version of its Echo using Raspberry Pi, another piece of open-source hardware.
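For those inclined to tinker: once a microphone and a cloud speech service (such as the Alexa Voice Service mentioned above) handle the speech-to-text, what remains at the heart of a DIY Echo-style assistant is a small wake-word routine. A minimal sketch follows — the transcripts here are assumed to arrive from such a service; no real API is called:

```python
# Wake-word handling for a hypothetical DIY voice assistant.
# Transcripts are assumed to come from a speech-to-text service
# (e.g., a cloud speech API); that plumbing is not implemented here.
WAKE_WORD = "alexa"

def contains_wake_word(transcript):
    """True if the transcript begins with the wake word."""
    words = transcript.lower().split()
    return bool(words) and words[0] == WAKE_WORD

def extract_command(transcript):
    """Drop the wake word and return the command to act on,
    or None if the assistant wasn't addressed."""
    if not contains_wake_word(transcript):
        return None
    return " ".join(transcript.lower().split()[1:])
```

So `extract_command("Alexa play some jazz")` yields `"play some jazz"`, while speech that never addresses the assistant is ignored — the same gatekeeping an Echo performs before streaming audio to the cloud.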

 

From DSC:
Perhaps this type of endeavor could find its way into some project-based learning out there, as well as in:

  • Some Computer Science-related courses
  • Some Engineering-related courses
  • User Experience Design bootcamps
  • Makerspaces
  • Programs targeted at gifted students
  • Other…??

 

 

 

Google-io-2016

 

9 most important things from the Google I/O keynote — from androidcentral.com by Jen Karner

Excerpt:
Here’s a breakdown of the nine big things Google brought to I/O 2016.

  1. Now on Steroids — Google Assistant
  2. Google Home — Amazon Who?
  3. Allo — A smarter messenger
  4. Duo — Standalone video chat
  5. Everything Android N
  6. Android Wear 2.0
  7. The future — Android Instant Apps
  8. New Android Studio
  9. New Firebase tools

 

CEO Sundar Pichai comes in at the 14:40 mark:

 

 

I/O: Building the next evolution of Google — from googleblog.blogspot.com

Excerpts:

Which is why we’re pleased to introduce…the Google assistant. The assistant is conversational—an ongoing two-way dialogue between you and Google that understands your world and helps you get things done. It makes it easy to buy movie tickets while on the go, to find that perfect restaurant for your family to grab a quick bite before the movie starts, and then help you navigate to the theater. It’s a Google for you, by you.

Google Home is a voice-activated product that brings the Google assistant to any room in your house. It lets you enjoy entertainment, manage everyday tasks, and get answers from Google—all using conversational speech. With a simple voice command, you can ask Google Home to play a song, set a timer for the oven, check your flight, or turn on your lights. It’s designed to fit your home with customizable bases in different colors and materials. Google Home will be released later this year.

 

 

 

Google takes a new approach to native apps with Instant Apps for Android — from techcrunch.com by Frederic Lardinois, Sarah Perez

Excerpt:

Mobile apps often provide a better user experience than browser-based web apps, but you first have to find them, download them, and then try not to forget you installed them. Now, Google wants us to rethink what mobile apps are and how we interact with them.

Instant Apps, a new Android feature Google announced at its I/O developer conference today but plans to roll out very slowly, wants to bridge this gap between mobile apps and web apps by allowing you to use native apps almost instantly — even when you haven’t previously installed them — simply by tapping on a URL.

 

 

Google isn’t launching a standalone VR headset…yet — from uploadvr.com

Excerpt:

To the disappointment of many, Google Vice President of Virtual Reality Clay Bavor did not announce the much-rumoured (and now discredited) standalone VR HMD at today’s Google I/O keynote.

Instead, the company announced Daydream, a new VR platform that will live on the upcoming Android N. Much like Google’s pre-existing philosophy of creating specs and then pushing the job of building hardware to other manufacturers, the group is providing the boundaries for the initial public push of VR on Android, and letting third parties build the phones for it.


 

 

Google’s Android VR Platform is Called ‘Daydream’ and Comes with a Controller — from vrguru.com by Constantin Sumanariu

Excerpt:

Speaking at the opening keynote for this week’s Google I/O developer conference, the company’s head of VR Clay Bavor announced that the latest version of Android, the unnamed Android N, would be getting a VR mode. Google calls the initiative to get the Android ecosystem ready for VR ‘Daydream’, and it sounds like a massive extension of the groundwork laid by Google Cardboard.

 

 

Conversational AI device: Google Home — from postscapes.com

Excerpt:

Google finally has its answer to Amazon’s voice-activated personal assistant device, Echo. It’s called Google Home, and it was announced today at the I/O developer conference.

 

 

Movies, TV Shows and More Comes to Daydream VR Platform — from vrguru.com by Constantin Sumanariu

 

 

 

 

Allo is Google’s new, insanely smart messaging app that learns over time — from androidcentral.com by Jared DiPane

Excerpt:

Google has announced a new smart messaging app, Allo. The app is based on your phone number, and it will continue to learn from you over time, making it smarter each day. In addition to this, you can add more emotion to your messages, in ways that you couldn’t before. You will be able to “whisper” or “shout” your message, and the font size will change depending on which you select. This is accomplished by pressing the send button and dragging up or down to change the level of emotion.

 

 

 

Google follows Facebook into chatbots — from marketwatch.com by Jennifer Booton
Google’s new home assistant and messenger service will be powered by AI

Excerpt:

Like Facebook’s bots, the Google assistant is designed to be conversational. It will play on the company’s investment in natural language processing, talking to users in a dialogue format that feels like normal conversation, and helping users buy movie tickets, make dinner reservations and get directions. The announcement comes one month after Facebook CEO Mark Zuckerberg introduced Messenger with chatbots, which serves basically the same function.

 

 

Also see:

 

 

Creators of Siri reveal first public demo of AI assistant “Viv” — from seriouswonder.com by B.J. Murphy

Excerpts:

When it comes to AI assistants, a battle has been waged between different companies, with assistants like Siri, Cortana, and Alexa at the forefront of the battle. And now a new potential competitor enters the arena.

During a 20-minute onstage demo at Disrupt NYC, Siri creators Dag Kittlaus and Adam Cheyer revealed Viv – a new AI assistant that makes Siri look like a children’s toy.

 

“Viv is an artificial intelligence platform that enables developers to distribute their products through an intelligent, conversational interface. It’s the simplest way for the world to interact with devices, services and things everywhere. Viv is taught by the world, knows more than it is taught, and learns every day.”

 

VIV-2-May2016

 

 

From DSC:
I saw a posting at TechCrunch.com the other day — The Information Age is over; welcome to the Experience Age.  I’m mentioning that article here not so much for its content as for its title.  An interesting concept…and probably spot on, with ramifications for numerous types of positions, skill sets, and industries all over the globe.

Also see:

 

VIV-May2016

 

 

Addendum on 5/12/16:

  • New Siri sibling Viv may be next step in A.I. evolution — from computerworld.com by Sharon Gaudin
    Excerpt:
    With the creators of Siri offering up a new personal assistant that won’t just tell you what pizza is but can order one for you, artificial intelligence is showing a huge leap forward. Viv is an artificial intelligence (AI) platform built by Dag Kittlaus and Adam Cheyer, the creators of the AI behind Apple’s Siri, the most well-known digital assistant in the world. Siri is known for answering questions, like how old Harrison Ford is, and reminding you to buy milk on the way home. Viv, though, promises to go well past that.

 

 

Per Jack Du Mez at Calvin College, use this app to randomly call on your students — while instilling a game-like environment into your active learning classroom (ALC)!

 

Randomly-App-May2016

Description:
Randomly is an app made specifically for teachers and professors. It allows educators to enter their students into individual classes. They can then use the Random Name Selector feature to randomly call on a student to answer a question in one of two ways: truly random, where repeated names are allowed, or one-pass, where all students are called once before any are called again. The device you’re using will even call out (vocally) the student’s name for you!

This app can also be used to randomly generate groups for you. You can split your class into groups by number of groups or by number of students per group. It intelligently knows what to do with any remaining students too!

This app supports Apple Watch, so you can call on your students with the use of your Apple Watch!
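For the curious, both selection modes and the group splitting described above reduce to a few lines of standard-library Python. This is a hypothetical re-implementation of the idea, not the app’s actual code:

```python
import random

def one_pass_selector(students):
    """Yield students in random order; every student is called once
    before anyone repeats (the 'one pass' mode)."""
    pool = []
    while True:
        if not pool:                 # pass finished -> reshuffle
            pool = students[:]
            random.shuffle(pool)
        yield pool.pop()

def truly_random(students):
    """Pick with replacement -- repeats are allowed."""
    return random.choice(students)

def split_into_groups(students, n_groups):
    """Shuffle, then deal students round-robin into n_groups;
    any remainder spreads evenly across the first groups."""
    shuffled = students[:]
    random.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]
```

The round-robin slice in `split_into_groups` is what “intelligently knows what to do with any remaining students” amounts to: group sizes never differ by more than one.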

 

From DSC:
In the future, given facial and voice recognition software, I could see an Augmented Reality (AR)-based application whereby a faculty member or teacher sees icons hovering over the students — indicating who has been called upon recently and who hasn’t.  Settings could adjust the length of time for this type of tracking (i.e., this student has been called upon in this class session, or in the last week, or in the last month, etc.).
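The tracking layer underneath such an AR overlay is straightforward to sketch. Everything below is hypothetical — the class name, the window setting, and the roster are illustrative, not part of any real app:

```python
from datetime import datetime, timedelta

class CallTracker:
    """Records when each student was last called on and flags anyone
    outside a configurable window -- the students an AR overlay
    would highlight for the instructor."""

    def __init__(self, window_days=7):
        self.window = timedelta(days=window_days)
        self.last_called = {}   # student name -> datetime of last call

    def record_call(self, student, when=None):
        self.last_called[student] = when or datetime.now()

    def needs_attention(self, roster, now=None):
        """Students never called on, or not called within the window."""
        now = now or datetime.now()
        return [s for s in roster
                if s not in self.last_called
                or now - self.last_called[s] > self.window]
```

Changing `window_days` gives the “this session / last week / last month” granularity described above; the recognition software’s only job would be mapping each detected face to a roster name.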

 

AR-based-call-on-me-DanielChristian-5-10-16

 

 

 

 

Thinking about the future of work to make better decisions about learning today — from er.educause.edu by Marina Gorbis
By looking at historical patterns and identifying signals of change around us today, we can better prepare for the transformations occurring in both work and learning.

Excerpt:

Instead of debating whether learning is for learning’s sake or as a means for earning a living, we need to think about the forces and signals of transformation and what they mean for higher education today and tomorrow.

So let’s explore these deeper transformations.1 From our experience of doing forecasting work for nearly fifty years, we at the IFTF believe that it is usually not one technology or one trend that drives transformative shifts. Rather, a cluster of interrelated technologies, often acting in concert with demographic and cultural changes, is responsible for dramatic changes and disruptions. Technologies coevolve with society and cultural norms—or as Marshall McLuhan is often quoted as having said: “We shape our tools and afterwards our tools shape us.” Nowhere does this apply more critically today than in the world of work and labor. Here, I focus on four clusters of technologies that are particularly important in shaping the changes in the world of work and learning: smart machines; coordination economies; immersive collaboration; and the maker mindset.

 

From DSC:
I appreciate this article — thanks Marina.

Marina’s article — and the work of The Institute for the Future (IFTF) — illustrates how important it is to examine the current and developing future landscapes, trying to ascertain the trends and potential transformations underway.  Such a practice is becoming increasingly relevant and important.

Why?

Because we’re now traveling at exponential rates, not linear rates.
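To make the linear-versus-exponential contrast concrete, compare 30 steps of adding one with 30 steps of doubling:

```python
# 30 linear steps vs. 30 doublings -- the gap that makes
# exponential change so easy to underestimate.
steps = range(1, 31)
linear = [n for n in steps]            # 1, 2, 3, ..., 30
exponential = [2 ** n for n in steps]  # 2, 4, 8, ..., over a billion

print(linear[-1])       # 30
print(exponential[-1])  # 1073741824
```

Thirty linear steps gets you to 30; thirty doublings gets you past a billion — which is why trends that look flat today can overwhelm institutions that only plan linearly.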

 

SparksAndHoney-ExpVsLinear2013

 

We’re zooming down the highway at 180mph — so our gaze needs to be on the horizons — not on the hoods of our cars.

 

The pace has changed significantly and quickly

 

Institutions of higher education, boot camps, badging organizations, etc. need to start offering more courses and streams of content regarding futurism — and teaching people how to look up.

Not only is this type of perspective/practice helpful for organizations, but it’s becoming increasingly key for us as individuals.

You don’t want to be the person who gets tapped on the shoulder and is told, “I’m sorry…but your services won’t be necessary here anymore. Please join me in the conference room down the hall.”  You walk down the hall, and as you approach the conference room, you notice that newly placed cardboard is covering the glass — no one can see into the conference room anymore. You walk in, they shut the door, and they hand you your last paycheck and your “pink slip” (so to speak).  Then they give you 5 minutes to gather your belongings.  A security escort walks you to the front door.

Game over.

Pulse checking a variety of landscapes can contribute
towards keeping your bread and butter on the table.

 

 

Also see:

  • Credentials reform: How technology and the changing needs of the workforce will create the higher education system of the future — from er.educause.edu by Jamie Merisotis
    The shift in postsecondary credentialing and the needs of the 21st-century workforce will revolutionize higher education. Colleges and universities have vast potential to be positive agents of this change.
  • New workers, new skills — from er.educause.edu by Marina Gorbis
    What are the most important skills—the work skills and the life skills—that students should acquire from their educational experience, and what is the best way to teach those skills?
    Excerpt:
    We found that the following short list of skills not only continues to be relevant but also is even more important as meta-skills in the changing worlds of work:
  • Sense-making: the ability to determine the deeper meaning or significance of what is being expressed
  • Social intelligence: the ability to connect to others in a deep and direct way and to sense and stimulate reactions and desired interactions
  • Novel and adaptive thinking: a proficiency in coming up with solutions and responses beyond those that are rote or rule-based
  • Cross-cultural competency: the ability to operate in different cultural settings, not just geographical but also those that require an adaptability to changing circumstances and an ability to sense and respond to new contexts
  • Computational thinking: the ability to translate vast amounts of data into abstract concepts and to understand data-based reasoning
  • Media literacy: the ability to critically assess and develop content that uses new media forms and to leverage these media forms for persuasive communication
  • Transdisciplinarity: a literacy in, and the ability to understand, concepts across multiple disciplines
  • Design mindset: the ability to represent and develop tasks and work processes for desired outcomes
  • Cognitive load management: the ability to discern and filter data for importance and to understand how to maximize cognitive functioning using a variety of tools and techniques
  • Virtual collaboration: the ability to work productively, drive engagement, and demonstrate presence as a member of a virtual team

While we believe that these ten skills continue to be important, two additional skills have emerged from our ethnographic interviews for these new worker categories: networking IQ and hustle.

 

Thinking about the future is like taking a jog: we can always find something to do instead, but we will be better off later if we take time to do it.

 

 
© 2025 | Daniel Christian