From DSC: Recently, my neighbor graciously gave us his old Honda snowblower, as he was getting a new one. He wondered if we had a use for it. As I’m definitely not getting any younger and I’m not Howard Hughes, I said, “Sure thing! That would be great — it would save my back big time! Thank you!” (Though the image below is not mine, it might as well be…as both are quite old now.)
Anyway…when I recently ran out of gas, I would have loved to be able to take out my iPhone, hold it up to the snowblower, and ask an app whether this particular Honda model takes a mixture of gas and oil or has a separate container for the oil. (It wasn’t immediately clear where to put the oil in, so I’m figuring it’s a mix.)
But what I would have liked to have happen was:
I would have launched an app on my iPhone that featured machine learning-based capabilities
The app would have scanned the snowblower, identified its make and model, and told me whether it needed a gas/oil mix (or not)
If there was a separate place to pour in the oil, the app would have asked me if I wanted to learn how to put oil in the snowblower. If I said yes, it would then have displayed an augmented reality-based training video — showing me where the oil goes and what type of oil to use (links to local providers would also come in handy…offering nice revenue streams for advertisers and suppliers alike).
So several technologies would have to be involved here…but those techs are already here. We just need to pull them together in order to provide this type of useful functionality!
From DSC: When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?
What does it mean for:
Students / learners
Faculty members
Teachers
Trainers
Instructional Designers
Interaction Designers
User Experience Designers
Curriculum Developers
…and others?
Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TVs” of the future? Hmm….
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.
At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
Excerpt:
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
From DSC: How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?
That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the module(s) or course(s) we take being sent to:
A credentials database on LinkedIn (via blockchain) and/or
A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
and/or
To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?
Will solid ROIs from having thousands of participants paying a smaller amount (to take your course virtually) enable higher production values?
Will bots and/or human tutors be instantly accessible from our couches?
Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.
In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.
Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.
…
But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.
Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”
Graphically speaking:
“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”
One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
Prediction 1: Mixed reality rooms will begin to replace home theater. As Eric Johnson summed up last year in Recode, “to borrow an example from Microsoft’s presentation at the gaming trade show E3, you might be looking at an ordinary table, but see an interactive virtual world from the video game Minecraft sitting on top of it. As you walk around, the virtual landscape holds its position, and when you lean in close, it gets closer in the way a real object would.” Can a “holographic” cinema experience really be that far off? With 3D sound? And Smell-O-Vision?
Prediction 13: Full-wall video with multiscreens will appear in the home. Here’s something interesting: The first three predictions in this set of 10 all have an origin in commercial applications. This one — think of it more as digital signage than sports bar — will allow the user to have access to a wall that includes a weather app, a Twitter feed, a Facebook page, the latest episode of Chopped, a Cubs game, and literally anything else a member — or members — of the family are interested in. The unintended consequences: some 13-year-old will one day actually utter the phrase, “MOM! Can you minimize your Snapchat already!?!”
Prediction 22: Intelligent glass will be used as a control interface, entertainment platform, comfort control, and communication screen. Gordon van Zuiden says, “We live in a world of touch, glass-based icons. Obviously the phone is the preeminent example — what if all the glass that’s around you in the house could have some level of projection so that shower doors, windows, and mirrors could be practical interfaces?” Extend that smart concept to surfaces that don’t just respond to touch, but to gesture and voice — and now extend that to surfaces outside the home.
Prediction 28: User-programmable platforms based on interoperable systems will be the new control and integration paradigm. YOU: “Alexa, please find Casablanca on Apple TV and send it to my Android phone. And order up a pizza.”
Prediction 34: Consumer sensors will increase in sensitivity and function. The Internet of Things will become a lot like Santa: “IoT sees you when you’re sleeping/IoT knows when you’re awake/IoT knows if you’ve been bad or good…”
Prediction 47: Policy and technology will drive the security concerns over internet and voice connected devices. “When you add the complexity of ‘always on, always listening’ connected devices … keeping the consumer’s best interests in mind might not always be top of mind for corporations [producing these devices],” notes Maniscalco. “[A corporation’s] interest is usually in profits.” Maniscalco believes that a consumer push for legislation on the dissemination of the information a company can collect will be the “spark that ignites true security and privacy for the consumer.”
Prediction 53: The flexible use of the light socket: Lighting becomes more than lighting. Think about the amount of coverage — powered coverage — that the footprint of a home’s network of light sockets provides. Mike Maniscalco of Ihiji has: “You can use that coverage and power to do really interesting things, like integrate sensors into the lighting. Track humidity, people’s movements, change patterns based on what’s happening in that room.”
Prediction 64: Voice and face recognition and authentication services become more ubiquitous. Yes, your front door will recognize your face — other people’s, too. “Joe Smith comes to your door, you get a text message without having to capture video, so that’s a convenience,” notes Jacobson.
From DSC: The pace of technological development is moving extremely fast; the ethical, legal, and moral questions are trailing behind it (as is normally the case). But this exponential pace continues to bring some questions, concerns, and thoughts to my mind. For example:
What kind of future do we want?
Just because we can, should we?
Who is going to be able to weigh in on the future direction of some of these developments?
If we follow the trajectories of some of these pathways, where will these trajectories take us? For example, if many people are out of work, how are they going to purchase the products and services that the robots are building?
These and other questions arise when you look at the articles below.
This is the 8th part of a series of postings regarding this matter.
The other postings are in the Ethics section.
What would your ideal robot be like? One that can change nappies and tell bedtime stories to your child? Perhaps you’d prefer a butler that can polish silver and mix the perfect cocktail? Or maybe you’d prefer a companion that just happened to be a robot? Certainly, some see robots as a hypothetical future replacement for human carers. But a question roboticists are asking is: how human should these future robot companions be?
A companion robot is one that is capable of providing useful assistance in a socially acceptable manner. This means that a robot companion’s first goal is to assist humans. Robot companions are mainly developed to help people with special needs such as older people, autistic children or the disabled. They usually aim to help in a specific environment: a house, a care home or a hospital.
The next president will have a range of issues on their plate, from how to deal with growing tensions with China and Russia, to an ongoing war against ISIS. But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka “killer robots.” The new president will literally have no choice. It is not just that the technology is rapidly advancing, but because of a ticking time bomb buried in US policy on the issue.
It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.
Algorithms for task management
Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of “temporal Tetris” with the parcels and packs them to optimize for space and for the planned delivery route: packages that are delivered first are towards the front, packages for the end of the route are placed at the back.
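The route-ordered loading described above can be sketched in a few lines. This is purely illustrative (the package fields and the single `stop` number are my assumptions, not UPS’s actual system): sort the packages so the latest stops are loaded first, leaving the first deliveries nearest the door.

```python
# Illustrative route-ordered loading; not UPS's real algorithm.
def load_order(packages):
    """Load packages for the latest stops first, so the earliest
    deliveries end up nearest the door."""
    return sorted(packages, key=lambda p: p["stop"], reverse=True)

packages = [
    {"id": "A", "stop": 1},  # first delivery on the route
    {"id": "B", "stop": 3},  # last delivery
    {"id": "C", "stop": 2},
]

loading_sequence = [p["id"] for p in load_order(packages)]
print(loading_sequence)  # ['B', 'C', 'A']
```

A real loader would also have to solve the spatial packing side (the “Tetris” part); this sketch only captures the route-ordering constraint.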
The Enterprisers Project (TEP): Machines are genderless, have no race, and are in and of themselves free of bias. How does bias creep in?
Sharp: To understand how bias creeps in you first need to understand the difference between programming in the traditional sense and machine learning. With programming in the traditional sense, a programmer analyses a problem and comes up with an algorithm to solve it (basically an explicit sequence of rules and steps). The algorithm is then coded up, and the computer executes the programmer’s defined rules accordingly.
With machine learning, it’s a bit different. Programmers don’t solve a problem directly by analyzing it and coming up with their rules. Instead, they just give the computer access to an extensive real-world dataset related to the problem they want to solve. The computer then figures out how best to solve the problem by itself.
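Sharp’s distinction can be made concrete with a toy contrast. The fraud-flagging domain, the `learn_cutoff` helper, and the midpoint rule below are all invented for illustration; real machine learning fits statistical models rather than a two-line heuristic:

```python
# Traditional programming: the programmer states the rule explicitly.
def flag_transaction_rule(amount):
    return amount > 1000  # hand-chosen cutoff

# Machine learning (caricature): derive the cutoff from labeled examples.
def learn_cutoff(history):
    fraud = [amt for amt, label in history if label == "fraud"]
    ok = [amt for amt, label in history if label == "ok"]
    # Place the cutoff midway between the largest legitimate amount
    # and the smallest fraudulent one seen in the training data.
    return (max(ok) + min(fraud)) / 2

history = [(50, "ok"), (120, "ok"), (900, "ok"), (5000, "fraud"), (7200, "fraud")]
cutoff = learn_cutoff(history)  # 2950.0 for this data

def flag_transaction_learned(amount):
    return amount > cutoff
```

The point is where the rule comes from: in the first function a human wrote it; in the second it falls out of the data, so biased or unrepresentative data produces a biased rule.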
In his latest book ‘Technology vs. Humanity’, futurist Gerd Leonhard once again breaks new ground by bringing together mankind’s urge to upgrade and automate everything (including human biology itself) with our timeless quest for freedom and happiness.
Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth.
Digital transformation has migrated from the mainframe to the desktop to the laptop to the smartphone, wearables and brain-computer interfaces. Before it moves to the implant and the ingestible insert, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange.
Technological innovation in fields from genetic engineering to cyberwarfare is accelerating at a breakneck pace, but ethical deliberation over its implications has lagged behind. Thus argues Sheila Jasanoff — who works at the nexus of science, law and policy — in The Ethics of Invention, her fresh investigation. Not only are our deliberative institutions inadequate to the task of oversight, she contends, but we fail to recognize the full ethical dimensions of technology policy. She prescribes a fundamental reboot.
Ethics in innovation has been given short shrift, Jasanoff says, owing in part to technological determinism, a semi-conscious belief that innovation is intrinsically good and that the frontiers of technology should be pushed as far as possible. This view has been bolstered by the fact that many technological advances have yielded financial profit in the short term, even if, like the ozone-depleting chlorofluorocarbons once used as refrigerants, they have proved problematic or ruinous in the longer term.
Machine learning: Of prediction and policy — from economist.com
Governments have much to gain from applying algorithms to public policy, but controversies loom
Excerpt:
For frazzled teachers struggling to decide what to watch on an evening off (DSC insert: a rare event indeed), help is at hand. An online streaming service’s software predicts what they might enjoy, based on the past choices of similar people. When those same teachers try to work out which children are most at risk of dropping out of school, they get no such aid. But, as Sendhil Mullainathan of Harvard University notes, these types of problem are alike. They require predictions based, implicitly or explicitly, on lots of data. Many areas of policy, he suggests, could do with a dose of machine learning.
Machine-learning systems excel at prediction. A common approach is to train a system by showing it a vast quantity of data on, say, students and their achievements. The software chews through the examples and learns which characteristics are most helpful in predicting whether a student will drop out. Once trained, it can study a different group and accurately pick those at risk. By helping to allocate scarce public funds more accurately, machine learning could save governments significant sums. According to Stephen Goldsmith, a professor at Harvard and a former mayor of Indianapolis, it could also transform almost every sector of public policy.
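The train-then-predict loop the excerpt describes can be caricatured as a one-feature “decision stump.” The attendance numbers and the `train_stump` function below are hypothetical; a real system would use many features, far more records, and proper validation:

```python
# Toy sketch: "train" a one-feature classifier on labeled student
# records (attendance_rate, dropped_out), then score a new group.
def train_stump(records):
    """Pick the attendance threshold that best separates dropouts
    from non-dropouts in the training data."""
    best_t, best_correct = 0.0, -1
    for t in sorted(r[0] for r in records):
        # Predict "dropout" when attendance is below the threshold.
        correct = sum((r[0] < t) == r[1] for r in records)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Training group: (attendance_rate, dropped_out)
training = [(0.95, False), (0.90, False), (0.60, True), (0.50, True), (0.85, False)]
threshold = train_stump(training)  # 0.85 for this data

# Once trained, apply it to a different group.
new_students = {"Ana": 0.92, "Ben": 0.55}
at_risk = {name: rate < threshold for name, rate in new_students.items()}
```

Even this toy version shows why the funding-allocation use matters: the flag on “Ben” comes entirely from patterns in the historical data, for better or worse.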
…
But the case for code is not always clear-cut. Many American judges are given “risk assessments”, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites. (Northpointe, the algorithm provider, disputes the finding.)
Who will own the robots? — from technologyreview.com by David Rotman
We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?
Everyone is waiting for the Internet of Things. The funny thing is, it is already here. Contrary to expectation, though, it isn’t just a bunch of devices that have a chip and an internet connection.
The killer app of the Internet of Things isn’t a thing at all—it is services. And they are being delivered by an unlikely cast of characters: Uber Technologies Inc., SolarCity Corp., ADT Corp., and Comcast Corp., to name a few. One recent entrant: the Brita unit of Clorox Corp., which just introduced a Wi-Fi-enabled “smart” pitcher that can re-order its own water filters.
When internet-connected devices are considered a service, consumers don’t have to worry about integrating gadgets. Focusing on services also helps vendors clarify their offerings.
How does the combination of smarts, sensors and connectivity enhance people’s lives?
From DSC: I inserted a [could] in the title, as I don’t think we’re there yet. That said, I don’t see chatbots, personal assistants, and the use of AI going away any time soon. This should be on our radars from here on out. Chatbots could easily be assigned some heavy lifting duties within K-20 education as well as in the corporate world; but even then, we’ll still need excellent teachers, professors, and trainers/subject matter experts out there. I don’t see anyone being replaced at this point.
Excerpt:
As the equity gap in American education continues, Microsoft co-founder Bill Gates has been urging educators, investors and tech companies to be more open to investing time and money in artificial intelligence-driven education technology programs. The reason? Gates believes that these AI-based EdTech platforms could personalize and revolutionize the school learning experience while eliminating the equity gap.
The Motivation, Revision and Announcement bots each perform respective functions that are intended to help students master exams.
The Motivation bot, for instance, “keeps students motivated with reminders, social support, and other means,” while the Revision bot “helps students to best understand ways to improve their work” and the Announcement bot “tells students how much studying they need to do based on the amount of time available.”
Machine learning is best defined as the transition from feeding the computer with programs containing specific instructions in the form of step-by-step rules or algorithms to feeding the computer with algorithms that can “learn” from data and can make inferences “on their own.” The computer is “trained” by data which is labeled or classified based on previous outcomes, and its software algorithms “learn” how to predict the classification of new data that is not labeled or classified. For example, after a period of training in which the computer is presented with spam and non-spam email messages, a good machine learning program will successfully identify (i.e., predict) which email message is spam and which is not, without human intervention. In addition to spam filtering, machine learning has been applied successfully to problems such as handwriting recognition, machine translation, fraud detection, and product recommendations.
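As a small sketch of that spam example: the snippet below trains a minimal naive-Bayes-style classifier on a handful of labeled messages and then predicts labels for new ones. The tiny training set and word-level features are illustrative assumptions; production filters use far more data and engineering:

```python
import math
from collections import Counter

# Minimal naive-Bayes-style spam filter trained on labeled messages.
def train(messages):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    words = text.lower().split()
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label, wc in counts.items():
        total = sum(wc.values())
        # Sum log-probabilities, with add-one smoothing for unseen words.
        scores[label] = sum(math.log((wc[w] + 1) / (total + vocab)) for w in words)
    return max(scores, key=scores.get)

training = [
    ("win a free prize now", "spam"),
    ("free prize claim today", "spam"),
    ("lunch at noon today", "ham"),
    ("project update at noon", "ham"),
]
model = train(training)
print(predict(model, "claim your free prize"))  # spam
print(predict(model, "update on the project"))  # ham
```

Nobody wrote a rule saying “free prize” is spammy; the classifier inferred it from the labeled examples, which is exactly the shift the paragraph above describes.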
I believe we are moving into the fourth era of personal computing. The first era was characterized by the emergence of the PC. The second by the web and the browser, and the third by mobile and apps.
The fourth personal computing platform will be a combination of IoT, wearable and AR-based clients using speech and gesture, connected over 4G/5G networks to PA, CaaS and social networking platforms that draw upon a new class of cloud-based AI to deliver highly personalized access to information and services.
So what does the fourth era of personal computing look like? It’s a world of smart objects, smart spaces, voice control, augmented reality, and artificial intelligence.
From DSC: How much longer before the functionalities that are found in tools like Bluescape & Mural are available via tvOS-based devices? Entrepreneurs and VCs out there, take note. Given:
the growth of freelancing and people working from home and/or out on the road
the need for people to collaborate over a distance
the growth of online learning
the growth of active/collaborative learning spaces in K-12 and higher ed
the need for lifelong learning
…this could be a lucrative market. Also, it would be meaningful work…knowing that you are helping people learn and earn.
It seems even extinction doesn’t stop you from being on Facebook Messenger.
National Geographic Kids is the latest publisher to try out chatbots on the platform. Tina the T-Rex, naturally, is using Messenger to teach kids about dinosaurs over the summer break, despite a critical lack of opposable thumbs.
…
Despite being 65 million years old, Tina is pretty limited in her bot capabilities; she can answer from a pre-programmed script, devised by tech company Rehab Studio and tweaked by Chandler, on things like dinosaur diet and way of life.
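A pre-programmed script like Tina’s can be approximated by simple keyword matching with a fallback reply. The keywords and canned answers below are made up for illustration (they are not Rehab Studio’s actual script):

```python
# Hypothetical scripted bot: match keywords against canned answers,
# fall back to a default reply when nothing matches.
SCRIPT = {
    "eat": "I was a carnivore, so mostly other dinosaurs, I'm afraid.",
    "live": "I lived in the Late Cretaceous, about 68 million years ago.",
    "arms": "My arms were short, but surprisingly strong!",
}

def reply(message):
    text = message.lower()
    for keyword, answer in SCRIPT.items():
        if keyword in text:
            return answer
    return "Roar! I only know about dinosaur diet and daily life."

print(reply("What did you eat?"))
```

This is why Tina is “pretty limited”: anything outside the scripted keywords hits the fallback, with no understanding involved.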
The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
‘New Dimensions in Testimony’ is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.
Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1,000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from Conscience Display, viewers were able to ask Gutter’s holographic image questions that triggered relevant responses.
Just as the world’s precious artworks and monuments need a touch-up to look their best, the home we’ve built to host the world’s cultural treasures online needs a lick of paint every now and then. We’re ready to pull off the dust sheets and introduce the new Google Arts & Culture website and app, by the Google Cultural Institute. The app lets you explore anything from cats in art since 200 BCE to the color red in Abstract Expressionism, and everything in between. Our new tools will help you discover works and artifacts, allowing you to immerse yourself in cultural experiences across art, history and wonders of the world—from more than a thousand museums across 70 countries…
From DSC:
I read the article mentioned below. It made me wonder how 3 of the 4 main highlights that Fred mentioned (that are coming to Siri with tvOS 10) might impact education/training/learning-related applications and offerings made possible via tvOS & Apple TV:
Live broadcasts
Topic-based searches
The ability to search YouTube via Siri
The article prompted me to wonder:
Will educators and trainers be able to offer live lectures and training (globally) that can be recorded and later searched via Siri?
What if second screen devices could help learners collaborate and participate in active learning while watching what’s being presented on the main display/“TV”?
What if learning taken this way could be recorded on one’s web-based profile, a profile that is based upon blockchain-based technologies and maintained via appropriate/proven organizations of learning? (A profile that’s optionally made available to services from Microsoft/LinkedIn.com/Lynda.com and/or to a service based upon IBM’s Watson, and/or to some other online-based marketplace/exchange for matching open jobs to potential employees.)
Or what if you could earn a badge or prove a competency via this manner?
Hmmm…things could get very interesting…and very powerful.
More choice. More control. Over one’s entire lifetime.
The forthcoming update to Apple TV continues to bring fresh surprises for owners of Apple’s set top box. Many improvements are coming to tvOS 10, including single-sign-on support and an upgrade to Siri’s capabilities. Siri has already opened new doors thanks to the bundled Siri Remote, which simplifies many functions on the Apple TV interface. Four main highlights are coming to Siri with tvOS 10, which is expected to launch this fall.
CBS today announced the launch of an all-new Apple TV app that will center around the network’s always-on, 24-hour “CBSN” streaming network and has been designed exclusively for tvOS. In addition to the live stream of CBSN, the app curates news stories and video playlists for each user based on previously watched videos.
The new app will also take advantage of the 4th generation Apple TV’s deep Siri integration, allowing users to tell Apple’s personal assistant that they want to “Watch CBS News” to immediately start a full-screen broadcast of CBSN. While the stream is playing, users can interact with other parts of the app to browse related videos, bookmark some to watch later, and begin subscribing to specific playlists and topics.