The next battleground: The 4th Era of Personal Computing — from stevebrownfuturist.com by Steve Brown

Excerpt:

I believe we are moving into the fourth era of personal computing. The first era was characterized by the emergence of the PC. The second by the web and the browser, and the third by mobile and apps.

 

The fourth personal computing platform will be a combination of IoT, wearable and AR-based clients using speech and gesture, connected over 4G/5G networks to PA, CaaS and social networking platforms that draw upon a new class of cloud-based AI to deliver highly personalized access to information and services.

 

 

So what does the fourth era of personal computing look like? It’s a world of smart objects, smart spaces, voice control, augmented reality, and artificial intelligence.

 

 

 

 

IBM made a ‘crash course’ for the White House, and it’ll teach you all the AI basics — from futurism.com by Ramon Perez

Summary:

With the current AI revolution comes a flock of skeptics. Alarmed at what AI could become in the near future, the White House released a Notice of Request for Information (RFI) on it. In response, IBM has created what amounts to an AI 101, giving a good sense of the current state, future, and risks of AI.

 

 

Also see:

 

FedGovt-Request4Info-June2016

 

 

 

A call to arms against the hacker hordes — from sloanreview.mit.edu by Theodore Kinni

Excerpt (emphasis DSC):

Attribution and retribution in the fight against cybercrime: Imagine being enthroned at the end of the long table in the C-suite. You’ve got riches beyond imagination at your disposal; tens of thousands of vassals are toiling day and night for you. Your knights surround you, awaiting your command. And, at this very moment, some evil-minded jester with a computer and an Internet connection is breaching the castle walls.

But wait, is that a war horn you hear in the distance? Yes, it’s the lawyers from Steptoe & Johnson riding to your rescue. Enough, say partner Stewart Baker and trusty clerk Victoria Muth in an article for Brink. “It’s pretty clear that building higher walls around our networks is a dead end. So is tighter scrutiny and control over what happens on the network,” they write. “Government is failing us…, too.” The solution? Fight back.

Attribution and retribution are the weapons in this counterattack. “It might mean building ‘beacons’ into documents so that when they are opened by attackers, they phone home to alert defenders that their information was compromised,” suggest Baker and Muth. “It might mean using information provided by beacons to compromise the attackers’ network and gather evidence as to the attackers’ identities. It might mean stopping a DDOS attack by taking over the botnet, or by patching the vulnerability by which the botnet conscripted third-party machines.”
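
For readers wondering what a document “beacon” looks like mechanically, here is a minimal, hedged sketch (in Python/Flask) of the defender’s side of that idea: a sensitive file embeds a remote image URL containing a unique token, and this endpoint logs whoever fetches it. The endpoint path, token scheme, and alerting step are all invented for illustration; this is not code from Steptoe & Johnson or anyone else.

```python
# Toy "document beacon" listener: a sensitive file embeds a remote resource URL
# containing a unique token; when the file is opened and that resource is fetched,
# this endpoint logs the callback so defenders know the document was accessed.
# Illustrative only; hostnames, paths, and the token scheme are made up.
from datetime import datetime
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/beacon/<token>.png")
def beacon(token):
    # In practice you would alert the security team, not just print to a log.
    print(f"[{datetime.utcnow().isoformat()}] beacon {token} fetched "
          f"from {request.remote_addr} ({request.headers.get('User-Agent', 'unknown')})")
    return send_file("pixel.png", mimetype="image/png")  # a local 1x1 transparent image

if __name__ == "__main__":
    app.run(port=8080)
```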

 

 

Also see:

Machine Learning – New Weapon in the Hacking Wars? — by Ed Featherston
@CloudExpo #API #Cloud #BigData #MachineLearning

Excerpt:

It feels like the barbarians are continually at the gate. We can’t seem to go more than a week before a new data breach is in the news, impacting potentially millions of individuals. The targets range from companies like Omni Hotels, which had been breached affecting up to 50,000 customers whose personal and credit card information was exposed, to North Carolina State University, where over 38,000 students’ personal information, including their SSNs, were at risk. As I mentioned in a recent blog ‘Internet of Things and Big Data – who owns your data?‘, we have been storing our personal and credit card information in a variety of systems, credit card companies, banks, online retailers, hotels – and that’s just naming a few. The information in those systems is more valuable than gold to the hackers. The hacker attacks are constant, creative, and changing frequently.

 

 

IBM is training Watson to hunt hackers — from washingtonpost.com by Andrea Peterson

Excerpt:

Watson, IBM’s computer brain, has a lot of talents. It mastered “Jeopardy!,” it cooks, and even tries to cure cancer. But now, it’s training for a new challenge: Hunting hackers.

On [May 10th, 2016], IBM Security announced a new cloud-based version of the cognitive technology, dubbed “Watson for Cybersecurity.” In the fall, IBM will be partnering with eight universities to help get Watson up to speed by flooding it with security reports and data.

 

 

 

From DSC:
I try never to judge anyone, as I don’t want to be judged (Matthew 7:1).  I try to extend grace, as I, myself, have nothing to stand on.

That said, I struggle with how to deal with and view hackers.  Daily, they wreak havoc on institutions and individuals throughout the globe — causing billions of dollars of damage.

I’m amazed at the lack of punishment dealt out to hackers. Our governments don’t step in, likely because they are all trying to hack each other’s systems as well.

But the individual and group-based hackers out there have created an underground economy…where one wakes up and goes to the office and hacks away, all for making some coin — just like a normal job evidently.  These hackers have smarts, know-how, and intelligence — but they have chosen to put it towards destructive purposes.  And there doesn’t seem to be any fear involved in doing so. 

Well, that needs to stop! There needs to be major punishment for those who hack.

That’s why the articles above caught my eye. We need to fight back against the hackers. We need to inflict serious damage on their systems, networks, hardware and software — just as they do to ours.

I don’t like to take this stance. I don’t like to even use the words “fight back.” But there is warfare going on — and fear needs to enter the equation for those who would resort to hacking.

BTW, I’m even nervous about posting this item…as some hacker could come after my site. If so, I hope to be back up and running again soon. But if not…yet another one bites the dust.

 

 

Infographic: IoT and the classroom of tomorrow — from cr80news.com by Andrew Hudson
Student IDs among list of most used smart devices on campus

Excerpt:

The classroom of tomorrow will undoubtedly employ more and more smart devices, and coupled with the Internet of Things (IoT) phenomenon, the way in which students learn could be very different in the not-so-distant future.

A new survey conducted by Extreme Networks reveals that while smart classrooms and schools only represent a small fraction of campuses today, the promise is there for the technology to redefine the academic experience going forward. There are K-12 schools and universities across the country that are already using the IoT to connect smart devices that can “talk” to one another for the purpose of enhancing the learning experience.

From DSC:
I look forward to the time when machine-to-machine communications and sensors will give faculty members the settings that they want set up/initiated as soon as they walk into a room (some of this is most likely already occurring somewhere else…just not on our campus yet!). A rough sketch of how such room automation might be wired together appears after the lists below:

  • The front lights lower down 50% (as the professor had requested previously)
  • The front 80″ LCD — a smart/Internet-connected display — is turned on and brings up that specific course on the screen (having already signed into the cloud-based CMS/LMS upon that professor entering the room; the system has already queried the appropriate back-end system to ascertain what that professor teaches at that particular time and place)
  • The window treatments are lowered all the way down for better viewing
  • The speakers play a previously scheduled song, or a spoken poem, or an announcement, or what the students should be doing for the first 5-10 minutes of class
  • Etc.

Also:

  • Attendance is automatic (this clearly is already here today and has been for a while).
  • Students could receive any handouts that the professor wanted to wait to deliver until that particular date and time — again, automatically
  • Students could upload content that they created — automatically to an electronic parking lot, for the professor or other students to review and comment on
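
To make the scenario above more concrete, here is a rough Python sketch of how such machine-to-machine room automation might be wired together: a room-entry event triggers a lookup against a timetable service, then preference-driven commands go out to the room’s connected devices over MQTT. The broker address, topic names, and API endpoint are hypothetical placeholders, not any real campus system.

```python
# Hypothetical sketch: configure a classroom when a professor is detected entering.
# Assumes an MQTT broker for building controls and a timetable REST API;
# all endpoints, topics, and IDs below are made-up placeholders.
import json
import requests
import paho.mqtt.publish as publish

BROKER = "iot-broker.campus.example"                          # hypothetical broker
TIMETABLE_API = "https://timetable.campus.example/api/now"    # hypothetical API

def on_room_entry(professor_id: str, room_id: str) -> None:
    # Ask the back-end system what this professor teaches here and now.
    resp = requests.get(TIMETABLE_API, params={"prof": professor_id, "room": room_id})
    course = resp.json().get("course", "unknown")

    # Apply the professor's stored preferences via machine-to-machine messages.
    messages = [
        (f"{room_id}/lights/front", json.dumps({"level": 50})),            # dim to 50%
        (f"{room_id}/display/main", json.dumps({"power": "on", "open_course": course})),
        (f"{room_id}/blinds", json.dumps({"position": "closed"})),
        (f"{room_id}/audio", json.dumps({"play": "intro_announcement"})),
    ]
    for topic, payload in messages:
        publish.single(topic, payload, hostname=BROKER)

if __name__ == "__main__":
    # e.g. triggered by a badge reader or BLE beacon detecting the professor
    on_room_entry(professor_id="prof-42", room_id="bldg-a/room-101")
```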

Also see the infographic, a portion of which is seen below:

Benefits-of-IoT-Aug2016

 

Take a step inside the classroom of tomorrow — from techradar.com by Nicholas Fearn
Making learning fun

 

 

Excerpt:

But the classroom of tomorrow will look very different. The latest advancements in technology and innovation are paving the way for an educational space that’s interactive, engaging and fun.

The conventions of learning are changing. It’s becoming normal for youngsters to use games like Minecraft to develop skills such as team working and problem solving, and for teachers to turn to artificial intelligence to get a better understanding of how their pupils are progressing in lessons.

Virtual reality is also introducing new possibilities in the classroom. Gone are the days of imagining what an Ancient Egyptian tomb might look like – now you can just strap on a headset and transport yourself there in a heartbeat.

The potential for using VR to teach history, geography and other subjects is incredible when you really think about it – and it’s not the only tech that’s going to shake things up.

Artificial intelligence is already doing groundbreaking things in areas like robotics, computer science, neuroscience and linguistics, but now it’s entering the world of education too.

London-based edtech firm Digital Assess has been working on an AI app that has the potential to revolutionise the way youngsters learn.

With the backing of the UK Government, the company has been trialing its web-based application Formative Assess in schools in England.

Using semantic indexing and natural language processing in a similar way to social networking sites, an on-screen avatar – which can be a rubber duck or robot – quizzes students on their knowledge and provides them with individual feedback on their work.
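
Purely as an illustration of the general approach (semantically comparing a student’s free-text answer against reference material and returning feedback), here is a hedged sketch in Python using scikit-learn. It is not Digital Assess’s actual implementation, and the reference answers and feedback messages are invented.

```python
# Toy illustration of semantic matching of a student answer against reference
# answers, followed by canned feedback; not the Formative Assess product itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_answers = [
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
    "Plants use chlorophyll to absorb light and release oxygen as a by-product.",
]

def feedback(student_answer: str) -> str:
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform(reference_answers + [student_answer])
    student_vec = vectors[len(reference_answers)]      # last row is the student's answer
    scores = cosine_similarity(student_vec, vectors[:len(reference_answers)]).flatten()
    best = scores.max()
    if best > 0.6:
        return "Good: your answer covers the key idea."
    if best > 0.3:
        return "You're on the right track; say more about how the light energy is stored."
    return "Have another look at how plants capture and store energy."

print(feedback("Plants turn sunlight into sugar they can use for energy."))
```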

 

 

 

Uploaded on Jul 21, 2016

 

Description:
A new wave of compute technology, fueled by big data analytics, the internet of things, augmented reality and so on, will change the way we live and work to be more immersive and natural, with technology in the role of partner.

 

 

Also see:

Excerpt:

We haven’t even scratched the surface of the things technology can do to further human progress.  Education is the next major frontier.  We already have PC- and smartphone-enabled students, as well as tech-enabled classrooms, but the real breakthrough will be in personalized learning.

Every educator divides his or her time between teaching and interacting.  In lectures they have to choose between teaching to the smartest kid in the class and the weakest.  Efficiency (and reality) dictates that they must teach to the theoretical median, meaning some students will be bored and some will still struggle.  What if a digital assistant could step in to personalize the learning experience for each student, accelerating the curriculum for the advanced students and providing extra support for those who need more help?  The digital assistant could “sense” and “learn” that Student #1 has already mastered a particular subject and “assign” more advanced materials.  And it could provide additional work to Student #2 to ensure that he or she was ready for the next subject.  Self-paced learning to supplement and support classroom learning…that’s the next big advancement.

 

 

 

 

How might these enhancements to Siri and tvOS 10 impact education/training/learning-related offerings & applications? [Christian]

From DSC:
I read the article mentioned below.  It made me wonder how 3 of the 4 main highlights that Fred mentioned (that are coming to Siri with tvOS 10) might impact education/training/learning-related applications and offerings made possible via tvOS & Apple TV:

  1. Live broadcasts
  2. Topic-based searches
  3. The ability to search YouTube via Siri

The article prompted me to wonder:

  • Will educators and trainers be able to offer live lectures and training (globally) that can be recorded and later searched via Siri? 
  • What if second screen devices could help learners collaborate and participate in active learning while watching what’s being presented on the main display/”TV?”
  • What if learning taken this way could be recorded on one’s web-based profile, a profile that is based upon blockchain-based technologies and maintained via appropriate/proven organizations of learning? (A profile that’s optionally made available to services from Microsoft/LinkedIn.com/Lynda.com and/or to a service based upon IBM’s Watson, and/or to some other online-based marketplace/exchange for matching open jobs to potential employees. A minimal sketch of such a tamper-evident record appears after this list.)
  • Or what if you could earn a badge or prove a competency via this manner?
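
To make the blockchain-based profile idea in the third bullet a bit more concrete, here is a toy, hedged sketch in Python of a hash-chained learner record: each entry commits to the previous one, so the history is tamper-evident. It is only an illustration of the general idea, not any real credentialing platform or standard.

```python
# Toy tamper-evident learner profile: each block hashes the previous block,
# so altering an earlier credential invalidates everything after it.
# Purely illustrative; issuers and badge names are invented.
import hashlib
import json
import time

def make_block(prev_hash: str, credential: dict) -> dict:
    body = {"timestamp": time.time(), "credential": credential, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "credential", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

profile = [make_block("genesis", {"badge": "Intro to Statistics", "issuer": "Example U"})]
profile.append(make_block(profile[-1]["hash"], {"badge": "Data Visualization", "issuer": "Example MOOC"}))
print("profile valid:", verify(profile))
```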

Hmmm…things could get very interesting…and very powerful.

More choice. More control. Over one’s entire lifetime.

Heutagogy on steroids.

Micro-learning.

Perhaps this is a piece of the future for MOOCs…

 

MoreChoiceMoreControl-DSC

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

StreamsOfContent-DSC

 

 


 

Apple TV gets new Siri features in tvOS 10 — from iphonefaq.org by Fred Straker

Excerpt:

The forthcoming update to Apple TV continues to bring fresh surprises for owners of Apple’s set top box. Many improvements are coming to tvOS 10, including single-sign-on support and an upgrade to Siri’s capabilities. Siri has already opened new doors thanks to the bundled Siri Remote, which simplifies many functions on the Apple TV interface. Four main highlights are coming to Siri with tvOS 10, which is expected to launch this fall.

 


 

Addendum on 7/17/16:

CBS News Launches New Apple TV App Designed Exclusively for tvOS — from macrumors.com

Excerpt:

CBS today announced the launch of an all-new Apple TV app that will center around the network’s always-on, 24-hour “CBSN” streaming network and has been designed exclusively for tvOS. In addition to the live stream of CBSN, the app curates news stories and video playlists for each user based on previously watched videos.

The new app will also take advantage of the 4th generation Apple TV’s deep Siri integration, allowing users to tell Apple’s personal assistant that they want to “Watch CBS News” to immediately start a full-screen broadcast of CBSN. While the stream is playing, users can interact with other parts of the app to browse related videos, bookmark some to watch later, and begin subscribing to specific playlists and topics.

 

 

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This posting represents Part VI in a series of such postings that illustrate how quickly things are moving (Part I, Part II, Part III, Part IV, Part V). It is also meant to ask:

  • How do we collectively start talking about the future that we want?
  • How do we go about creating our dreams, not our nightmares?
  • Most certainly, governments will be involved….but who else should be involved in these discussions? Shouldn’t each one of us participate in some way, shape, or form?

 

 

AIsWhiteGuyProblem-NYTimes-June2016

 

Artificial Intelligence’s White Guy Problem — from nytimes.com by Kate Crawford

Excerpt:

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.

 

 

Facebook is using artificial intelligence to categorize everything you write — from futurism.com

Excerpt:

Facebook has just revealed DeepText, a deep learning AI that will analyze everything you post or type and bring you closer to relevant content or Facebook services.

 

 

March of the machines — from economist.com
What history tells us about the future of artificial intelligence—and how society should respond

Excerpt:

EXPERTS warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.

 

As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.

 

 

Backlash-Data-DefendantsFutures-June2016

 

In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures — from nytimes.com by Mitch Smith

Excerpt:

CHICAGO — When Eric L. Loomis was sentenced for eluding the police in La Crosse, Wis., the judge told him he presented a “high risk” to the community and handed down a six-year prison term.

The judge said he had arrived at his sentencing decision in part because of Mr. Loomis’s rating on the Compas assessment, a secret algorithm used in the Wisconsin justice system to calculate the likelihood that someone will commit another crime.

Compas is an algorithm developed by a private company, Northpointe Inc., that calculates the likelihood of someone committing another crime and suggests what kind of supervision a defendant should receive in prison. The results come from a survey of the defendant and information about his or her past conduct. Compas assessments are a data-driven complement to the written presentencing reports long compiled by law enforcement agencies.

 

 

Google Tackles Challenge of How to Build an Honest Robot — from bloomberg.com

Excerpt:

Researchers at Alphabet Inc. unit Google, along with collaborators at Stanford University, the University of California at Berkeley, and OpenAI — an artificial intelligence development company backed by Elon Musk — have some ideas about how to design robot minds that won’t lead to undesirable consequences for the people they serve. They published a technical paper Tuesday outlining their thinking.

The motivation for the research is the immense popularity of artificial intelligence, software that can learn about the world and act within it. Today’s AI systems let cars drive themselves, interpret speech spoken into phones, and devise trading strategies for the stock market. In the future, companies plan to use AI as personal assistants, first as software-based services like Apple Inc.’s Siri and the Google Assistant, and later as smart robots that can take actions for themselves.

But before giving smart machines the ability to make decisions, people need to make sure the goals of the robots are aligned with those of their human owners.

 

 

Policy paper | Data Science Ethical Framework — from gov.uk
From: Cabinet Office, Government Digital Service and The Rt Hon Matt Hancock MP
First published: 19 May 2016
Part of: Government transparency and accountability

This framework is intended to give civil servants guidance on conducting data science projects, and the confidence to innovate with data.

Detail: Data science provides huge opportunities for government. Harnessing new forms of data with increasingly powerful computer techniques increases operational efficiency, improves public services and provides insight for better policymaking. We want people in government to feel confident using data science techniques to innovate. This guidance is intended to bring together relevant laws and best practice, to give teams robust principles to work with. The publication is a first version that we are asking the public, experts, civil servants and other interested parties to help us perfect and iterate. This will include taking on evidence from a public dialogue on data science ethics. It was published on 19 May by the Minister for Cabinet Office, Matt Hancock. If you would like to help us iterate the framework, find out how to get in touch at the end of this blog.

 

 

 

WhatsNextForAI-June2016

Excerpt (emphasis DSC):

We need to update the New Deal for the 21st century and establish a trainee program for the new jobs artificial intelligence will create. We need to retrain truck drivers and office assistants to create data analysts, trip optimizers and other professionals we don’t yet know we need. It would have been impossible for an antebellum farmer to imagine his son becoming an electrician, and it’s impossible to say what new jobs AI will create. But it’s clear that drastic measures are necessary if we want to transition from an industrial society to an age of intelligent machines.

The next step in achieving human-level AI is creating intelligent—but not autonomous—machines. The AI system in your car will get you safely home, but won’t choose another destination once you’ve gone inside. From there, we’ll add basic drives, along with emotions and moral values. If we create machines that learn as well as our brains do, it’s easy to imagine them inheriting human-like qualities—and flaws.

 

 

DARPA to Build “Virtual Data Scientist” Assistants Through A.I. — from inverse.com by William Hoffman
A.I. will make up for the lack of data scientists.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), which aims to help non-experts bridge what it calls the “data-science expertise gap” by allowing artificial assistants to help people with machine learning. DARPA calls it a “virtual data scientist” assistant.

This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.

 

 

Robot that chooses to inflict pain sparks debate about AI systems — from interestingengineering.com by Maverick Baker

Excerpt:

A robot built by roboticist Alexander Reben of the University of California, Berkeley has the ability to decide, using AI, whether or not to inflict pain.

The robot is meant to spark a debate about whether an AI system can get out of control, reminiscent of the Terminator. Its design is incredibly simple, serving only one purpose: to decide whether or not to inflict pain. Reben’s work was published in a scientific journal and aims to prompt discussion of whether artificially intelligent robots could get out of hand if given the opportunity.

 

 

The NSA wants to spy on the Internet of Things. Everything from thermostats to pacemakers could be mined for intelligence data. — from engadget.com by Andrew Dalton

Excerpt:

We already know the National Security Agency is all up in our data, but the agency is reportedly looking into how it can gather even more foreign intelligence information from internet-connected devices ranging from thermostats to pacemakers. Speaking at a military technology conference in Washington D.C. on Friday, NSA deputy director Richard Ledgett said the agency is “looking at it sort of theoretically from a research point of view right now.” The Intercept reports Ledgett was quick to point out that there are easier ways to keep track of terrorists and spies than to tap into any medical devices they might have, but did confirm that it was an area of interest.

 

 

The latest tool in the NSA’s toolbox? The Internet of Things — from digitaltrends.com by Lulu Chang

Excerpt:

You may love being able to set your thermostat from your car miles before you reach your house, but be warned — the NSA probably loves it too. On Friday, the National Security Agency — you know, the federal organization known for wiretapping and listening in on U.S. citizens’ conversations — told an audience at Washington’s Newseum that it’s looking into using the Internet of Things and other connected devices to keep tabs on individuals.

 


Addendum on 6/29/16:

 

Addendums on 6/30/16

 

Addendum on 7/1/16

  • Humans are willing to trust chatbots with some of their most sensitive information — from businessinsider.com by Sam Shead
    Excerpt:
    A study has found that people are inclined to trust chatbots with sensitive information and that they are open to receiving advice from these AI services. The “Humanity in the Machine” report —published by media agency Mindshare UK on Thursday — urges brands to engage with customers through chatbots, which can be defined as artificial intelligence programmes that conduct conversations with humans through chat interfaces.

 

 

 

 

What the bot revolution could mean for online learning — from huffingtonpost.com by Daily Bits Of

Excerpt:

We’re embracing the bot revolution
With these limitations in mind, we embrace the bot movement. In short, having our bite-sized courses delivered via messaging platforms will open up a lot of new benefits for our users.

  1. The courses will become social.
  2. It will become easier to consume a course via a channel that fits best for the course.
  3. The courses will become more interactive.
  4. Bots will remove some of the friction.

 

 

The future of online learning will happen via messaging services.

 

 

 

 


 

Also relevant here:

 

WatsonTrainPreSchoolers-June2016

 

 

From DSC:
By posting such items, I’m not advocating that we remove teachers, professors, trainers, coaches, etc. from the education/training equation.  Rather, I am advocating that we use these technologies as tools for educating and training people — and for helping people of all ages grow, and reinvent themselves when necessary.  Such tools should help the overworked teachers, professors, trainers, etc. of the world deliver excellent, effective e-learning experiences for our students/employees.

 

 

 

 

Specialists central to high-quality, engaging online programming [Christian]

DanielChristian-TheEvoLLLution-TeamsSpecialists-6-20-16

 

Specialists central to high-quality, engaging online programming — from EvoLLLution.com (where the LLL stands for lifelong learning) by Daniel Christian

Excerpts:

Creating high-quality online courses is getting increasingly complex—requiring an ever-growing set of skills. Faculty members can’t do it all, nor can instructional designers, nor can anyone else.  As time goes by, new entrants and alternatives to traditional institutions of higher education will likely continue to appear on the higher education landscape—the ability to compete will be key.

For example, will there be a need for the following team members in your not-too-distant future?

  • Human Computer Interaction (HCI) Specialists: those with knowledge of how to leverage Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in order to create fun and engaging learning experiences (while still meeting the learning objectives)
  • Data Scientists
  • Artificial Intelligence Integrators
  • Cognitive Computing Specialists
  • Intelligent Tutoring Developers
  • Learning Agent Developers
  • Algorithm Developers
  • Personalized Learning Specialists
  • Cloud-based Learner Profile Administrators
  • Transmedia Designers
  • Social Learning Experts

 

chatbots-wharton-june2016

The rise of the chatbots: Is it time to embrace them? — from knowledge.wharton.upenn.edu

Excerpt:

The tech world is all agog these days about chatbots. These are automated computer programs that simulate online conversations with people to answer questions or perform tasks. While chatbots have been around in various rudimentary forms for years — think of Clippy, Microsoft’s paper clip virtual assistant — they have been taking off lately as advances in machine learning and artificial intelligence make them more versatile than ever. Among the most well-known chatbots: Apple’s Siri.

In rapid succession over the past few months, Microsoft, Facebook and Google have each unveiled their chatbot strategies, touting the potential for this evolving technology to aid users and corporate America with its customer-service capabilities as well as business utility features like organizing a meeting. Yahoo joined the bandwagon recently, launching its first chatbots on a chat app called Kik Messenger.

 

 

 

Bill Gates says the next big thing in tech can help people learn like he does — from businessinsider.com by Matt Weinberger

Excerpt (emphasis DSC):

In a new interview with The Verge, Microsoft cofounder and richest man in the world Bill Gates explained the potential for chatbots (programs you can text with as if they’re human) in education.

Gates lauds the potential for what he calls “dialogue richness,” where a chatbot can really hold a conversation with a student, essentially making it into a tutor that can walk them through even the toughest, most subjective topics.

It’s actually similar to how Gates himself likes to learn, he tells The Verge…

 

 

The complete beginner’s guide to chatbots — from chatbotsmagazine.com by Matt Schlicht
Everything you need to know.

Excerpt (emphasis DSC):

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

What is a chatbot?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.

Examples of Chat Bots
Weather bot. Get the weather whenever you ask.
Grocery bot. Help me pick out and order groceries for the week.
News bot. Ask it to tell you whenever something interesting happens.
Life advice bot. I’ll tell it my problems and it helps me think of solutions.
Personal finance bot. It helps me manage my money better.
Scheduling bot. Get me a meeting with someone on the Messenger team at Facebook.
A bot that’s your friend. In China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.
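
To ground that definition, here is a minimal rule-based chatbot sketch in Python (the “rules” half of “rules and sometimes artificial intelligence”). The intents and canned replies are invented for illustration; production bots layer NLP and machine learning on top of this same loop.

```python
# Minimal rule-based chatbot: match keywords in the user's message to a canned
# reply. The chat-interface loop is the same one richer AI bots sit behind.
import re

RULES = [
    (re.compile(r"\bweather\b", re.I), "It's 72 degrees and sunny (sample data)."),
    (re.compile(r"\bmeeting\b", re.I), "Okay, when would you like to meet?"),
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! Ask me about the weather or a meeting."),
]

def reply(message: str) -> str:
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I don't understand that yet."

if __name__ == "__main__":
    while True:
        text = input("you> ")
        if text.lower() in {"quit", "exit"}:
            break
        print("bot>", reply(text))
```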

 

 

 

Chatbots explained: Why Facebook and other tech companies think they’re the biggest thing since the iPhone — from businessinsider.com by Biz Carson

Excerpt:

Chatbots are the future, whether we’re ready for them or not.

On Tuesday (April 5, 2016), Facebook launched Bots for Messenger, a step that could define the next decade in the same way that the Apple App Store launch paved the path for companies like Uber to build a business off your phone. Its new messaging platform will help businesses build intelligent chatbots to let them communicate in Messenger.

“Today could be the beginning of a new era,” said Facebook Messenger chief David Marcus.

So what are these chatbots, and why is everyone obsessed?

 

 

 

Facebook wants to completely revolutionize the way you talk to businesses — from businessinsider.com by Jillian D’Onfro

 

 

 

Bot wars: Why big tech companies want apps to talk back to you — from fastcompany.com by Jared Newman
Can a new wave of chatbots from Facebook and Microsoft upend apps as we know them, or is that just wishful thinking?

Excerpt:

The rise of conversational “chatbots” begins with a claim you might initially dismiss as preposterous. “Bots are the new apps,” Microsoft CEO Satya Nadella declared during the company’s Build developers conference last month. “People-to-people conversations, people-to-digital assistants, people-to-bots, and even digital assistants-to-bots. That’s the world you’re going to get to see in the years to come.”

 

 

 

Microsoft CEO Nadella: ‘Bots are the new apps’

Excerpt:

SAN FRANCISCO – Microsoft CEO Satya Nadella kicked off the company’s Build developers conference with a vision of the future filled with chatbots, machine learning and artificial intelligence.

“Bots are the new apps,” said Nadella during a nearly three-hour keynote here that sketched a vision for the way humans will interact with machines. “People-to-people conversations, people-to-digital assistants, people-to-bots and even digital assistants-to-bots. That’s the world you’re going to get to see in the years to come.”

Onstage demos hammered home those ideas. One involved a smartphone conversing with digital assistant Cortana about planning a trip to Ireland, which soon found Cortana bringing in a Westin Hotels chatbot that booked a room based on the contents of the chat.

 

 

 

 


 

Addendums on 6/17/16:

 

HolographicStorytellingJWT-June2016

HolographicStorytellingJWT-2-June2016

 

Holographic storytelling — from jwtintelligence.com by Jade Perry

Excerpt (emphasis DSC):

The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.

‘New Dimensions in Testimony’ is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.

Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from Conscience Display, viewers were able to ask Gutter’s holographic image questions that triggered relevant responses.
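
Conceptually, the question-answering piece resembles retrieving the best-matching pre-recorded clip for whatever a visitor asks. Here is a toy Python sketch of that retrieval step; the questions, clip names, and matching method are invented, and the real project uses far richer natural language processing.

```python
# Toy retrieval of the best-matching pre-recorded answer clip for a visitor's
# question, using simple word overlap; illustrative only.
import re

RECORDED_CLIPS = {
    "clip_017.mp4": "Where were you born and what was your childhood like?",
    "clip_142.mp4": "How did you survive the camps?",
    "clip_305.mp4": "What message do you have for young people today?",
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def best_clip(question: str) -> str:
    q = tokens(question)
    def overlap(item):
        t = tokens(item[1])
        return len(q & t) / max(len(t), 1)
    return max(RECORDED_CLIPS.items(), key=overlap)[0]

print(best_clip("What would you like to tell young people?"))  # -> clip_305.mp4
```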

 

 

From DSC:
I wonder…is this an example of a next generation, visually-based chatbot*?

With the growth of artificial intelligence (AI), intelligent systems, and new types of human computer interaction (HCI), this type of concept could offer an on-demand learning approach that’s highly engaging — and accessible from face-to-face settings as well as from online-based learning environments. (If it could be made to take in some of the context of a particular learner and where a learner is in the relevant Zone of Proximal Development (via web-based learner profiles/data), it would be even better.)

As an aside, is this how we will obtain customer service from the businesses of the future? See below.

 


 

 

*The complete beginner’s guide to chatbots — from chatbotsmagazine.com by Matt Schlicht
Everything you need to know.

Excerpt (emphasis DSC):

What are chatbots? Why are they such a big opportunity? How do they work? How can I build one? How can I meet other people interested in chatbots?

These are the questions we’re going to answer for you right now.

What is a chatbot?
A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface. The service could be any number of things, ranging from functional to fun, and it could live in any major chat product (Facebook Messenger, Slack, Telegram, Text Messages, etc.).

A chatbot is a service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.

Examples of chatbots
Weather bot. Get the weather whenever you ask.
Grocery bot. Help me pick out and order groceries for the week.
News bot. Ask it to tell you whenever something interesting happens.
Life advice bot. I’ll tell it my problems and it helps me think of solutions.
Personal finance bot. It helps me manage my money better.
Scheduling bot. Get me a meeting with someone on the Messenger team at Facebook.
A bot that’s your friend. In China there is a bot called Xiaoice, built by Microsoft, that over 20 million people talk to.

 

 

Will “class be in session” soon on tools like Prysm & Bluescape? If so, there will be some serious global interaction, collaboration, & participation here! [Christian]

From DSC:
Below are some questions and thoughts that are going through my mind:

  • Will “class be in session” soon on tools like Prysm & Bluescape?
  • Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
  • Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
  • Prysm is already available on mobile devices, and what we consider a television continues to morph
  • Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
  • Will colleges and universities innovate into such setups?  Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
  • How will training, learning and development groups leverage these tools/technologies?
  • Are there some opportunities for homeschoolers here?

Along these lines, here are some videos/images/links for you:

 

 

PrysmVisualWorkspace-June2016

 

PrysmVisualWorkspace2-June2016

 

BlueScape-2016

 

BlueScape-2015

 

 



 

 

DSC-LyndaDotComOnAppleTV-June2016

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 



 

Also see:

kitchenstories-AppleTV-May2016

 

 

 

 


 

Also see:

 


Prysm Adds Enterprise-Wide Collaboration with Microsoft Applications — from ravepubs.com by Gary Kayye

Excerpt:

To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
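
For readers curious what “using the OneDrive for Business API” looks like in practice, here is a generic, hedged sketch in Python that lists the contents of a drive through Microsoft Graph. The access token is a placeholder, and this is a generic illustration of the API rather than Prysm’s actual integration code.

```python
# Generic sketch: list OneDrive for Business content via Microsoft Graph.
# Assumes you already obtained an OAuth2 access token; the token below is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<oauth2-access-token>"  # placeholder, not a real credential

def list_root_items():
    resp = requests.get(
        f"{GRAPH}/me/drive/root/children",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    for item in resp.json().get("value", []):
        kind = "folder" if "folder" in item else "file"
        print(f"{kind}: {item['name']}")

if __name__ == "__main__":
    list_root_items()
```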

 


 

 

 

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?



WhatIsEchoDot-June2016

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
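
One way to see the range of roles involved is to look at what even a minimal custom skill backend for a device like this entails. The sketch below is a hedged, AWS-Lambda-style handler in Python that returns the standard Alexa response JSON; the skill’s intent name and speech text are invented for illustration.

```python
# Minimal Lambda-style handler for a hypothetical custom Alexa skill.
# It answers one invented intent ("CampusHoursIntent") with plain-text speech.
def lambda_handler(event, context):
    request_type = event.get("request", {}).get("type")

    if request_type == "LaunchRequest":
        speech = "Welcome. Ask me about library hours."
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "CampusHoursIntent":   # invented intent name
            speech = "The library is open from 8 a.m. to midnight today."
        else:
            speech = "Sorry, I don't know that one yet."
    else:
        speech = "Goodbye."

    # Standard Alexa response envelope; the device renders the speech.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```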

 

 

Or how might students learn about the myriad of technologies involved with IBM’s Watson?  What courses are out there today that address this type of thing?  Are more courses in the works that will address this type of thing? In which areas (Computer Science, User Experience Design, Interaction Design, other)?

 

WhatIsIBMWatson-June2016

 

 

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

 

 

 

 

 

 

We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regards to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the state of their fellow mankind in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow mankind right out the window…with massive repercussions to follow.  After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits.  Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House hosts this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, they gained confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement it or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are currently many industries that will likely be replaced with robots in the coming future, and with retail being one of the biggest industries across the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores, as well as direct customers to whatever they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and it will start driving to the exact location. It will also conduct inventory using RFID scanners, and overall, it will make the retail process much more efficient. Check out the video below from the German robotics company MetraLabs, which is behind the retail robot.
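
As a rough illustration of the inventory piece (comparing the RFID tags seen on a shelf pass against what the store system expects), here is a hedged Python sketch; the tag IDs and product names are invented, and this is not MetraLabs’ software.

```python
# Toy RFID inventory check: compare tags read during a shelf pass with the
# expected stock list and report what appears to be missing. Illustrative only.
EXPECTED_STOCK = {
    "TAG-0001": "Running shoes, size 10",
    "TAG-0002": "Running shoes, size 11",
    "TAG-0003": "Water bottle, 1L",
}

def inventory_report(scanned_tags):
    scanned = set(scanned_tags)
    missing = {tag: name for tag, name in EXPECTED_STOCK.items() if tag not in scanned}
    unexpected = sorted(scanned - EXPECTED_STOCK.keys())
    return {"missing": missing, "unexpected": unexpected}

# Tags the robot's RFID reader picked up on this pass (made-up data):
print(inventory_report(["TAG-0001", "TAG-0003", "TAG-9999"]))
```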

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots.  I can understand about using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

 

 

 

 

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks are only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 

 

 

 


Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess and Go and various other games. Indeed, its AI algorithms were developed for something far greater: To “solve intelligence” by creating general purpose AI that can be used for a host of applications and, in essence, learn on their own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.

 

 

 

 

 

 