Some reflections/resources on today’s announcements from Apple

[Image: tv-app-apple-10-27-16]

[Image: tv-app2-apple-10-27-16]

From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the modules or courses we take being sent to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • Our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent (via blockchain)?

Will participants in MOOCs, students in virtual K-12 schools, homeschoolers, and others take advantage of learning from home?

Will the solid ROIs from having thousands of participants each paying a smaller amount (to take your course virtually) enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?

 

[Image: bigscreen_rocket_league]

[Image: The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV]

Other items on today’s announcements:


 

 

[Image: macbookpro-10-27-16]

 

 

All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

 

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino

 

 

How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil

 

 

MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell

 

 

Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader

 

 

Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett

 

 

Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett

 

 

 

 

Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com

 

 

 

 

[Image: whydeeplearningchangingyourlife-sept2016]

 

Why deep learning is suddenly changing your life — from fortune.com by Roger Parloff

Excerpt:

Most obviously, the speech-recognition functions on our smartphones work much better than they used to. When we use a voice command to call our spouses, we reach them now. We aren’t connected to Amtrak or an angry ex.

In fact, we are increasingly interacting with our computers by just talking to them, whether it’s Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Chinese search giant Baidu says customers have tripled their use of its speech interfaces in the past 18 months.

Machine translation and other forms of language processing have also become far more convincing, with Google, Microsoft, Facebook, and Baidu unveiling new tricks every month. Google Translate now renders spoken sentences in one language into spoken sentences in another for 32 pairs of languages, while offering text translations for 103 tongues, including Cebuano, Igbo, and Zulu. Google’s Inbox app offers three ready-made replies for many incoming emails.

But what most people don’t realize is that all these breakthroughs are, in essence, the same breakthrough. They’ve all been made possible by a family of artificial intelligence (AI) techniques popularly known as deep learning, though most scientists still prefer to call them by their original academic designation: deep neural networks.

 

Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”

 

 

[Image: ai-machinelearning-deeplearning-relationship-roger-fall2016]

 

 

Graphically speaking:

 

[Image: ai-machinelearning-deeplearning-relationship-fall2016]

 

 

 

“Our sales teams are using neural nets to recommend which prospects to contact next or what kinds of product offerings to recommend.”

 

 

One way to think of what deep learning does is as “A to B mappings,” says Baidu’s Ng. “You can input an audio clip and output the transcript. That’s speech recognition.” As long as you have data to train the software, the possibilities are endless, he maintains. “You can input email, and the output could be: Is this spam or not?” Input loan applications, he says, and the output might be the likelihood a customer will repay it. Input usage patterns on a fleet of cars, and the output could advise where to send a car next.
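
To make Ng's "A to B mapping" idea concrete, here is a minimal, hypothetical sketch of the spam example he mentions: email text in (A), spam or not out (B). The scikit-learn pipeline and the tiny dataset are assumptions for illustration, not anything from the article.

```python
# A-to-B mapping: email text in (A), spam/not-spam out (B).
# Minimal sketch using scikit-learn; the tiny dataset is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "WIN a FREE cruise - click now!",
    "Meeting moved to 3pm, see agenda attached",
    "Lowest prices on meds, no prescription needed",
    "Can you review my draft before Friday?",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)  # learn the A -> B mapping from examples

print(model.predict(["FREE offer, click here"]))  # likely [1]
print(model.predict(["Lunch tomorrow?"]))         # likely [0]
```

Swap in loan applications or car-usage logs and the same fit/predict shape applies; only the data changes.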

 

 

 

 

From DSC:
The other day I posted some ideas about how artificial intelligence, machine learning, and augmented reality are coming together to offer some wonderful new possibilities for learning (see: “From DSC: Amazing possibilities coming together w/ augmented reality used in conjunction w/ machine learning! For example, consider these ideas.”). Here is one of the graphics from that posting:

 

[Image: horticulturalapp-danielchristian]

These affordances are just now starting to be uncovered, as machines are increasingly able to recognize patterns, things, objects…even people (which calls for a separate posting at some point).

But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis at Talent LMS, who gave me permission to highlight his solid reflections and ideas:

 

[Image: nikosandriotisidea-oct2016]

https://www.talentlms.com/blog/author/nikos-andriotis

 

From DSC:
Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!

Nikos’ innovative reflections also made me think about his ideas in light of how they interact with — and impact — web-based learner profiles, credentialing, badging, and lifelong learning. What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.

How might the ramifications of these innovations impact institutions who are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?

Given:

  • Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
  • Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
  • 65% of today’s students will be doing jobs that don’t even exist yet (per Microsoft & The Future Laboratory in 2016)

 

[Image: futureproofyourself-msfuturelab-2016]

  • The exponential pace of technological change
  • The increasing level of experimentation with blockchain (credentialing)
  • …and more

…what do the futures look like for those colleges and universities that operate only in the face-to-face space and who are not innovating enough?

 

 

 

From DSC:
Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI), such as Augmented Reality (AR).

The educational benefits — as well as the business/profit-related benefits — will certainly be significant!

For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)

 


Some use cases for such an app:


Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!

They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
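
If an app like this were built today, the recognition step might look something like the sketch below: a pre-trained image classifier applied to a camera frame. Everything here (the model choice, the preprocessing, the file name) is a hypothetical stand-in; a real version would use a model trained specifically on labeled plant images.

```python
# Hypothetical sketch: classify a plant photo with a pre-trained model.
# torchvision's ImageNet-trained MobileNet stands in for a real,
# horticulture-specific model trained on labeled plant images.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.mobilenet_v2(weights="DEFAULT")
model.eval()

def identify(photo_path: str) -> int:
    """Return the index of the most likely class for a camera frame."""
    img = Image.open(photo_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: [1, 3, 224, 224]
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1))       # app would map index -> species info

print(identify("leaf_photo.jpg"))          # hypothetical file from the camera
```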

 

[Image via shutterstock.com]

 

[Image: horticulturalapp-danielchristian]

 

In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be a user-controlled opacity setting, allowing the learner to fade the information about the flower, tree, plant, etc. in or out.

 

[Image: horticulturalapp2-danielchristian]

 

Or let’s look at the potential uses of this type of app from some different angles.

Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious-looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of the season, as the app would be able to ascertain the current date and the GPS coordinates of the person’s location as well, taking those criteria into account).
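
Folding the date and GPS reading into the identification is easy to picture in code. A rough, hypothetical sketch (the range limits and season rule are invented for illustration, not botanical data):

```python
# Hypothetical sketch: re-rank classifier guesses using season + location.
# All numbers below are invented for illustration, not botanical data.
from datetime import date

# species -> northernmost latitude where it plausibly grows (made up)
RANGE_LIMITS = {"eastern_poison_ivy": 50.0, "poison_oak": 45.0}

def plausible(species: str, lat: float, today: date) -> bool:
    """Rule out guesses that don't fit the user's place and time of year."""
    if lat > RANGE_LIMITS.get(species, 90.0):
        return False                      # outside this species' range
    # Summer-foliage classes are suspect in winter; a real app would
    # switch to a model trained on bare/winter imagery instead.
    return 4 <= today.month <= 10

def best_guess(ranked_species, lat, today=None):
    """ranked_species: classifier output, most likely first."""
    today = today or date.today()
    for s in ranked_species:
        if plausible(s, lat, today):
            return s
    return ranked_species[0]              # fall back to the raw top guess

print(best_guess(["eastern_poison_ivy", "boxelder"], lat=42.9,
                 today=date(2016, 7, 15)))   # -> eastern_poison_ivy
```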

 

[Image: easternpoisonivy]

 

 

Or consider another use of such an app:

  • A homeowner wants to get rid of a certain kind of weed. She goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
  • Assuming you allowed the app to do so, it could launch a relevant chatbot to answer any questions you might have about applying the weed-killing product.

 

Or consider another use of such an app:

  • A homeowner has a diseased tree and wants to know what to do about it. The machine learning portion of the app could identify the disease and bring up information on how to eradicate it.
  • Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.

 

Or consider other/similar apps along these lines:

  • Skin ML (for detecting any issues re: acne, skin cancers, etc.)
  • Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
  • Fish ML
  • Etc.

[Image: fish-ml-gettyimages, from gettyimages.com]

 

So many new possibilities will soon be coming to education, businesses, homeowners, and many others, to be sure! The combination of machine learning with AR will open many new doors.

 


*  From Wikipedia:

Horticulture involves nine areas of study, which can be grouped into two broad sections: ornamentals and edibles:

  1. Arboriculture is the study of, and the selection, planting, care, and removal of, individual trees, shrubs, vines, and other perennial woody plants.
  2. Turf management includes all aspects of the production and maintenance of turf grass for sports, leisure use or amenity use.
  3. Floriculture includes the production and marketing of floral crops.
  4. Landscape horticulture includes the production, marketing and maintenance of landscape plants.
  5. Olericulture includes the production and marketing of vegetables.
  6. Pomology includes the production and marketing of pome fruits.
  7. Viticulture includes the production and marketing of grapes.
  8. Oenology includes all aspects of wine and winemaking.
  9. Postharvest physiology involves maintaining the quality of and preventing the spoilage of plants and animals.

 

 

 

 

[Image: accenture-futuregrowthaisept2016]

[Image: accenture-futurechannelsgrowthaisept2016]

 

Why Artificial Intelligence is the Future of Growth — from accenture.com

Excerpt:

Fuel For Growth
Compelling data reveal a discouraging truth about growth today. There has been a marked decline in the ability of traditional levers of production—capital investment and labor—to propel economic growth.

Yet, the numbers tell only part of the story. Artificial intelligence (AI) is a new factor of production and has the potential to introduce new sources of growth, changing how work is done and reinforcing the role of people to drive growth in business.

Accenture research on the impact of AI in 12 developed economies reveals that AI could double annual economic growth rates in 2035 by changing the nature of work and creating a new relationship between man and machine. The impact of AI technologies on business is projected to increase labor productivity by up to 40 percent and enable people to make more efficient use of their time.

 

 

Also see:

 

 

 

Amazon is winning the race to the future — from bizjournals.com

Excerpt:

This is the week when artificially intelligent assistants start getting serious.

On Tuesday, Google is expected to announce the final details for Home, its connected speaker with the new Google Assistant built inside.

But first Amazon, which surprised everyone last year by practically inventing the AI-in-a-can platform, will release a new version of the Echo Dot, a cheaper and smaller model of the full-sized Echo that promises to put the company’s Alexa assistant in every room in your house.

The Echo Dot has all the capabilities of the original Echo, but at a much cheaper price, and with a compact form factor that’s designed to be tucked away. Because of its size (it looks like a hockey puck from the future), its sound quality isn’t as good as the Echo, but it can hook up to an external speaker through a standard audio cable or Bluetooth.

 

[Image: amazon-newdot-oct2016]

 

 

100 bot people to watch #BotWatch #1 — from chatbotsmagazine.com

Excerpt:

100 people to watch in the bot space, in no order.

I’ll publish a new list once a month. This one is #1 October 2016.

This is my personal top 100 for people to watch in the bot space.

 

 

Should We Give Chatbots Their Own Personalities? — from re-work.com by Sophie Curtis

Excerpt:

Today, we have machines that assemble cars, make candy bars, defuse bombs, and a myriad of other things. They can dispense our drinks, facilitate our bank deposits, and find the movies we want to watch with a touch of the screen.

Automation allows all kinds of amazing things, but it is all done with virtually no personality. Building a chatbot with the ability to be conversational with emotion is crucial to getting people to gain trust in the technology. And now there are plenty of tools and resources available to rapidly create and launch chatbots with the personality customers want and businesses need.

Jordi Torras is CEO and Founder of Inbenta, a company that specializes in NLP, semantic search and chatbots to improve customer experience. We spoke to him ahead of his presentation at the Virtual Assistant Summit in San Francisco, to learn about the recent explosion of chatbots and virtual assistants, and what we can expect to see in the future.

 

 

 

How I built and launched my first chatbot in hours — from chatbotsmagazine.com by Max Pelzner
From idea to MVB (Minimum Viable Bot), and launched in 24 hours!

 

 

 

Developing a Chatbot? Do Not Make These Mistakes! — from chatbotsmagazine.com by Hira Saeed

 

 

 

This is what an A.I.-powered future looks like — from venturebeat.com by Grayson Brulte

Excerpt:

Today, we are just beginning to scratch the surface of what is possible with artificial intelligence (A.I.) and how individuals will interact with its various forms. Every single aspect of our society — from cars to houses to products to services — will be reimagined and redesigned to incorporate A.I.

A child born in the year 2030 will not comprehend why his or her parents once had to manually turn on the lights in the living room. In the future, the smart home will seamlessly know the needs, wants, and habits of the individuals who live in the home prior to them taking an action.

Before we arrive at this future, it is helpful to take a step back and reimagine how we design cars, houses, products, and services. We are just beginning to see glimpses of this future with the Amazon Echo and Google Home smart voice assistants.

 

 

Artificial intelligence created to fold laundry for you — from geek.com by Matthew Humphries

Excerpt:

So, Seven Dreamers Laboratories, in collaboration with Panasonic and Daiwa House Industry, have created just such a machine. However, folding laundry correctly turns out to be quite a complicated task, and so an artificial intelligence was required to make it a reliable process.

Laundry folding is actually a five-stage process, including:

  • Grabbing
  • Spreading
  • Recognizing
  • Folding
  • Sorting/Storing

The grabbing and spreading seems pretty easy, but then the machine needs to understand what type of clothing it needs to fold. That recognizing stage requires both image recognition and AI. The image recognition classifies the type of clothing, then the AI figures out which processes to use in order to start folding.
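
From DSC:
That classify-then-dispatch pattern is easy to picture in code. A toy, hypothetical sketch (the garment labels and folding routines are invented stand-ins for whatever the real machine uses):

```python
# Toy sketch of the "recognize, then fold" dispatch step.
def classify_garment(image) -> str:
    """Stand-in for the image-recognition model; always says 'shirt' here."""
    return "shirt"

def fold_shirt():    print("sleeves in, fold in thirds")
def fold_towel():    print("fold in half, then in half again")
def fold_trousers(): print("legs together, fold at the knee")

# The AI's job: map the recognized garment type to a folding process.
FOLD_PLANS = {"shirt": fold_shirt, "towel": fold_towel, "trousers": fold_trousers}

def handle(image):
    garment = classify_garment(image)   # recognizing (vision + AI)
    FOLD_PLANS[garment]()               # folding (the chosen process)

handle(image=None)                      # -> "sleeves in, fold in thirds"
```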

 

 

 

 

 

 

2 days of global chatbot experts at Talkabot in 12 minutes — from chatbotsmagazine.com by Alec Lazarescu

Excerpt:

During a delightful “cold spell” in Austin at the end of September, a few hundred chatbot enthusiasts joined together for the first talkabot.ai conference.

As a participant both writing about and building chatbots, I’m excited to share a mix of valuable actionable insights and strategic vision directions picked up from speakers and attendees as well as behind the scenes discussions with the organizers from Howdy.

In a very congenial and collaborative atmosphere, a number of valuable recurring themes stood out from a variety of expert speakers ranging from chatbot builders to tool makers to luminaries from adjacent industries.

 

 

 


Addendum:


 

[Image: alexaprize-2016]

The Alexa Prize (emphasis DSC)

The way humans interact with machines is at an inflection point and conversational artificial intelligence (AI) is at the center of the transformation. Alexa, the voice service that powers Amazon Echo, enables customers to interact with the world around them in a more intuitive way using only their voice.

The Alexa Prize is an annual competition for university students dedicated to accelerating the field of conversational AI. The inaugural competition is focused on creating a socialbot, a new Alexa skill that converses coherently and engagingly with humans on popular topics and news events. Participating teams will advance several areas of conversational AI including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. Through the innovative work of students, Alexa customers will have novel, engaging conversations. And, the immediate feedback from Alexa customers will help students improve their algorithms much faster than previously possible.

Amazon will award the winning team $500,000. Additionally, a prize of $1 million will be awarded to the winning team’s university if their socialbot achieves the grand challenge of conversing coherently and engagingly with humans on popular topics for 20 minutes.

 

 

 

[Image: ngls-2017-conference]

 

From DSC:
I have attended the Next Generation Learning Spaces Conference for the past two years. Both conferences were very solid and they made a significant impact on our campus, as they provided the knowledge, research, data, ideas, contacts, and the catalyst for us to move forward with building a Sandbox Classroom on campus. This new, collaborative space allows us to experiment with different pedagogies as well as technologies. As such, we’ve been able to experiment much more with active learning-based methods of teaching and learning. We’re still in Phase I of this new space, and we’re learning new things all of the time.

For the upcoming conference in February, I will be moderating a New Directions in Learning panel on the use of augmented reality (AR), virtual reality (VR), and mixed reality (MR). Time permitting, I hope that we can also address other promising, emerging technologies that are heading our way such as chatbots, personal assistants, artificial intelligence, the Internet of Things, tvOS, blockchain and more.

The goal of this quickly-moving, engaging session will be to provide a smorgasbord of ideas to generate creative, innovative, and big thinking. We need to think about how these topics, trends, and technologies relate to what our next generation learning environments might look like in the near future — and put these things on our radars if they aren’t already there.

Key takeaways for the panel discussion:

  • Reflections regarding the affordances that new developments in Human Computer Interaction (HCI) — such as AR, VR, and MR — might offer for our learning and our learning spaces (or is our concept of what constitutes a learning space about to significantly expand?)
  • An update on the state of the approaching ed tech landscape
  • Creative, new thinking: What might our next generation learning environments look like in 5-10 years?

I’m looking forward to catching up with friends, meeting new people, and to the solid learning that I know will happen at this conference. I encourage you to check out the conference and register soon to take advantage of the early bird discounts.

 

 

From chatbots to Einstein, artificial intelligence as a service — from infoworld.com by Yves de Montcheuil

Excerpt:

The recent announcement of Salesforce Einstein — dubbed “artificial intelligence for everyone” — sheds new light on the new and pervasive usage of artificial intelligence in every aspect of businesses.

 

Powered by advanced machine learning, deep learning, predictive analytics, natural language processing and smart data discovery, Einstein’s models will be automatically customized for every single customer, and it will learn, self-tune, and get smarter with every interaction and additional piece of data. Most importantly, Einstein’s intelligence will be embedded within the context of business, automatically discovering relevant insights, predicting future behavior, proactively recommending best next actions and even automating tasks.

 


Chatbots, or conversational bots, are the “other” trending topic in the field of artificial intelligence. At the juncture of consumer and business, they provide the ability for an AI-based system to interact with users through a headless interface. It does not matter whether a messaging app is used, or a speech-to-text system, or even another app — the chatbot is front-end agnostic.

Since the user does not have the ability to provide context around the discussion, he just asks questions in natural language, and an AI-driven backend is tasked with figuring out this context and looking for the right answer.
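
From DSC:
In skeleton form, such a front-end-agnostic backend is just a function from plain text to plain text, with intent detection in the middle. A toy sketch (the intents and keyword matching are invented; a production system would use a trained NLU model):

```python
# Toy sketch: a front-end-agnostic chatbot backend.
# A messaging app, speech-to-text, or another app all call answer().
ANSWERS = {
    "hours":   "We're open 9am-5pm, Monday through Friday.",
    "pricing": "Plans start at $10/month.",
}

def detect_intent(text: str) -> str:
    """Toy intent detection; a real system would use an NLU model."""
    t = text.lower()
    if "open" in t or "hours" in t:
        return "hours"
    if "cost" in t or "price" in t:
        return "pricing"
    return "unknown"

def answer(text: str) -> str:
    intent = detect_intent(text)
    return ANSWERS.get(intent, "Sorry, could you rephrase that?")

# Same backend, regardless of front end:
print(answer("When are you open?"))
print(answer("How much does it cost?"))
```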

 

 

IBM is launching a much-awaited ‘Watson’ recruiting tool — from eremedia.com by Todd Raphael

Excerpt:

For many months IBM has gone to recruiting-industry conferences to say that the famous Watson will be at some point used for talent-acquisition, but that it hasn’t happened quite yet.

It’s here.

IBM is first using Watson for its RPO customers, and then rolling it out as a product for the larger community, perhaps next spring. One of my IBM contacts, Recruitment Innovation Global Leader Yates Baker, tells me that the current version is a work in progress like the first iPhone (or perhaps like that Siri-for-recruiting tool).

There are three parts: recruiting, marketing, and sourcing.

 

[Image: watsonrecruitingtool-sept2016]

 

 

Apple’s Siri: A Lot Smarter, but Still Kind of Dumb — from wsj.com by Joanna Stern
With the new MacOS and Apple’s AirPods, Siri’s more powerful than ever, but still not as good as some competitors

Excerpt:

With the new iOS 10, Siri can control third-party apps, like Uber and WhatsApp. With the release of MacOS Sierra on Tuesday, Siri finally lands on the desktop, where it can take care of basic operating system tasks, send emails and more. With WatchOS 3 and the new Apple Watch, Siri is finally faster on the wrist. And with Apple’s Q-tip-looking AirPods arriving in October, Siri can whisper sweet nothings in your inner ear with unprecedented wireless freedom. Think Joaquin Phoenix’s earpiece in the movie “Her.”

The groundwork is laid for an AI assistant to stake a major claim in your life, and finally save you time by doing menial tasks. But the smarter Siri becomes in some places, the dumber it seems in others—specifically compared with Google’s and Amazon’s voice assistants. If I hear “I’m sorry, Joanna, I’m afraid I can’t answer that” one more time…

 

 

 

IBM Research and MIT Collaborate to Advance Frontiers of Artificial Intelligence in Real-World Audio-Visual Comprehension Technologies — from prnewswire.com
Cross-disciplinary research approach will use insights from brain and cognitive science to advance machine understanding

Excerpt:

YORKTOWN HEIGHTS, N.Y., Sept. 20, 2016 /PRNewswire/ — IBM Research (NYSE: IBM) today announced a multi-year collaboration with the Department of Brain & Cognitive Sciences at MIT to advance the scientific field of machine vision, a core aspect of artificial intelligence. The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension’s (BM3C) goal will be to develop cognitive computing systems that emulate the human ability to understand and integrate inputs from multiple sources of audio and visual information into a detailed computer representation of the world that can be used in a variety of computer applications in industries such as healthcare, education, and entertainment.

The BM3C will address technical challenges around both pattern recognition and prediction methods in the field of machine vision that are currently impossible for machines alone to accomplish. For instance, humans watching a short video of a real-world event can easily recognize and produce a verbal description of what happened in the clip as well as assess and predict the likelihood of a variety of subsequent events, but for a machine, this ability is currently impossible.

 

 

Satya Nadella on Microsoft’s new age of intelligence — from fastcompany.com by Harry McCracken
How the software giant aims to tie everything from Cortana to Office to HoloLens to Azure servers into one AI experience.

Excerpt:

“Microsoft was born to do a certain set of things. We’re about empowering people in organizations all over the world to achieve more. In today’s world, we want to use AI to achieve that.”

That’s Microsoft CEO Satya Nadella, crisply explaining the company’s artificial-intelligence vision to me this afternoon shortly after he hosted a keynote at Microsoft’s Ignite conference for IT pros in Atlanta. But even if Microsoft only pursues AI opportunities that it considers to be core to its mission, it has a remarkably broad tapestry to work with. And the examples that were part of the keynote made that clear.

 

 

 

 

IBM Foundation collaborates with AFT and education leaders to use Watson to help teachers — from finance.yahoo.com

Excerpt:

ARMONK, N.Y., Sept. 28, 2016 /PRNewswire/ — Teachers will have access to a new, first-of-its-kind, free tool using IBM’s innovative Watson cognitive technology that has been trained by teachers and designed to strengthen teachers’ instruction and improve student achievement, the IBM Foundation and the American Federation of Teachers announced today.

Hundreds of elementary school teachers across the United States are piloting Teacher Advisor with Watson – an innovative tool by the IBM Foundation that provides teachers with a complete, personalized online resource. Teacher Advisor enables teachers to deepen their knowledge of key math concepts, access high-quality vetted math lessons and acclaimed teaching strategies and gives teachers the unique ability to tailor those lessons to meet their individual classroom needs.

Litow said there are plans to make Teacher Advisor available to all elementary school teachers across the U.S. before the end of the year.

 

 

In this first phase, Teacher Advisor offers hundreds of high-quality vetted lesson plans, instructional resources, and teaching techniques, which are customized to meet the needs of individual teachers and the particular needs of their students.

 

 

Also see:

[Image: teacheradvisor-sept282016]

 

Educators can also access high-quality videos on teaching techniques to master key skills and bring a lesson or teaching strategy to life into their classroom.

 

 

From DSC:
Today’s announcement involved personalization and giving customized directions, and it caused my mind to go in a slightly different direction. (IBM, Google, Microsoft, Apple, Amazon, and others like Smart Sparrow are likely thinking about this type of direction as well. Perhaps they’re already there…I’m not sure.)

But given the advancements in machine learning/cognitive computing (where example applications include optical character recognition (OCR) and computer vision), how much longer will it be before software is able to remotely or locally “see” what a third grader wrote down for a given math problem (via character and symbol recognition) and check over the student’s work? If the answer was incorrect, the algorithms will likely know where the student went wrong. The software will be able to ascertain what the student did wrong and then show them how the problem should be solved (either via hints or by showing the entire solution to the student — per the teacher’s instructions/admin settings). Perhaps, via natural language processing, this feedback could be verbalized as well.
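
Once the handwriting has been recognized into symbols, the checking step itself is simple. Here is a hypothetical sketch of that last mile; it assumes an OCR stage (not shown) has already turned the writing into strings:

```python
# Hypothetical sketch: check a recognized math problem and offer a hint.
# Assumes an OCR stage has already turned the handwriting into strings.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Safely evaluate a simple arithmetic expression like '7 * 8 + 2'."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval").body)

def check(problem: str, student_answer: str) -> str:
    expected = evaluate(problem)
    if evaluate(student_answer) == expected:
        return "Correct!"
    return f"Not quite: try re-working {problem} one step at a time."

print(check("7 * 8 + 2", "54"))   # flags the error, prompts a retry
```

A fuller version would run the same comparison on each recognized intermediate step, which is how the software could know where the student went wrong.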

Further questions/thoughts/reflections then came to my mind:

  • Will we have bots that teachers can use to teach different subjects? (“Watson may even ask the teacher additional questions to refine its response, honing in on what the teacher needs to address certain challenges.”)
  • Will we have bots that students can use to get the basics of a given subject/topic/equation?
  • Will instructional designers — and/or trainers in the corporate world — need to modify their skillsets to develop these types of bots?
  • Will teachers — as well as schools of education in universities and colleges — need to modify their toolboxes and their knowledgebases to take advantage of these sorts of developments?
  • How might the corporate world take advantage of these trends and technologies?
  • Will MOOCs begin to incorporate these sorts of technologies to aid in personalized learning?
  • What sorts of delivery mechanisms could be involved? Will we be tapping into learning-related bots from our living rooms or via our smartphones?

 

[Image: The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV]

 

 

 

Also see:

 

 

 

Microsoft Bets Its Future on a Reprogrammable Computer Chip — from wired.com

Excerpt:

Burger, a computer chip researcher who had joined the company four years earlier, was pitching a new idea to the execs. He called it Project Catapult.

The tech world, Burger explained, was moving into a new orbit. In the future, a few giant Internet companies would operate a few giant Internet services so complex and so different from what came before that these companies would have to build a whole new architecture to run them. They would create not just the software driving these services, but the hardware, including servers and networking gear. Project Catapult would equip all of Microsoft’s servers—millions of them—with specialized chips that the company could reprogram for particular tasks.

Today, the programmable chips that Burger and Lu believed would transform the world—called field programmable gate arrays—are here. FPGAs already underpin Bing, and in the coming weeks, they will drive new search algorithms based on deep neural networks—artificial intelligence modeled on the structure of the human brain—executing this AI several orders of magnitude faster than ordinary chips could. As in, 23 milliseconds instead of four seconds of nothing on your screen. FPGAs also drive Azure, the company’s cloud computing service. And in the coming years, almost every new Microsoft server will include an FPGA. That’s millions of machines across the globe. “This gives us massive capacity and enormous flexibility, and the economics work,” Burger says. “This is now Microsoft’s standard, worldwide architecture.”

It’s a typical tangle of tech acronyms. CPUs. GPUs. TPUs. FPGAs. But it’s the subtext that matters. With cloud computing, companies like Microsoft and Google and Amazon are driving so much of the world’s technology that those alternative chips will drive the wider universe of apps and online services. Lee says that Project Catapult will allow Microsoft to continue expanding the powers of its global supercomputer until the year 2030. After that, he says, the company can move toward quantum computing.

 

If you doubt that we are on an exponential pace of change, you need to check these articles out! [Christian]

[Image: exponentialpaceofchange-danielchristiansep2016]

 

From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:

  • Educate and prepare our youth in K-12
  • Educate and prepare our young men and women studying within higher education
  • Restructure/re-envision our corporate training/L&D departments
  • Equip our freelancers and others to find work
  • Help people in the workforce remain relevant/marketable/properly skilled
  • Encourage and better enable lifelong learning
  • Attempt to keep up w/ this pace of change — legally, ethically, morally, and psychologically

 

PDF file here

 

One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.

 

 

 

From DSC:
The pace of technological development is moving extremely fast; the ethical, legal, and moral questions are trailing behind it (as is normally the case). But this exponential pace continues to bring some questions, concerns, and thoughts to my mind. For example:

  • What kind of future do we want? 
  • Just because we can, should we?
  • Who is going to be able to weigh in on the future direction of some of these developments?
  • If we follow the trajectories of some of these pathways, where will these trajectories take us? For example, if many people are out of work, how are they going to purchase the products and services that the robots are building?

These and other questions arise when you look at the articles below.

This is the 8th part of a series of postings regarding this matter.
The other postings are in the Ethics section.


 

Robot companions are coming into our homes – so how human should they be? — from theconversation.com

Excerpt:

What would your ideal robot be like? One that can change nappies and tell bedtime stories to your child? Perhaps you’d prefer a butler that can polish silver and mix the perfect cocktail? Or maybe you’d prefer a companion that just happened to be a robot? Certainly, some see robots as a hypothetical future replacement for human carers. But a question roboticists are asking is: how human should these future robot companions be?

A companion robot is one that is capable of providing useful assistance in a socially acceptable manner. This means that a robot companion’s first goal is to assist humans. Robot companions are mainly developed to help people with special needs such as older people, autistic children or the disabled. They usually aim to help in a specific environment: a house, a care home or a hospital.

 

 

 

The Next President Will Decide the Fate of Killer Robots — and the Future of War — from wired.com by Heather Roff and P.W. Singer

Excerpt:

The next president will have a range of issues on their plate, from how to deal with growing tensions with China and Russia, to an ongoing war against ISIS. But perhaps the most important decision they will make for overall human history is what to do about autonomous weapons systems (AWS), aka “killer robots.” The new president will literally have no choice. It is not just that the technology is rapidly advancing, but because of a ticking time bomb buried in US policy on the issue.

 

 

Your new manager will be an algorithm — from stevebrownfuturist.com

Excerpt:

It sounds like a line from a science fiction novel, but many of us are already managed by algorithms, at least for part of our days. In the future, most of us will be managed by algorithms and the vast majority of us will collaborate daily with intelligent technologies including robots, autonomous machines and algorithms.

Algorithms for task management
Many workers at UPS are already managed by algorithms. It is an algorithm that tells the humans the optimal way to pack the back of the delivery truck with packages. The algorithm essentially plays a game of “temporal Tetris” with the parcels and packs them to optimize for space and for the planned delivery route — packages that are delivered first are towards the front; packages for the end of the route are placed at the back.
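
From DSC:
The route-ordering half of that “temporal Tetris” reduces to a simple sort: the last stop gets loaded first, so the first delivery ends up nearest the door. A toy sketch (the package data is invented, and real load planning also solves the spatial bin-packing side):

```python
# Toy sketch of route-aware truck loading: load in reverse delivery
# order so the first stop's packages end up at the truck's door.
packages = [
    {"id": "A", "stop": 3},
    {"id": "B", "stop": 1},
    {"id": "C", "stop": 2},
]

loading_order = sorted(packages, key=lambda p: p["stop"], reverse=True)

for depth, pkg in enumerate(loading_order):
    print(f"load #{depth + 1}: package {pkg['id']} (delivery stop {pkg['stop']})")
# Package A (last stop) goes in first, deepest; B (first stop) loads
# last, right at the door. Real systems also optimize the 3-D packing.
```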

 

 

Beware of biases in machine learning: One CTO explains why it happens — from enterprisersproject.com by Minda Zetlin

Excerpt:

The Enterprisers Project (TEP): Machines are genderless, have no race, and are in and of themselves free of bias. How does bias creep in?

Sharp: To understand how bias creeps in you first need to understand the difference between programming in the traditional sense and machine learning. With programming in the traditional sense, a programmer analyses a problem and comes up with an algorithm to solve it (basically an explicit sequence of rules and steps). The algorithm is then coded up, and the computer executes the programmer’s defined rules accordingly.

With machine learning, it’s a bit different. Programmers don’t solve a problem directly by analyzing it and coming up with their rules. Instead, they just give the computer access to an extensive real-world dataset related to the problem they want to solve. The computer then figures out how best to solve the problem by itself.
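
From DSC:
The contrast Sharp describes can be shown in a few lines: a hand-written rule versus a rule inferred from historical examples. A toy sketch (the loan data and threshold are invented):

```python
# Toy contrast: an explicit rule vs. a rule learned from examples.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the human writes the rule.
def approve_loan_rules(income: float) -> bool:
    return income >= 50_000          # threshold chosen by a programmer

# Machine learning: the rule is inferred from history, whatever it contains.
past_incomes = [[20_000], [35_000], [48_000], [60_000], [90_000]]
past_outcomes = [0, 0, 1, 1, 1]      # whatever happened historically

model = DecisionTreeClassifier(random_state=0).fit(past_incomes, past_outcomes)
print(model.predict([[40_000]]))     # the data, not a person, set the cutoff
# If the historical outcomes encode bias, the learned rule inherits it.
```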

 

 

Technology vs. Humanity – The coming clash between man and machine — from futuristgerd.com by Gerd Leonhard

Excerpt (emphasis DSC):

In his latest book ‘Technology vs. Humanity’, futurist Gerd Leonhard once again breaks new ground by bringing together mankind’s urge to upgrade and automate everything (including human biology itself) with our timeless quest for freedom and happiness.

Before it’s too late, we must stop and ask the big questions: How do we embrace technology without becoming it? When it happens—gradually, then suddenly—the machine era will create the greatest watershed in human life on Earth.

Digital transformation has migrated from the mainframe to the desktop to the laptop to the smartphone, wearables and brain-computer interfaces. Before it moves to the implant and the ingestible insert, Gerd Leonhard makes a last-minute clarion call for an honest debate and a more philosophical exchange.

 

 

Ethics: Taming our technologies
The Ethics of Invention: Technology and the Human Future — from nature.com by Sheila Jasanoff

Excerpt:

Technological innovation in fields from genetic engineering to cyberwarfare is accelerating at a breakneck pace, but ethical deliberation over its implications has lagged behind. Thus argues Sheila Jasanoff — who works at the nexus of science, law and policy — in The Ethics of Invention, her fresh investigation. Not only are our deliberative institutions inadequate to the task of oversight, she contends, but we fail to recognize the full ethical dimensions of technology policy. She prescribes a fundamental reboot.

Ethics in innovation has been given short shrift, Jasanoff says, owing in part to technological determinism, a semi-conscious belief that innovation is intrinsically good and that the frontiers of technology should be pushed as far as possible. This view has been bolstered by the fact that many technological advances have yielded financial profit in the short term, even if, like the ozone-depleting chlorofluorocarbons once used as refrigerants, they have proved problematic or ruinous in the longer term.

 

 

 

Robotics is coming faster than you think — from forbes.com by Kevin O’Marah

Excerpt:

This week, The Wall Street Journal featured a well-researched article on China’s push to shift its factory culture away from labor and toward robots. Reasons include a rise in labor costs, the flattening and impending decrease in worker population and falling costs of advanced robotics technology.

Left unsaid was whether this is part of a wider acceleration in the digital takeover of work worldwide. It is.

 

 

Adidas will open an automated, robot-staffed factory next year — from businessinsider.com

 

 

 

Beyond Siri, the next-generation AI assistants are smarter specialists — from fastcompany.com by Jared Newman
SRI wants to produce chatbots with deep knowledge of specific topics like banking and auto repair.

 

 

 

Machine learning
Of prediction and policy — from economist.com
Governments have much to gain from applying algorithms to public policy, but controversies loom

Excerpt:

FOR frazzled teachers struggling to decide what to watch on an evening off (DC insert: a rare event indeed), help is at hand. An online streaming service’s software predicts what they might enjoy, based on the past choices of similar people. When those same teachers try to work out which children are most at risk of dropping out of school, they get no such aid. But, as Sendhil Mullainathan of Harvard University notes, these types of problem are alike. They require predictions based, implicitly or explicitly, on lots of data. Many areas of policy, he suggests, could do with a dose of machine learning.

Machine-learning systems excel at prediction. A common approach is to train a system by showing it a vast quantity of data on, say, students and their achievements. The software chews through the examples and learns which characteristics are most helpful in predicting whether a student will drop out. Once trained, it can study a different group and accurately pick those at risk. By helping to allocate scarce public funds more accurately, machine learning could save governments significant sums. According to Stephen Goldsmith, a professor at Harvard and a former mayor of Indianapolis, it could also transform almost every sector of public policy.

But the case for code is not always clear-cut. Many American judges are given “risk assessments”, generated by software, which predict the likelihood of a person committing another crime. These are used in bail, parole and (most controversially) sentencing decisions. But this year ProPublica, an investigative-journalism group, concluded that in Broward County, Florida, an algorithm wrongly labelled black people as future criminals nearly twice as often as whites. (Northpointe, the algorithm provider, disputes the finding.)
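
From DSC:
Mechanically, the dropout-prediction workflow described above is a few lines of standard machine learning; the hard parts are the data and the fairness questions raised in that last paragraph. A hypothetical sketch (the features and numbers are invented):

```python
# Hypothetical sketch of the dropout-risk workflow the excerpt describes:
# train on past students, then score a new group. Data here is invented.
from sklearn.ensemble import RandomForestClassifier

# features: [absences, GPA, suspensions]
past_students = [[2, 3.6, 0], [25, 1.9, 2], [8, 2.8, 0], [30, 1.5, 3]]
dropped_out   = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_students, dropped_out)

new_students = [[22, 2.0, 1], [3, 3.4, 0]]
risk = model.predict_proba(new_students)[:, 1]   # P(drop out)
print(risk)  # flag high scores for early intervention; audit for bias too
```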

 

 

‘Software is eating the world’: How robots, drones and artificial intelligence will change everything — from business.financialpost.com

 

 

Thermostats can now get infected with ransomware, because 2016 — from thenextweb.com by Matthew Hughes

 

 

Who will own the robots? — from technologyreview.com by David Rotman
We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?

 

 

Police Drones Multiply Across the Globe — from dronelife.com by Jason Reagan

 

 

 

LinkedIn lawsuit may signal a losing battle against ‘botnets’, say experts — from bizjournals.com by Annie Gaus

 

 

 

China’s Factories Count on Robots as Workforce Shrinks — from wsj.com by Robbie Whelan and Esther Fung
Rising wages, cultural changes push automation drive; demand for 150,000 robots projected for 2018

 

 

 

[Image: viv-ai-june2016]

[Image: viv-ai-2-june2016]

 

 

Researchers Are Growing Living Biohybrid Robots That Move Like Animals — from slate.com by Victoria Webster

 

 

 

Addendums on 9/14/16:

 

 

IdioT InTo IoT — from a2apple.com by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, Suzee Han, Michael Bartimer

Excerpt (emphasis DSC):

But an interesting twist to Negroponte’s paradigm is emerging. As we embed chips in physical devices to make them “smart,” bits and atoms are co-mingling in compelling ways. Collectively called the Internet of Things (IoT), connected devices are appearing in our homes, on the highway, in manufacturing plants, and on our wrists. Estimates vary widely but IDC has predicted that the IoT market will surpass $1.7 trillion by 2020.

Here again, Amazon’s arc is instructive. Its “Echo” smart speaker, powered by a digital assistant, “Alexa”, has sold over three million units in a little over a year. Echo enables users to play music with voice commands, as well as manage other integrated home systems, including lights, fans, door locks, and thermostats.

[Image: EmergingIOTPlatforms-2016]

 

 

 

High-Profile Cyber Attacks on Physical Assets

 

 

 

 

The first truly awesome chatbot is a talking T. Rex — from fastcodesign.com by John Brownlee
National Geographic uses a virtual Tyrannosaur to teach kids about dinosaurs—and succeeds where other chatbots fail.

 

 

Excerpt:

As some have declared chatbots to be the “next webpage,” brands have scrambled to develop their own talkative bots, letting you do everything from order a pizza to rewrite your resume. The truth is, though, that a lot of these chatbots are actually quite stupid, and tend to have a hard time understanding natural human language. Sooner or later, users get frustrated bashing their heads up against the wall of a dim-witted bot’s AI.

So how do you design around a chatbot’s walnut-sized brain? If you’re National Geographic Kids UK, you set your chatbot to the task of pretending to be a Tyrannosaurus rex, a Cretaceous-era apex predator that really had a walnut-sized brain (at least comparatively speaking).

 

She’s called Tina the T. rex, and by making it fun to learn about dinosaurs, she suggests that education — rather than advertising or shopping — might be the real calling of chatbots.

 

 

 

Also relevant/see:

[Image: Honeybot-August2016]

 

Why every college campus needs a chatbot — from venturebeat.com by John Brandon

Excerpts:

Dropping a child off at college is a stressful experience. I should know — I dropped off one last week and another today. It’s confusing because everything is so new, your child (who is actually a young adult, how did that happen?) is anxious, and you usually have to settle up on your finances.

This situation happens to be ideal for a chatbot, because the administrative staff is way too busy to handle questions in person or by phone. There might be someone directing you in the parking lot, but not everyone standing around in the student center knows how to submit FAFSA data.

One of the main reasons for thinking of this is that I would have used one myself today. It’s a situation where you want immediate, quick information without having to explain all of the background information. You just need the campus map or the schedule for the day — that’s it. You don’t want any extra frills.
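
From DSC:
A move-in-day campus bot of the kind Brandon describes could start as nothing more than a small FAQ matcher. A toy, hypothetical sketch (the questions, answers, and URL are invented):

```python
# Hypothetical sketch: a tiny move-in-day campus FAQ bot.
# A real deployment would sit behind a messaging platform and use NLU.
FAQ = {
    "map":      "Campus maps: university.example.edu/map (hypothetical URL).",
    "fafsa":    "Submit FAFSA data at the student center, window 4, until 6pm.",
    "schedule": "Today: move-in 9-12, orientation 1pm, parents' session 3pm.",
}

def campus_bot(question: str) -> str:
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I'm not sure; staff at the student center can help."

print(campus_bot("Where do I submit FAFSA data?"))
print(campus_bot("What's the schedule for the day?"))
```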

 

 

From DSC:
My question is:

Will Instructional Designers, Technical Communicators, e-Learning Designers, Trainers (and those in other positions as well) have to know how to build chatbots in the future? Our job descriptions could be changing soon. Or will this kind of thing require more programming-related skills? Perhaps more firms like the one below could impact that situation…

 

 

[Image: Chatfuel-Aug2016]

 

 

 

 