Imagination in the Augmented-Reality Age — from theatlantic.com by Georgia Perry
Pokémon Go may have reached the zenith of its popularity, but the game has far-reaching implications for the future of play.

Excerpt:

For young people today, however, it’s a different story. “They hardly play. If they do play it’s some TV script. Very prescribed,” Levin said. “Even if they have friends over, it’s often playing video games.”

That was before Pokémon Go, though.

The augmented-reality (AR) game—which, since its release on July 6, has attracted 21 million users and become one of the most successful mobile apps ever—has been praised for promoting exercise, facilitating social interactions, sparking new interest in local landmarks, and more. Education writers and experts have weighed in on its implications for teaching kids everything from social skills to geography, to the point that such coverage has become cliché. And while it seems clear at this point that the game is a fad that has peaked—it’s been losing active players for over a week—one of the game’s biggest triumphs has, arguably, been the hope it’s generated about the future of play. While electronic games have traditionally caused kids to retreat to couches, here is one that did precisely the opposite.

 

 

What Pokémon Go is, however, is one of the first iterations of what will undeniably be many more AR games. If done right, some say the technology Go introduced to the world could bring back the kind of outdoor, creative, and social forms of play that used to be the mainstay of childhood. Augmented reality, it stands to reason, could revitalize the role of imagination in kids’ learning and development.

 

 

 

From DSC:
How much longer before the functionalities that are found in tools like Bluescape & Mural are available via tvOS-based devices? Entrepreneurs and VCs out there, take note. Given:

  • the growth of freelancing and people working from home and/or out on the road
  • the need for people to collaborate over a distance
  • the growth of online learning
  • the growth of active/collaborative learning spaces in K-12 and higher ed
  • the need for lifelong learning

…this could be a lucrative market. Also, it would be meaningful work…knowing that you are helping people learn and earn.

 


 

Mural-Aug-2016

 

 

Bluescape-Aug2016

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

Take a step inside the classroom of tomorrow — from techradar.com by Nicholas Fearn
Making learning fun

 

 

Excerpt:

But the classroom of tomorrow will look very different. The latest advancements in technology and innovation are paving the way for an educational space that’s interactive, engaging and fun.

The conventions of learning are changing. It’s becoming normal for youngsters to use games like Minecraft to develop skills such as team working and problem solving, and for teachers to turn to artificial intelligence to get a better understanding of how their pupils are progressing in lessons.

Virtual reality is also introducing new possibilities in the classroom. Gone are the days of imagining what an Ancient Egyptian tomb might look like – now you can just strap on a headset and transport yourself there in a heartbeat.

The potential for using VR to teach history, geography and other subjects is incredible when you really think about it – and it’s not the only tech that’s going to shake things up.

Artificial intelligence is already doing groundbreaking things in areas like robotics, computer science, neuroscience and linguistics, but now it’s entering the world of education too.

London-based edtech firm Digital Assess has been working on an AI app that has the potential to revolutionise the way youngsters learn.

With the backing of the UK Government, the company has been trialing its web-based application Formative Assess in schools in England.

Using semantic indexing and natural language processing in a similar way to social networking sites, an on-screen avatar – which can be a rubber duck or robot – quizzes students on their knowledge and provides them with individual feedback on their work.
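
To make the excerpt’s “semantic indexing and natural language processing” a bit more concrete, here is a minimal sketch of how a free-text answer could be scored against a model answer with latent semantic indexing. It is only an illustration of the general technique, not Digital Assess’s actual Formative Assess code; the reference texts, model answer, and feedback thresholds are invented for the example.

```python
# Illustrative only -- NOT Digital Assess's Formative Assess implementation.
# A toy latent-semantic-indexing scorer: compare a student's free-text answer
# to a model answer and return canned feedback based on the similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_texts = [  # hypothetical course material
    "Plants make food through photosynthesis using sunlight, water and carbon dioxide.",
    "Chlorophyll in the leaves absorbs light energy to drive photosynthesis.",
    "Photosynthesis releases oxygen as a by-product.",
]
model_answer = "Photosynthesis uses sunlight, water and carbon dioxide to make food and release oxygen."

def score_answer(student_answer: str) -> str:
    corpus = reference_texts + [model_answer, student_answer]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)  # tiny "semantic index"
    similarity = cosine_similarity([lsa[-1]], [lsa[-2]])[0][0]  # student answer vs. model answer
    if similarity > 0.8:
        return f"Great answer ({similarity:.2f}): you covered the key ideas."
    if similarity > 0.4:
        return f"Partly there ({similarity:.2f}): mention what the plant needs and what it releases."
    return f"Let's revisit this ({similarity:.2f}): re-read the section on photosynthesis."

print(score_answer("Plants use light, water and CO2 to make their own food."))
```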

 

 

 

From DSC:
After seeing the items below, I couldn’t help but wonder…what are the learning-related implications/applications of chatbots?

  • For K-12?
  • For higher ed?
  • For corporate training/L&D?

 


 

Nat Geo Kids’ T-Rex chatbot aims to ‘get into the mindset of readers’ — from digiday.com by Lucinda Southern

Excerpt (emphasis DSC):

It seems even extinction doesn’t stop you from being on Facebook Messenger.

National Geographic Kids is the latest publisher to try out chatbots on the platform. Tina the T-Rex, naturally, is using Messenger to teach kids about dinosaurs over the summer break, despite a critical lack of opposable thumbs.

Despite being 65 million years old, Tina is pretty limited in her bot capabilities; she can answer from a pre-programmed script, devised by tech company Rehab Studio and tweaked by Chandler, on things like dinosaur diet and way of life.
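
For readers curious what “answering from a pre-programmed script” can look like at its simplest, here is a rough sketch of a keyword-matched script. The keywords and replies below are invented; the real bot is undoubtedly more nuanced than this.

```python
# Illustrative sketch only -- not Rehab Studio's actual Tina the T-Rex bot.
# The simplest form of "answering from a pre-programmed script": match keywords
# in the incoming message against a canned script of replies.
SCRIPT = {  # hypothetical script entries
    "eat": "I was a carnivore -- mostly other dinosaurs. Please don't take it personally.",
    "big": "About 12 metres long and 4 metres tall at the hips. Hats were never an option.",
    "extinct": "My kind disappeared around 65 million years ago. Rough stretch for dinosaurs.",
}
FALLBACK = "Roar! I only know about dinosaur diets, sizes and extinction. Try asking about those."

def tina_reply(message: str) -> str:
    text = message.lower()
    for keyword, reply in SCRIPT.items():
        if keyword in text:
            return reply
    return FALLBACK

print(tina_reply("What did you eat?"))       # keyword "eat" -> diet reply
print(tina_reply("Do you like Facebook?"))   # no keyword match -> fallback
```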

 

 

Also see:

 

HolographicStorytellingJWT-June2016

HolographicStorytellingJWT-2-June2016

 

Holographic storytelling — from jwtintelligence.com by Jade Perry

Excerpt (emphasis DSC):

The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.

‘New Dimensions in Testimony’ is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.

Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from Conscience Display, viewers were able to ask Gutter’s holographic image questions that triggered relevant responses.
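
To give a rough sense of how a question can “trigger relevant responses” from a bank of pre-recorded clips, here is a minimal sketch that matches an incoming question to the most similar question in the bank and returns that clip. The question bank and clip IDs are invented, the matching is simple word overlap, and the actual system relies on far more sophisticated natural language processing.

```python
# Rough illustration only -- not the actual New Dimensions in Testimony system.
# Core idea: map an incoming question to the closest pre-recorded question,
# then play the associated answer clip.
RECORDED_CLIPS = {  # hypothetical question -> clip-id pairs
    "where were you born": "clip_childhood_01",
    "what happened to your family": "clip_family_03",
    "how did you survive the camps": "clip_survival_07",
    "what message do you have for young people": "clip_message_12",
}

def tokens(text: str) -> set:
    cleaned = text.lower().replace("?", " ").replace(",", " ")
    return set(cleaned.split())

def pick_clip(question: str) -> str:
    asked = tokens(question)
    def overlap(candidate: str) -> float:
        cand = tokens(candidate)
        return len(asked & cand) / len(asked | cand)  # Jaccard similarity
    best = max(RECORDED_CLIPS, key=overlap)
    return RECORDED_CLIPS[best] if overlap(best) > 0 else "clip_fallback"

print(pick_clip("Where were you born, Pinchas?"))  # -> clip_childhood_01
```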

 

 

 

How might these enhancements to Siri and tvOS 10 impact education/training/learning-related offerings & applications? [Christian]

From DSC:
I read the article mentioned below.  It made me wonder how 3 of the 4 main highlights that Fred mentioned (that are coming to Siri with tvOS 10) might impact education/training/learning-related applications and offerings made possible via tvOS & Apple TV:

  1. Live broadcasts
  2. Topic-based searches
  3. The ability to search YouTube via Siri

The article prompted me to wonder:

  • Will educators and trainers be able to offer live lectures and training (globally) that can be recorded and later searched via Siri? 
  • What if second-screen devices could help learners collaborate and participate in active learning while watching what’s being presented on the main display/“TV”?
  • What if learning done this way could be recorded on one’s web-based profile, a profile built on blockchain technologies and maintained by appropriate/proven organizations of learning? (A profile that’s optionally made available to services from Microsoft/LinkedIn.com/Lynda.com and/or to a service based upon IBM’s Watson, and/or to some other online-based marketplace/exchange for matching open jobs to potential employees.)
  • Or what if you could earn a badge or prove a competency via this manner?

Hmmm…things could get very interesting…and very powerful.

More choice. More control. Over one’s entire lifetime.

Heutagogy on steroids.

Micro-learning.

Perhaps this is a piece of the future for MOOCs…

 

MoreChoiceMoreControl-DSC

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

StreamsOfContent-DSC

 

 


 

Apple TV gets new Siri features in tvOS 10 — from iphonefaq.org by Fred Straker

Excerpt:

The forthcoming update to Apple TV continues to bring fresh surprises for owners of Apple’s set top box. Many improvements are coming to tvOS 10, including single-sign-on support and an upgrade to Siri’s capabilities. Siri has already opened new doors thanks to the bundled Siri Remote, which simplifies many functions on the Apple TV interface. Four main highlights are coming to Siri with tvOS 10, which is expected to launch this fall.

 


 

Addendum on 7/17/16:

CBS News Launches New Apple TV App Designed Exclusively for tvOS — from macrumors.com

Excerpt:

CBS today announced the launch of an all-new Apple TV app that will center around the network’s always-on, 24-hour “CBSN” streaming network and has been designed exclusively for tvOS. In addition to the live stream of CBSN, the app curates news stories and video playlists for each user based on previously watched videos.

The new app will also take advantage of the 4th generation Apple TV’s deep Siri integration, allowing users to tell Apple’s personal assistant that they want to “Watch CBS News” to immediately start a full-screen broadcast of CBSN. While the stream is playing, users can interact with other parts of the app to browse related videos, bookmark some to watch later, and begin subscribing to specific playlists and topics.

 
Stanford’s virtual reality lab cultivates empathy for the homeless — from kqed.org by Rachael Myrow

 

Excerpt:

The burgeoning field of Virtual Reality — or VR as it is commonly known — is a vehicle for telling stories through 360-degree visuals and sound that put you right in the middle of the action, be it at a crowded Syrian refugee camp, or inside the body of an 85-year-old with a bad hip and cataracts.  Because of VR’s immersive properties, some people describe the medium as “the ultimate empathy machine.” But can it make people care about something as fraught and multi-faceted as homelessness?

A study in progress at Stanford’s Virtual Human Interaction Lab explores that question, and I strapped on an Oculus Rift headset (one of the most popular devices people currently use to experience VR) to look for an answer.

A new way of understanding homelessness
The study, called Empathy at Scale, puts participants in a variety of scenes designed to help them imagine the experience of being homeless themselves.

 

What the bot revolution could mean for online learning — from huffingtonpost.com by Daily Bits Of

Excerpt:

We’re embracing the bot revolution
With these limitations in mind, we embrace the bot movement. In short, having our bite-sized courses delivered via messaging platforms will open up a lot of new benefits for our users.

  1. The courses will become social.
  2. It will become easier to consume a course via a channel that fits best for the course.
  3. The courses will become more interactive.
  4. Bots will remove some of the friction.

 

 

The future of online learning will happen via messaging services.

Also relevant here:

 

WatsonTrainPreSchoolers-June2016

 

 

From DSC:
By posting such items, I’m not advocating that we remove teachers, professors, trainers, coaches, etc. from the education/training equation. Rather, I am advocating that we use technology as a set of tools for educating and training people — and that we use those technologies to help people of all ages grow, and reinvent themselves when necessary. Such tools should be used to help the overworked teachers, professors, trainers, etc. of the world deliver excellent, effective e-learning experiences for our students/employees.

Will “class be in session” soon on tools like Prysm & Bluescape? If so, there will be some serious global interaction, collaboration, & participation here! [Christian]

From DSC:
Below are some questions and thoughts that are going through my mind:

  • Will “class be in session” soon on tools like Prysm & Bluescape?
  • Will this type of setup be the next platform that we’ll use to meet our need to be lifelong learners? That is, will what we know of today as Learning Management Systems (LMS) and Content Management Systems (CMS) morph into this type of setup?
  • Via platforms/operating systems like tvOS, will our connected TVs turn into much more collaborative devices, allowing us to contribute content with learners from all over the globe?
  • Prysm is already available on mobile devices, and what we consider a television continues to morph.
  • Will second and third screens be used in such setups? What functionality will be assigned to the main/larger screens? To the mobile devices?
  • Will colleges and universities move toward such setups?  Or will organizations like LinkedIn.com/Lynda.com lead in this space? Or will it be a bit of both?
  • How will training, learning and development groups leverage these tools/technologies?
  • Are there some opportunities for homeschoolers here?

Along these lines, here are some videos/images/links for you:

 

 

PrysmVisualWorkspace-June2016

 

PrysmVisualWorkspace2-June2016

 

BlueScape-2016

 

BlueScape-2015

DSC-LyndaDotComOnAppleTV-June2016

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 



 

Also see:

kitchenstories-AppleTV-May2016

Also see:

 


Prysm Adds Enterprise-Wide Collaboration with Microsoft Applications — from ravepubs.com by Gary Kayye

Excerpt:

To enhance the Prysm Visual Workplace, Prysm today announced an integration with Microsoft OneDrive for Business and Office 365. Using the OneDrive for Business API from Microsoft, Prysm has made it easy for customers to connect Prysm to their existing OneDrive for Business environments to make it a seamless experience for end users to access, search for, and sync with content from OneDrive for Business. Within a Prysm Visual Workplace project, users may now access, work within and download content from Office 365 using Prysm’s built-in web capabilities.
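
As a generic illustration of what talking to OneDrive for Business looks like from code (not Prysm’s actual integration), here is a minimal sketch that lists and searches a user’s drive through Microsoft Graph. The access token is a placeholder that would normally come from an Azure AD OAuth 2.0 flow.

```python
# Generic illustration of calling OneDrive for Business via Microsoft Graph.
# NOT Prysm's integration code; the token below is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token obtained via an Azure AD OAuth 2.0 flow>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def list_root_items():
    """List files and folders in the signed-in user's OneDrive root."""
    resp = requests.get(f"{GRAPH}/me/drive/root/children", headers=HEADERS)
    resp.raise_for_status()
    return [item["name"] for item in resp.json().get("value", [])]

def search_items(query: str):
    """Search the drive for items matching a keyword."""
    resp = requests.get(f"{GRAPH}/me/drive/root/search(q='{query}')", headers=HEADERS)
    resp.raise_for_status()
    return [(item["name"], item.get("webUrl")) for item in resp.json().get("value", [])]

print(list_root_items())
print(search_items("project plan"))
```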

Questions from DSC:

  • Which jobs/positions are being impacted by new forms of Human Computer Interaction (HCI)?
  • What new jobs/positions will be created by these new forms of HCI?
  • Will it be necessary for instructional technologists, instructional designers, teachers, professors, trainers, coaches, learning space designers, and others to pulse check this landscape?  Will that be enough? 
  • Or will such individuals need to dive much deeper than that in order to build the necessary skillsets, understandings, and knowledgebases to meet the new/changing expectations for their job positions?
  • How many will say, “No thanks, that’s not for me” — causing organizations to create new positions that do dive deeply in this area?
  • Will colleges and universities build and offer more courses involving HCI?
  • Will Career Services Departments get up to speed in order to help students carve out careers involving new forms of HCI?
  • How will languages and language translation be impacted by voice recognition software?
  • Will new devices be introduced to our classrooms in the future?
  • In the corporate space, how will training departments handle these new needs and opportunities?  How will learning & development groups be impacted? How will they respond in order to help the workforce get/be prepared to take advantage of these sorts of technologies? What does it mean for these staffs personally? Do they need to invest in learning more about these advancements?

As an example of what I’m trying to get at here, who all might be involved with an effort like Echo Dot?  What types of positions created it? Who all could benefit from it?  What other platforms could these technologies be integrated into?  Besides the home, where else might we find these types of devices?



WhatIsEchoDot-June2016

Echo Dot is a hands-free, voice-controlled device that uses the same far-field voice recognition as Amazon Echo. Dot has a small built-in speaker—it can also connect to your speakers over Bluetooth or with the included audio cable. Dot connects to the Alexa Voice Service to play music, provide information, news, sports scores, weather, and more—instantly.

Echo Dot can hear you from across the room, even while music is playing. When you want to use Echo Dot, just say the wake word “Alexa” and Dot responds instantly. If you have more than one Echo or Echo Dot, you can set a different wake word for each—you can pick “Amazon”, “Alexa” or “Echo” as the wake word.
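
The wake-word behavior described above follows a simple pattern: ignore everything until an utterance begins with the chosen wake word, then hand the rest of the phrase to a cloud service. The sketch below illustrates only that pattern; the “microphone” is a scripted list of phrases and answer() is a hypothetical stand-in, not the Alexa Voice Service.

```python
# Illustration of the wake-word pattern described above -- not Amazon's code.
# The "microphone" is a scripted list, and answer() is a hypothetical stand-in
# for the cloud voice service behind the device.
WAKE_WORDS = {"alexa", "amazon", "echo"}  # the selectable wake words mentioned above

def answer(request: str) -> str:
    """Hypothetical stand-in for a cloud voice service."""
    canned = {"what's the weather": "Sunny and 72 degrees.",
              "play some music": "Playing your favourites."}
    return canned.get(request, f"Sorry, I don't know how to '{request}' yet.")

def run(utterances, wake_word: str = "alexa") -> None:
    assert wake_word in WAKE_WORDS
    for heard in utterances:                    # a real device listens continuously
        text = heard.lower().strip()
        if not text.startswith(wake_word):      # ignore speech without the wake word
            continue
        request = text[len(wake_word):].strip(" ,")
        print(answer(request))

run(["turn on the lights",                      # ignored: no wake word
     "Alexa, what's the weather",               # handled
     "Alexa, play some music"])                 # handled
```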

 

 

Or how might students learn about the myriad technologies involved with IBM’s Watson?  What courses are out there today that address them? Are more such courses in the works? In which areas (Computer Science, User Experience Design, Interaction Design, others)?

 

WhatIsIBMWatson-June2016

 

 

Lots of questions…but few answers at this point. Still, given the increasing pace of technological change, it’s important that we think about this type of thing and become more responsive, nimble, and adaptive in our organizations and in our careers.

 
We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.

–Edward Cornish

 


From DSC:
This is the fifth posting in a series that highlights the need for us to consider the ethical implications of the technologies that are currently being developed.  What kind of future do we want to have?  How can we create dreams, not nightmares?

In regard to robotics, algorithms, and business, I’m hopeful that the C-suites out there will keep the well-being of their fellow human beings in mind when making decisions. Because if all we care about is profits, the C-suites out there will gladly pursue lowering costs, firing people, and throwing their fellow human beings right out the window…with massive repercussions to follow.  After all, we are the shareholders…let’s not shoot ourselves in the foot. Let’s aim for something higher than profits.  Businesses should have a higher calling/purpose. The futures of millions of families are at stake here. Let’s consider how we want to use robotics, algorithms, AI, etc. — for our benefit, not our downfall.

Other postings:
Part I | Part II | Part III | Part IV

 


 

ethics-mary-meeker-june2016

From page 212 of
Mary Meeker’s annual report re: Internet Trends 2016

 

 

The White House is prepping for an AI-powered future — from wired.com by April Glaser

Excerpt (emphasis DSC):

Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn’t waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.

“The public should have an accurate mental model of what we mean when we say artificial intelligence,” says Ryan Calo, who teaches law at University of Washington. Calo spoke last week at the first of four workshops the White House hosts this summer to examine how to address an increasingly AI-powered world.

“One thing we know for sure is that AI is making policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter,” said Ed Felten, the deputy US chief of science and technology policy leading the White House’s summer of AI research. “Some of these issues will become more challenging over time as the technology progresses, so we’ll need to keep upping our game.”

 

 

Meet ‘Ross,’ the newly hired legal robot — from washingtonpost.com by Karen Turner

Excerpt:

One of the country’s biggest law firms has become the first to publicly announce that it has “hired” a robot lawyer to assist with bankruptcy cases. The robot, called ROSS, has been marketed as “the world’s first artificially intelligent attorney.”

ROSS has joined the ranks of law firm BakerHostetler, which employs about 50 human lawyers just in its bankruptcy practice. The AI machine, powered by IBM’s Watson technology, will serve as a legal researcher for the firm. It will be responsible for sifting through thousands of legal documents to bolster the firm’s cases. These legal researcher jobs are typically filled by fresh-out-of-school lawyers early on in their careers.

 

 

Confidential health care data divulged to Google’s DeepMind for new app — from futurism.com by Sarah Marquart

Excerpts (emphasis DSC):

Google DeepMind’s new app Streams hopes to use patient data to monitor kidney disease patients. In the process, they gained confidential data on more than 1.6 million patients, and people aren’t happy.

This sounds great, but the concern lies in exactly what kind of data Google has access to. There are no separate statistics available for people with kidney conditions, so the company was given access to all data including HIV test results, details about abortions, and drug overdoses.

In response to concerns about privacy, The Royal Free Trust said the data will remain encrypted so Google staff should not be able to identify anyone.

 

 

Two questions for managers of learning machines — from sloanreview.mit.edu by Theodore Kinni

Excerpt:

The first, which Dhar takes up in a new article on TechCrunch, is how to “design intelligent learning machines that minimize undesirable behavior.” Pointing to two high-profile juvenile delinquents, Microsoft’s Tay and Google’s Lexus, he reminds us that it’s very hard to control AI machines in complex settings.

The second question, which Dhar explores in an article for HBR.org, is when and when not to allow AI machines to make decisions.

 

 

All stakeholders must engage in learning analytics debate — from campustechnology.com by David Raths

Excerpt:

An Ethics Guide for Analytics?
During the Future Trends Forum session [with Bryan Alexander and George Siemens], Susan Adams, an instructional designer and faculty development specialist at Oregon Health and Science University, asked Siemens if he knew of any good ethics guides to how universities use analytics.

Siemens responded that the best guide he has seen so far was developed by the Open University in the United Kingdom. “They have a guide about how it will be used in the learning process, driven from the lens of learning rather than data availability,” he said.

“Starting with ethics is important,” he continued. “We should recognize that if openness around algorithms and learning analytics practices is important to us, we should be starting to make that a conversation with vendors. I know of some LMS vendors where you actually buy back your data. Your students generate it, and when you want to analyze it, you have to buy it back. So we should really be asking if it is open. If so, we can correct inefficiencies. If an algorithm is closed, we don’t know how the dials are being spun behind the scenes. If we have openness around pedagogical practices and algorithms used to sort and influence our students, we at least can change them.”

 

 

From DSC:
Though I’m generally a fan of Virtual Reality (VR) and Augmented Reality (AR), we need to be careful how we implement them, or things will turn out as depicted in this piece from The Verge. We’ll need filters or some other means of opting in and out of what we want to see.

 

AR-Hell-May2016

 

 

What does ethics have to do with robots? Listen to RoboPsych Podcast discussion with roboticist/lawyer Kate Darling https://t.co/WXnKOy8UO2
— RoboPsych (@RoboPsychCom) April 25, 2016

 

 

 

Retail inventory robots could replace the need for store employees — from interestingengineering.com by Trevor English

Excerpt:

There are currently many industries in which workers will likely be replaced by robots in the coming years, and with retail being one of the biggest industries in the world, it is no wonder that robots will slowly begin taking humans’ jobs. A robot named Tory will perform inventory tasks throughout stores, as well as direct customers to the items they are looking for. Essentially, a customer will type a product into the robot’s interactive touch screen, and it will start driving to the exact location. It will also conduct inventory using RFID scanners; overall, it will make the retail process much more efficient. Check out the video below from the German robotics company Metre Labs, which is behind the retail robot.

 

RobotsRetail-May2016

 

From DSC:
Do we really want to do this?  Some say the future will be great when the robots, algorithms, AI, etc. are doing everything for us…while we can just relax. But I believe work serves a purpose…gives us a purpose.  What are the ramifications of a society where people are no longer working?  Or is that a stupid, far-fetched question and a completely unrealistic thought?

I’m just pondering what the ramifications might be of replacing the majority of human employees with robots.  I can understand using robotics to assist humans, but when we talk about replacing humans, we had better look at the big picture. If not, we may be taking the angst behind the Occupy Wall Street movement from years ago and multiplying it by the thousands…perhaps millions.

Automakers, consumers both must approach connected cars cautiously — from nydailynews.com by Kyle Campbell
Several automakers plan to have autonomous cars ready for the public by 2030, a development that could pose significant safety and security concerns.

Excerpt:

We’re living in the connected age. Phones can connect wirelessly to computers, watches, televisions and anything else with access to Wi-Fi or Bluetooth and money can change hands with a few taps of a screen. Digitalization allows data to flow quicker and more freely than ever before, but it also puts the personal information we entrust it with (financial information, geographic locations and other private details) at a far greater risk of ending up in the wrong hands.

Balancing the seamless convenience customers desire with the security they need is a high-wire act of the highest order, and it’s one that automakers have to master as quickly and as thoroughly as possible.

Because of this, connected cars will potentially (and probably) become targets for hackers, thieves and possibly even terrorists looking to take advantage of the fledgling technology. With a wave of connected cars (220 million by 2020, according to some estimates) ready to flood U.S. roadways, it’s on both manufacturers and consumers to be vigilant in preventing the worst-case scenarios from playing out.

 

 

 

Also, check out the 7 techs being discussed at this year’s Gigaom Change Conference:

 

GigaOMChange-2016

 

 

Scientists are just as confused about the ethics of big-data research as you — wired.com by Sarah Zhang

Excerpt:

And that shows just how untested the ethics of this new field of research is. Unlike medical research, which has been shaped by decades of clinical trials, the risks—and rewards—of analyzing big, semi-public databases are just beginning to become clear.

And the patchwork of review boards responsible for overseeing those risks are only slowly inching into the 21st century. Under the Common Rule in the US, federally funded research has to go through ethical review. Rather than one unified system though, every single university has its own institutional review board, or IRB. Most IRB members are researchers at the university, most often in the biomedical sciences. Few are professional ethicists.

 
Addendums on 6/3 and 6/4/16:

  • Apple supplier Foxconn replaces 60,000 humans with robots in China — from marketwatch.com
    Excerpt:
    The first wave of robots taking over human jobs is upon us. Apple Inc. supplier Foxconn Technology Co. has replaced 60,000 human workers with robots in a single factory, according to a report in the South China Morning Post, initially published over the weekend. This is part of a massive reduction in headcount across the entire Kunshan region in China’s Jiangsu province, in which many Taiwanese manufacturers base their Chinese operations.
  • There are now 260,000 robots working in U.S. factories — from marketwatch.com by Jennifer Booton (back from Feb 2016)
    Excerpt:
    There are now more than 260,000 robots working in U.S. factories. Orders and shipments for robots in North America set new records in 2015, according to industry trade group Robotic Industries Association. A total of 31,464 robots, valued at a combined $1.8 billion, were ordered from North American companies last year, marking a 14% increase in units and an 11% increase in value year-over-year.
  • Judgment Day: Google is making a ‘kill-switch’ for AI — from futurism.com
    Excerpt:
    Taking Safety Measures
    DeepMind, Google’s artificial intelligence company, catapulted itself into fame when its AlphaGo AI beat the world champion of Go, Lee Sedol. However, DeepMind is working to do a lot more than beat humans at chess and Go and various other games. Indeed, its AI algorithms were developed for something far greater: To “solve intelligence” by creating general purpose AI that can be used for a host of applications and, in essence, learn on their own. This, of course, raises some concerns. Namely, what do we do if the AI breaks…if it gets a virus…if it goes rogue? In a paper written by researchers from DeepMind, in cooperation with Oxford University’s Future of Humanity Institute, scientists note that AI systems are “unlikely to behave optimally all the time,” and that a human operator may find it necessary to “press a big red button” to prevent such a system from causing harm. In other words, we need a “kill-switch.”
  • Is the world ready for synthetic life? Scientists plan to create whole genomes — from singularityhub.com by Shelly Fan
    Excerpt:
    “You can’t possibly begin to do something like this if you don’t have a value system in place that allows you to map concepts of ethics, beauty, and aesthetics onto our own existence,” says Endy. “Given that human genome synthesis is a technology that can completely redefine the core of what now joins all of humanity together as a species, we argue that discussions of making such capacities real…should not take place without open and advance consideration of whether it is morally right to proceed,” he said.
  • This is the robot that will shepherd and keep livestock healthy — from thenextweb.com
    Excerpt:
    The Australian Centre for Field Robotics (ACFR) is no stranger to developing innovative ways of modernizing agriculture. It has previously presented technologies for robots that can measure crop yields and collect data about the quality and variability of orchards, but its latest project is far more ambitious: it’s building a machine that can autonomously run livestock farms. While the ACFR has been working on this technology since 2014, the robot – previously known as ‘Shrimp’ – is set to start a two-year trial next month. Testing will take place at several farms in New South Wales, Australia.

From DSC:
The following graphic from “The Future of Work and Learning 1: The Professional Ecosystem” by Jane Hart is a wonderful picture of a learning ecosystem:

profecosystem-Jane-Hart-May2016

 

Note that such an ecosystem involves people, tools, processes and more — and is constantly changing. As Jane comments:

But the point to make very clear is that a PES is not a prescribed entity – so everyone’s PES will be different. It is also not a fixed entity – organisational elements will change as the individual changes jobs, and personal elements will change as the individual adds (or removes) external people, content and tools in order to maintain an ecosystem that best fits their needs.

Jane also mentions the concept of flows of new ideas and resources. I call these streams of content, and we need both to contribute items to these streams and to draw things from them.

 

StreamsOfContent-DSC

 

So while Jane and I are on the same page on the vast majority  of these concepts (and I would add Harold Jarche to this picture as well, whom Jane mentions with his Personal Knowledge Mastery (PKM) process), Jane broadens the scope of what I normally refer to as a learning ecosystem when she mentions, “it isn’t just about learning, but just as much about doing a job.”

Anyway, thanks Jane for your posting here.

 

 

 

Why can’t the “One Day University” come directly into your living room — 24×7? [Christian]

  • An idea/question from DSC:
    Looking at the article below, I wonder…“Why can’t the ‘One Day University‘ come directly into your living room — 24×7?”

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

This is why I’m so excited about “The Living [Class] Room” vision. It is through that vision that people of all ages — and from all over the world — will be able to constantly learn, grow, and reinvent themselves (if need be) throughout their lifetimes. They’ll be able to access and share content, communicate and discuss/debate with one another, form communities of practice, go through digital learning playlists (like Lynda.com’s Learning Paths) and more, all from devices that represent the convergence of the television, the telephone, and the computer (and likely converging with the types of devices that are only now coming into view, such as Microsoft’s HoloLens).

 

LearningPaths-LyndaDotCom-April2016

 

You won’t just be limited to going back to college for a day — you’ll be able to do that 24×7 for as many days of the year as you want to.

Then when some sophisticated technologies are integrated into this type of platform — such as artificial intelligence, cloud-based learner profiles, algorithms, and the ability to set up exchanges for learning materials — we’ll get some things that will blow our minds in the not-too-distant future! Heutagogy on steroids!

Want to go back to college? You can, for a day. — from washingtonpost.com by Valerie Strauss

Excerpt:

Have you ever thought about how nice it would be if you could go back to college, just for the sake of learning something new, in a field you don’t know much about, with no tests, homework or studying to worry about? And you won’t need to take the SAT or the ACT to be accepted? You can, at least for a day, with something called One Day University, the brainchild of a man named Steve Schragis, who about a decade ago brought his daughter to Bard College as a freshman and thought that he wanted to stay.

One Day University now financially partners with dozens of newspapers — including The Washington Post — and a few other organizations to bring lectures to people around the country. The vast majority of the attendees are over the age of 50 and interested in continuing education, and One Day University offers them only those professors identified by college students as fascinating. As Schragis says, it doesn’t matter if you are famous; you have to be a great teacher. For example, Schragis says that since Bill Gates has never shown himself to be one, he can’t teach at One Day University.

We bring together these professors, usually four at a time, to cities across the country to create “The Perfect Day of College.” Of course we leave out the homework, exams, and studying! Best if there’s real variety: both male and female profs, four different schools, four different subjects, four different styles, etc. There’s no one single way to be a great professor. We like to show multiple ways to our students.

Most popular classes are history, psychology, music, politics, and film. Least favorite are math and science.

 

 


See also:


 

 

OneDayUniversity-1-April2016

 

OneDayUniversity-2-April2016

 

 

 


Addendum:


 

 

lyndaDotcom-onAppleTV-April2016

 

We know the shelf-life of skills is getting shorter and shorter. So whether it’s to brush up on new skills or to stay on top of evolving ones, Lynda.com can help you stay ahead of the latest technologies.

 

 

Apple TV: Apple Unveils New “Live Tune-In” Feature With Latest tvOS Update — from idigitaltimes.com by Michael Gardiner

Excerpt:

If you happen to have a fourth generation Apple TV, then there’s some good news: Apple’s “Live Tune-In” feature is officially live. This feature allows an Apple TV user to ask Siri to automatically transport them to the livestream of a tvOS app from the home screen, an idea that could go a long way in making the Apple TV’s app navigation less painful.

 

 

From DSC:
What if these live streams were live lectures?

 

 

 

Everything announced at Facebook’s F8 conference.

 

Facebook-10YearRoadmap-AsOfApril2016

 

Facebook-AI-April2016

 

 

Everything Facebook announced at F8 2016 — from thenextweb.com by Natt Garun

Excerpt:

Two days of Facebook’s F8 Conference have come and gone, so here’s a look back at all the things you may have missed from the event. To learn more about each topic, click the links below for full stories.

 

 

 

The 5 Biggest Things Facebook Announced This Week — from time.com by Victor Luckerson
Messaging bots, live video in drones and 360-degree cameras

Excerpt:

In a wide-ranging keynote April 12, CEO Mark Zuckerberg laid out the company’s 10-year plan to “Give everyone the power to share anything with anyone.” To do so, Facebook plans to move far beyond its original role as a social network. The firm aims to launch new virtual reality projects, beam Internet across the world using drones and unleash complex artificial-intelligence bots that can fulfill our every digital need.

Before all that can happen, Facebook has to deal with the here and now of improving its current products. On that front, the company made several announcements that will reshape the way people and brands use Facebook and its constellation of apps this year.

Here’s a breakdown of Facebook’s biggest F8 announcements.

 

 

 

How Facebook’s Social VR Could Be The Killer App For Virtual Reality — from fastcompany.com
It’s going to take time, but Facebook is committed to developing realistic and satisfying social experiences in VR.

Excerpt:

When Facebook bought Oculus VR in 2014 for $2 billion, many observers wondered what the world’s largest social networking company wanted with a virtual reality company whose then-unreleased system was pretty much all about single-user experiences. Today at F8, Facebook’s annual developers conference in San Francisco, the company showed off some of the most fleshed-out examples of how it sees VR as a rich social tool. During his F8 keynote address, CTO Mike Schroepfer talked at length about what Facebook explicitly calls “social VR.”

 

Facebook Shows Us What It Means to Be ‘Social’ in Virtual Reality (Video) — from recode.net by Kurt Wagner

Excerpt:

One of the key knocks on virtual reality, the gamer-heavy industry Facebook is betting big on, is that wearing a headset intended to block out the real world in favor of a virtual one isn’t a very social activity. Facebook, an inherently social company, thinks it can change that.

At its F8 developer conference on Wednesday Facebook demoed what it calls “social VR,” which is exactly what it sounds like: Connecting two or more real people in a virtual world.

 

 

Oculus Demos VR Selfie Sticks and 360 Photo Spheres — from vrscout.com by Jonathan Nafarrete

Excerpt:

During the second day keynote of Facebook’s F8 Developer Conference, Oculus showed off an entirely new way to get social in VR.

On stage, Facebook’s CTO Mike Schroepfer showed how 360-degree photos can instantly be shared with a friend in VR, with 360 photos appearing as handheld spheres. You can virtually grab the floating sphere and smash it against your face, and you will then be instantly teleported into the content of the spherical photo.

Oculus Social VR Full Demo – Facebook F8 Conference 2016

 

 

 

Has Facebook achieved what AOL could have a generation ago? — from medium.com by Gary Vaynerchuk

Excerpt:

[On 4/12/16], Facebook opened up Instant Articles to all publishers. If you don’t know, Instant Articles are Facebook’s new way to natively load articles within the app using an adapted RSS feed. These native articles, which have a lightning bolt in the top right corner, load in half a second — 10x faster than if a user were to click out to a website. From what I’ve seen so far, they really do load instantaneously and have a great layout and user experience. And if you’re paying attention, you’ll understand that this is their third push for native media consumption: first photos, then videos, and now written content.

However, as of [4/13/16], Instant Articles become available to anybody with a Facebook page and a blog. This is a key opportunity for small blogs and publications to get ahead of the game and really understand how best to use the new product.

Has Facebook been able to achieve what AOL could have a generation ago? By that I mean: Has Facebook become a layer on top of the Internet itself?

 

FacebookInstantArticles-April2016

 

 

 

From DSC:
Let’s take some of the same powerful concepts (as mentioned below) into the living room; then let’s talk about learning-related applications.


 

Google alum launches MightyTV for cable cord-cutters — from bizjournals.com by Anthony Noto

Excerpts (emphasis DSC):

MightyTV, which has raised more than $2 million in venture funding to date, launched today with a former Google exec at the helm. The startup’s technology incorporates machine learning with computer-generated recommendations in what is being touted as a “major step up” from other static list-making apps.

In this age of Roku and Apple TV, viewers can choose what to watch via the apps they’ve downloaded. MightyTV curates those programs — shows, movies and YouTube videos — into one app without constantly switching between Amazon, HBO, Netflix or Hulu.

Among the features included on MightyTV are:

*  A Tinder-like interface that allows users to swipe through content, allowing the service to learn what you’d like to watch
*  An organizer tool that lists content via price range
*  A discovery tool to see what friends are watching
*  Allows for group viewings and binge watching

 

From DSC:
What if your Apple TV could provide these sorts of functionalities for services and applications that are meant for K-12 education, higher education, and/or corporate training and development?

Instead of Amazon, HBO, Netflix or Hulu — what if the interface would present you with a series of learning modules, MOOCs, and/or courses from colleges and universities that had strong programs in the area(s) that you wanted to learn about?

That is, what if a tvOS-based system could learn more about you and what you are trying to learn about? It could draw upon IBM Watson-like functionality to provide you with a constantly morphing, up-to-date recommendation list of modules that you should look at.  Think microlearning. Reinventing oneself. Responding to the exponential pace of change. Pursuing one’s passions. More choice/more control. Lifelong learning. Staying relevant. Surviving.
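
As a thought experiment, here is a toy sketch of how a swipe-based interface could build up a learner profile and re-rank a catalog of learning modules. The catalog, tags, and scoring are all invented; this is an assumption about how such a recommender could work, not how MightyTV, tvOS, or any Watson-backed service actually works.

```python
# Toy sketch of "swipe to teach it what you like," applied to learning modules.
# Invented catalog and scoring -- purely illustrative.
from collections import defaultdict

CATALOG = [  # hypothetical learning modules with topic tags
    ("Intro to Data Visualization", {"data", "design"}),
    ("Blockchain Fundamentals", {"blockchain", "finance"}),
    ("UX for Connected TVs", {"design", "tv"}),
    ("Machine Learning Crash Course", {"data", "ai"}),
]

topic_weights = defaultdict(float)  # the learner profile, built up from swipes

def swipe(module_title: str, liked: bool) -> None:
    """Swipe right (liked=True) or left (liked=False) on a module."""
    tags = next(tags for title, tags in CATALOG if title == module_title)
    for tag in tags:
        topic_weights[tag] += 1.0 if liked else -1.0

def recommend(n: int = 2):
    """Rank the catalog by how well each module's tags match the profile."""
    scored = [(sum(topic_weights[t] for t in tags), title) for title, tags in CATALOG]
    return [title for score, title in sorted(scored, reverse=True)[:n]]

swipe("Intro to Data Visualization", liked=True)
swipe("Blockchain Fundamentals", liked=False)
print(recommend())  # data/design modules now rank ahead of the blockchain one
```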

…all from a convenient, accessible room in your home…your living room.

A cloud-based marketplace…matching learners with providers.

Now tie those concepts in with where LinkedIn.com and Lynda.com are going and how people will get jobs in the future.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

BlueJeans Unveils Enterprise Video Cloud as Businesses Hang Up on Audio-Only Communications
Global Enterprises Adopt Video as a First-Line Communications Strategy

Excerpt (emphasis DSC):

April 12, 2016 — Mountain View, CA—BlueJeans Network, the global leader in cloud-based video communication services, today unveiled the Enterprise Video Cloud, a comprehensive platform built for today’s globally distributed, modern workforce with video communications at the core. New global research shows that 85% of employees are already using video in the workplace and 72% believe that video will transform the way they communicate at work.

“There is a transformation happening among business today – face-to-face video is quickly rising as the preferred communications medium, offering new opportunities for deeper personal relations and outreach, as well as for improved internal and external collaboration,” said Krish Ramakrishnan, CEO of BlueJeans. “Once people experience the power of video, they ‘hang-up’ on traditional conference calling. We are seeing this happen with the emergence of video cultures that power the most innovative cultures—from Facebook and Netflix to Viacom and Del Monte.”

 

From DSC:
I wonder if we’ll see video communication vendors such as BlueJeans or The Video Call Center merge with vendors like Bluescape, Mezzanine, or T1V and their collaboration tools. If so, some serious collaboration could happen…again, right from within your living room!

 

 