From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want.  Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen because they can happen. (That is, just because something can be done doesn’t mean it should be done.)

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid pace at which we’re moving.


 

Redefining the relationship of man and machine: here is my narrated chapter from ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


DigitalEthics-GerdLeonhard-Oct2015

 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 

RobotRevolution-Nov2015


First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘leber congenital amaurosis’.


Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 

GartnerPredicts-Oct2015

 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”
    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”


Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 


Addendums:

 

robots-saying-no

 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

Here’s another practical application of IBM Watson technology in healthcare: Boston Children’s Hospital is going to tap the supercomputing platform to improve diagnosis and treatment of rare diseases.

Researchers at the Harvard-affiliated hospital’s Manton Center for Orphan Disease Research will train Watson in nephrology by reading medical literature and scanning data on mutations for a kidney disease known as steroid-resistant nephrotic syndrome. They will then give Watson retrospective genomic data from patients in an effort to teach the computer to assist physicians in interpreting genome sequences as they look for abnormalities.

 

 

Wearables will see mass adoption via educated patients and digital health stores — from medcitynews.com by Shahid Shah

Excerpt:

In a fees-for-services (volume-driven) world, selling healthcare products and services to individual institutions is certainly time-consuming but reasonably straightforward. In an outcomes-driven (fees for value) world driven by shared risks and shared rewards, selling healthcare solutions across multiple disciplines, multiple stakeholders, and multiple institutions is much harder and even more time-consuming. That’s because there’s no easy buyer to identify. Population health is all the rage but our current $3 trillion + healthcare industry was never devised nor incentivized to work together as a team for long-term patient or population benefits (it’s reimbursed mainly for episodic care).

Our country’s healthcare industry is more about sick care and episodic transactions rather than longitudinal care. But, since we are moving to population and outcomes driven care where the patient is more responsible for their own care management and payment, it would seem patient education and digital health tools are more important than ever. So, perhaps we need to get together and innovate around how we’re going to present next-generation solutions from across multiple innovators and showcase them to patients and their caregivers.

 

 

Watch users claim Apple wearable improves health — from fiercemobilehealthcare.com by Judy Mottl

Excerpt:

Nearly two-thirds of Apple Watch users are exercising more often and for longer periods of time, and 72 percent claim the wearable is improving their health and fitness levels, according to a new report.

 

 

IBM forms new health data analytics unit, extends Apple partnership — from zdnet.com by Charlie Osborne
With the help of Apple, acquisitions and new partnerships, Big Blue plans to tap into the vast amount of data offered by health-tracking devices.

 

 

After medical school, IBM’s Watson gets ready for Apple health apps — from zdnet.com by David Shamah
The Watson Health Cloud – set to become an important component of Apple’s health platform – is targeting medical care, IBM says.

Excerpt:

“Watson went to medical school, and now it’s set to graduate,” said Dr Aya Soffer, director of big data and cognitive analytics at the Israel facility. “We’ve had it study the medical literature, and now it’s ready to apply its natural language processing skills to real-life applications.”

Just in time, too. Last week, IBM announced the launch of Watson Health Cloud to “provide a secure and open platform for physicians, researchers, insurers, and companies focused on health and wellness solutions”.

The platform will be used by health companies Johnson and Johnson and Medtronic, as well as by Apple. The Mac maker has its own platform and hopes to become a top health company itself. It has established a new business unit, called Watson Health Cloud, to administer the big data apps that will use Watson’s intelligent analysis and understanding of medical data.

 

 

Addendum on 11/13/15:

  • NYU Medical Students Learning How to Analyze Big Data — from imedicalapps.com by Brian Wu
    Excerpt:
    Big data is changing the way information is shared in the medical field. Current technologies such as IBM’s Watson are working to merge data from multiple sources to make it easier to access as well as share. Even ten years ago most doctors would not have known anything about big data, and it was definitely not taught in medical school. Today big data is becoming a crucial part of the healthcare field including the diagnosis and treatment of patients. It is important that doctors not only understand the importance of data but know how to properly access and interpret the data. The NYU School of Medicine requires that its first and second-year students complete a health care by the numbers project. Students are given access to a giant database with more than 5 million anonymous records, which includes information on every hospital patient in the state for the preceding two years.
 

Introducing Coursera for Apple TV: Bringing Online Learning to Your Living Room — from blog.coursera.org

 

Apple TV

 

Excerpt (emphasis DSC):

We’re thrilled to announce that Coursera content will now be available on Apple TV.

Since our beginning, one of our primary goals has been to make learning more accessible for everyone. Our mobile platform brought an on-demand learning experience to people’s busy, on-the-go lifestyles, and now, we’re extending availability to your home. Regardless of where in the world you are located, you’ll now be able to learn from top university professors and renowned experts without the expense of travel or tuition.

TV availability isn’t only a first for Coursera—it marks Apple TV’s first ever introduction of online learning to its platform. Everything you can do online at Coursera, you’ll now be able to do from the comfort of your own living room: browse our entire catalogue of courses, peruse new topics, and watch videos from some of the top academic and industry experts.

 

From DSC:
Coursera takes us one step closer to a very powerful learning platform — one that in the future will provide a great deal of intelligence behind the scenes.  It’s likely that we will be using personalized, adaptable, digital learning playlists while enjoying some serious levels of interactivity…while also making use of web-based learner profiles (the data from which will either be hosted at places like LinkedIn.com or will be fed into employers’ and universities’ competency-based databases).  The application development for tvOS should pick up greatly, especially if the collaboration capabilities are there.

For example, can you imagine marrying the functionalities that Bluescape provides with the reach, flexibility, convenience, and affordances that are unfolding with the new Apple TV?

Truly, some mind-blowing possibilities are developing.  In the not too distant future, lifelong learning won’t ever be the same again (not to mention project-related work).

This is why I’m big on the development and use of a team of specialists — as an organization may have a harder time competing in the future without one.

 

 

BlueScape-2015

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


Project Loon is set to circle the planet with Internet balloons in 2016 — from sciencealert.com by Peter Dockrill
Around the world in 300 balloons.

Excerpt:

Google’s Project Loon is a massively ambitious plan to provide Internet connectivity to areas of the planet that don’t already enjoy good access to the web. How? Via a huge fleet of helium balloons that hang in the stratosphere 20 kilometres above the surface, assembling to form a high-tech communication network that beams the web to the surface.

And the undertaking is only getting more ambitious, with the company announcing this week that it plans to circle the planet with a ring of Project Loon balloons that will provide a perpetual data service for those living underneath its path.

 

ProjectLoon-ScienceAlert-Nov2015

 

 

Also see:

ProjectLoon-Google-Nov2015

Excerpt:

Many of us think of the Internet as a global community. But two-thirds of the world’s population does not yet have Internet access. Project Loon is a network of balloons traveling on the edge of space, designed to connect people in rural and remote areas, help fill coverage gaps, and bring people back online after disasters.

 

From DSC:
I just reviewed Gerd Leonhard’s recent item — Redefining the relationship of man and machine: here is my narrated chapter from ‘The Future of Business’ book.  Gerd asserts that we need to put together a “Digital Ethics Treaty” as soon as possible. (I agree.) So my thinking is influenced by some of his work in that area as I write this.  Gerd raises some solid questions and points about technology and what we should be doing with it (as opposed to what we can do with it).

Anyway, Project Loon could be a great thing for the world — i.e., getting more people connected to the Internet so that they can collaborate with, teach, and learn from others throughout the globe.  But there can be unintended consequences to our “technological progress” sometimes. Who controls what? What are they doing with these technologies and can we trust them? What’s someone’s motivation? In this case, it seems like a good thing…but the questions are worth discussing/asking. Especially as we’re moving forward at breakneck speeds!

 

 

Imagine what learning could look like w/ the same concepts found in Skreens!


From DSC:
Imagine what learning could look like with the same concepts found in the Skreens Kickstarter campaign, where you can use your mobile device to direct what you are seeing and interacting with on the larger screen.  Hmmm… very interesting indeed! With applications not only in the home (and on the road), but also in the active classroom, the boardroom, and the training room.


See Skreens.com & Learning from the Living [Class] Room


 

DanielChristian-AVariationOnTheSkreensTheme-9-29-15

 

 

Skreens-Sept2015Kickstarter

 

Skreens2-Sept2015Kickstarter

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

From DSC:
Some of the phrases and concepts that come to my mind:

  • tvOS-based apps
  • Virtual field trips while chatting or videoconferencing with fellow learners about that experience
  • Virtual tutoring
  • Global learning for K-12, higher ed, the corporate world
  • Web-based collaborations and communications
  • Ubiquitous learning
  • Transmedia
  • Analytics / data mining / web-based learner profiles
  • Communities of practice
  • Lifelong learning
  • 24×7 access
  • Reinvent
  • Staying relevant
  • More choice. More control.
  • Participation.
  • MOOCs — or what they will continue to morph into
  • Second screens
  • Mobile learning — and the ability to quickly tie into your learning networks
  • Ability to contact teachers, professors, trainers, specialists, librarians, tutors and more
  • Language translation
  • Informal and formal learning, blended learning, active learning, self-directed learning
  • The continued convergence of the telephone, the television, and the computer
  • Cloud-based apps for learning
  • Flipping the classroom
  • Homeschooling
  • Streams of content
  • …and more!


Addendum:

Check out this picture from Meet the winners of #RobotLaunch2015

Packed house at WilmerHale for the Robot Launch 2015 judging – although 2/3rds of the participants were attending and pitching remotely via video and web conferencing.

 

Now we’re talking! One step closer! “The future of TV is apps.” — per Apple’s CEO, Tim Cook

OneStepCloser-DanielChristian-Sept2015

 

From DSC:
We’ll also be seeing the integration of the areas listed below with this type of “TV”-based OS/platform:

  • Artificial Intelligence (AI)
  • Data mining and analytics
  • Learning recommendation engines
  • Digital learning playlists
  • New forms of Human Computer Interfaces (HCI)
  • Intelligent tutoring
  • Social learning / networks
  • Videoconferencing with numerous other learners from across the globe
  • Virtual tutoring, virtual field trips, and virtual schools
  • Online learning to the Nth degree
  • Web-based learner profiles
  • Multimedia (including animations, simulations, and more)
  • Advanced forms of digital storytelling
  • and, most assuredly, more choice & more control.

Competency-based education and much lower cost alternatives could also be possible with this type of learning environment. The key will be to watch — or better yet, to design and create — what becomes of what we’re currently calling the television, and what new affordances/services the “TV” begins to offer us.

 

MoreChoiceMoreControl-DSC


From Apple’s website:

Apple Brings Innovation Back to Television with The All-New Apple TV
The App Store, Siri Remote & tvOS are Coming to Your Living Room

Excerpt:

SAN FRANCISCO — September 9, 2015 — Apple® today announced the all-new Apple TV®, bringing a revolutionary experience to the living room based on apps built for the television. Apps on Apple TV let you choose what to watch and when you watch it. The new Apple TV’s remote features Siri®, so you can search with your voice for TV shows and movies across multiple content providers simultaneously.

The all-new Apple TV is built from the ground up with a new generation of high-performance hardware and introduces an intuitive and fun user interface using the Siri Remote™. Apple TV runs the all-new tvOS™ operating system, based on Apple’s iOS, enabling millions of iOS developers to create innovative new apps and games specifically for Apple TV and deliver them directly to users through the new Apple TV App Store™.

tvOS is the new operating system for Apple TV, and the tvOS SDK provides tools and APIs for developers to create amazing experiences for the living room the same way they created a global app phenomenon for iPhone® and iPad®. The new, more powerful Apple TV features the Apple-designed A8 chip for even better performance so developers can build engaging games and custom content apps for the TV. tvOS supports key iOS technologies including Metal™, for detailed graphics, complex visual effects and Game Center, to play and share games with friends.
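
For readers curious about what developing for this platform actually involves, below is a minimal sketch (in Swift, using UIKit, which tvOS shares with iOS) of a single screen for a hypothetical learning-playlist app of the kind imagined elsewhere in this post. The class name and lesson titles are made-up illustrations; this is not Apple sample code, just a rough starting point under those assumptions.

```swift
import UIKit

// Minimal sketch of a tvOS screen for a hypothetical "learning playlist" app.
// tvOS shares UIKit with iOS, which is why existing iOS developers can
// target the new Apple TV with familiar code.
class PlaylistViewController: UIViewController {

    // Hypothetical lesson titles; a real app would fetch these from a server.
    private let lessons = ["Intro to Algorithms",
                           "Virtual Field Trip: Rome",
                           "Statistics Refresher"]

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black

        // Show the next item in the playlist front and center on the TV.
        let label = UILabel(frame: view.bounds)
        label.text = "Up next: \(lessons.first ?? "No lessons yet")"
        label.textColor = .white
        label.textAlignment = .center
        view.addSubview(label)
    }
}
```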

 

Addendum on 9/11/15:

 

HBX Intros HBX Live Virtual Classroom — from campustechnology.com by Rhea Kelly

Excerpt:

Harvard Business School‘s HBX digital learning initiative today launched a virtual classroom designed to reproduce the intimacy and synchronous interaction of the case method in a digital environment. With HBX Live, students from around the world can log in concurrently to participate in an interactive discussion in real time, guided by an HBS professor.

Built to mimic the amphitheater-style seating of an HBS classroom, the HBX Live Studio features a high-resolution video wall that can display up to 60 participants. Additional students can audit sessions via an observer model. An array of stationary and roaming cameras capture the action, allowing viewers to see both the professor and fellow students.

 

HBX Live

HBX Live’s virtual amphitheater
(PRNewsFoto/Harvard Business School)

 

Also see HBX Live in Action

I think this type of setup could also be integrated with a face-to-face classroom (given the right facilities). The HBX Live concept fits right into a piece of my vision entitled, “Learning from the Living [Class] Room.”

Several words/phrases come to mind:

  • Convenience. I don’t have to travel to another city, state, country. That type of convenience and flexibility is the basis of why many learners take online-based courses in the first place.
  • Global — learning from people of different cultures, races, backgrounds, life experiences.
  • The opportunities are there to increase one’s cultural awareness.
  • HBX Live is innovative; in fact, Harvard is upping its innovation game yet again — showing a firm grasp of the fact that the landscape of higher education is changing and that institutions of traditional higher education need to adapt.
  • Harvard is willing to experiment and to identify new ways to leverage technologies — taking advantage of the affordances that various technologies offer.

BTW, note how the use of teams is a requirement here.

 

HBXLive-8-26-2015

 

 

Also see:

Harvard Business School really has created the classroom of the future — from fortune.com by John A. Byrne

Excerpt:

Anand, meantime, faces the images of 60 students portrayed on a curved screen in front of him, a high-resolution video wall composed of more than 6.2 million pixels that mimics the amphitheater-style seating of a classic HBS tiered classroom.

 

From DSC:
I’d recommend that historians, geographers, geologists, archeologists — and many others as well — keep a sharp and steady eye on what’s happening with Virtual Reality (VR) — as the affordances that VR could bring seem very promising.  Establishing collaborations with teams of specialists could open up some amazing learning experiences for learners in the future.

Consider the items below for example.


 

Virtual reality bringing artifacts to life in London — from thestar.com

Excerpt:

Virtual reality has been increasingly making its way into the museum experience. At London’s Natural History Museum, visitors can currently experience First Life, a 15-minute VR experience also using the Samsung Gear VR headsets, in which David Attenborough narrates a 3D journey depicting sea creatures from 500 million years ago.

Only a few major museums have hosted their own VR experiences, but several projects are working on bringing the museum experience to the masses using the technology.

Google recently launched its Expeditions project, allowing students equipped with a Cardboard headset and a smartphone to view materials from major institutions such as the American Museum of Natural History.

 

vr-british-museum-2015

 

 

Virtual Reality Takes British Museum Visitors to Bronze Age — from augmentedrealitytrends.com

Excerpt:

British Museum visitors will now be able to experience the past in a more vivid way. Apart from seeing the historical artifacts, you will be taken to the Bronze Age with the help of virtual reality headsets. You can don a Samsung Gear VR headset and explore a customary roundhouse from the Bronze Age. The exploration will consist of 3D scans of objects that are present in the museum. The museum is launching a VR weekend where this experience can be obtained.

Recreation of Three Bronze Age Objects
Three Bronze Age objects have been digitally recreated and will be shown to the visitors through virtual reality headsets.

One is a gold object which was recently discovered and is still caked with mud. The second object is a splendid bronze Beaune Dirk, which is a princely dagger. However, its shape suggests that it was not meant to be used, as the blade was never sharpened and the end was also not attached to a wooden hilt. The third object is a twist of bronze which looks simple but is most enigmatic. Over 50 such loops have been found within 18 miles of Brighton. They were not found anywhere else in Europe. What they were actually, is still a mystery. Users will be invited to try a replica and provide their opinions.

 

Virtual reality takes Baltimore students to Amazon — from wbaltv.com by Omar Jimenez
Armistead Gardens students use Alchemy Learning’s experiential learning

Excerpt:

Tech start up Alchemy Learning is using virtual reality as a tool for education.

“The ability to have a truly experiential learning moment has not been possible in traditional online education. Whereas with virtual reality you can see the students really are in a different world. They’re able to truly experience what’s around them,” Alchemy Learning co-founder Win Smith said.

 

VR-AlchemyLearning2015

 

 

Adopting Virtual Reality for Education — from alchemylearning.com

“As an educator with 20+ years’ experience integrating technology into curriculum, it is exciting for me to see a technology that so quickly captures the attention of the students, motivates them to make the effort to learn the procedures, and then opens them up to the relevant content.”

-Larry Fallon,
Instructional Technology Coordinator,
Arlington County Public Schools

 

What might our learning ecosystems look like by 2025? [Christian]

This posting can also be seen out at evoLLLution.com (where LLL stands for lifelong learning):

DanielChristian-evoLLLutionDotComArticle-7-31-15

 

From DSC:
What might our learning ecosystems look like by 2025?

In the future, learning “channels” will offer more choice, more control.  They will be far more sophisticated than what we have today.

 

MoreChoiceMoreControl-DSC

 

That said, what the most important aspects of online course design will be 10 years from now depends upon what types of “channels” exist by then and what might be offered via those channels. By channels, I mean forms, methods, and avenues of learning that a person could pursue and use. In 2015, some example channels might be:

  • Attending a community college, a college or a university to obtain a degree
  • Obtaining informal learning during an internship
  • Using social media such as Twitter or LinkedIn
  • Reading blogs, books, periodicals, etc.

In 2025, there will likely be new and powerful channels for learning that will be enabled by innovative forms of communications along with new software, hardware, technologies, and other advancements. For example, one could easily imagine:

  • That the trajectory of deep learning and artificial intelligence will continue, opening up new methods of how we might learn in the future
  • That augmented and virtual reality will allow for mobile learning to the Nth degree
  • That the trend of Competency Based Education (CBE) and microcredentials may be catapulted into the mainstream via the use of big data-related affordances

Due to time and space limitations, I’ll focus here on the more formal learning channels that will likely be available online in 2025. In that environment, I think we’ll continue to see different needs and demands – thus we’ll still need a menu of options. However, the learning menu of 2025 will be more personalized, powerful, responsive, sophisticated, flexible, granular, modularized, and mobile.

 


Highly responsive, career-focused track


One part of the menu of options will focus on addressing the demand for more career-focused information and learning that is available online (24×7). Even in 2015, with the U.S. government saying that 40% of today’s workers now have ‘contingent’ jobs and others saying that percentage will continue climbing to 50% or more, people will be forced to learn quickly in order to stay marketable.  Also, the half-lives of information may not last very long, especially if we continue on our current trajectory of exponential change (vs. linear change).

However, keeping up with that pace of change is currently proving to be out of reach for most institutions of higher education, especially given the current state of accreditation and governance structures throughout higher education, as well as how our current teaching and learning environment is set up (i.e., the use of credit hours, four-year degrees, etc.).  By 2025, accreditation will have been forced to change to allow for alternative forms of learning and for methods of obtaining credentials. Organizations that offer channels with a more vocational bent to them will need to be extremely responsive, as they attempt to offer up-to-date, highly relevant information that will immediately help people be more employable and marketable. Being nimble will be the name of the game in this arena. Streams of content will be especially important here. There may not be enough time to merit creating formal, sophisticated courses on many career-focused topics.

 

StreamsOfContent-DSC

 

With streams of content, the key value provided by institutions will be to curate the most relevant, effective, reliable, up-to-date content…so one doesn’t have to drink from the Internet’s firehose of information. Such streams of content will also surface potentially game-changing scenarios and will provide a pulse check on a variety of trends that could affect an industry. Social-based learning will be key here, as learners contribute to each other’s learning. Subject Matter Experts (SMEs) will need to be knowledgeable facilitators of learning; but given the pace of change, true experts will be rare indeed.

Microcredentials, nanodegrees, competency-based education, and learning from one’s living room will be standard channels in 2025.  Each person may have a web-based learner profile by then and the use of big data will keep that profile up-to-date regarding what any given individual has been learning about and what skills they have mastered.

For example, even currently in 2015, a company called StackUp creates its StackUp Report to add to one’s resume or grades, asserting that its services can give “employers and schools new metrics to evaluate your passion, interests, and intellectual curiosity.” StackUp captures, categorizes, and scores everything you read and study online. It can track your engagement on a given website, for example, and then score the time spent doing so. This type of information can then provide insights into the time you spend learning.

Project teams and employers could create digital playlists that prospective employees or contractors will have to advance through; and such teams and employers will be watching to see how the learners perform in proving their competencies.

However, not all learning will be in the fast lane, and many people won’t want all of their learning to be constantly in high gear. In fact, the same learner could be pursuing avenues in multiple tracks, traveling through their learning-related journeys at multiple speeds.

 


The more traditional liberal arts track


To address these varied learning preferences, another part of the menu will focus on channels that don’t need to change as frequently.  The focus here won’t be on quickly-moving streams of content, but the course designers in this track can take a bit more time to offer far more sophisticated options and activities that people will enjoy going through.

Along these lines, some areas of the liberal arts* will fit in nicely here.

*Speaking of the liberal arts, a brief but important tangent needs to be addressed, for strategic purposes. While the following statement will likely be highly controversial, I’m going to say it anyway.  Online learning could be the very thing that saves the liberal arts.

Why do I say this? Because as the price of higher education continues to increase, the dynamics and expectations of learners continue to change. As prices continue to increase, so do people’s expectations and perspectives. So it may turn out that people are willing to pay a dollar range that ends up being a fraction of today’s prices. But such greatly reduced prices won’t likely be available in face-to-face environments, as offering these types of learning environments is expensive. However, such discounted prices can be offered via online-based environments. So, much to the chagrin of many in academia, online learning could be the very thing that provides the type of learning, growth, and some of the experiences that liberal arts programs have been about for centuries. Online learning can offer a lifelong supply of the liberal arts.

But I digress…
By 2025, a Subject Matter Expert (SME) will be able to offer excellent, engaging courses that make full use of:

  • Engaging story/narrative
  • Powerful collaboration and communication tools
  • Sophisticated tracking and reporting
  • Personalized learning, tech-enabled scaffolding, and digital learning playlists
  • Game elements or even, in some cases, multiplayer games
  • Highly interactive digital videos with built-in learning activities
  • Transmedia-based outlets and channels
  • Mobile-based learning using AR, VR, real-world assignments, objects, and events
  • …and more.

However, such courses won’t be able to be created by one person. Their sophistication will require a team of specialists – and likely a list of vendors, algorithms, and/or open source-based tools – to design and deliver this type of learning track.

 


Final reflections


The marketplaces involving education-related content and technologies will likely look different. There could be marketplaces for algorithms as well as for very granular learning modules. In fact, it could be that modularization will be huge by 2025, allowing digital learning playlists to be built by an SME, a Provost, and/or a Dean (in addition to the aforementioned employer or project team).  Any assistance that may be required by a learner will be provided either via technology (likely an Artificial Intelligence (AI)-enabled resource) or via an SME.

We will likely either have moved away from using Learning Management Systems (LMSs) or those LMSs will allow for access to far larger, integrated learning ecosystems.

Functionality-wise, collaboration tools will still be important, but they might be mind-blowing to us living in 2015.  For example, holographic-based communications could easily be commonplace by 2025. Where tools like IBM’s Watson, Microsoft’s Cortana, Google’s DeepMind, and Apple’s Siri end up in our future learning ecosystems is hard to tell, but they will likely be there. New forms of Human Computer Interaction (HCI) such as Augmented Reality (AR) and Virtual Reality (VR) will likely be mainstream by 2025.

While the exact menu of learning options is unclear, what is clear is that change is here today and will likely be here tomorrow. Those willing to experiment, to adapt, and to change have a far greater likelihood of surviving and thriving in our future learning ecosystems.

 

What does ‘learning’ have to learn from Netflix? — from donaldclarkplanb.blogspot.com by Donald Clark

Excerpts:

Of course, young people are watching way less TV these days, TV is dying, and when they do watch stuff, it’s streamed, at a time that suits them. Education has to learn from this. I’m not saying that we need to replace all of our existing structures but moving towards understanding what the technology can deliver and what learners want (they shape each other) is worth investigation. Hence some reflections on Netflix.

Areas discussed:

  • Timeshifting
  • Data driven delivery — Netflix’ recommendations engine
  • Data driven content
  • Content that’s accessible via multiple kinds of devices
  • Going global

 

From DSC:
I just wanted to add a few thoughts here:

  1. The areas of micro-credentials, nano-degrees, services like stackup.net, big data, etc. may come to play a role with what Donald is talking about here.
  2. I appreciate Donald’s solid, insightful perspectives and his thinking out loud — some great thoughts in that posting (as usual).
  3. Various technologies seem to be making progress as we move towards a future where learning platforms will be able to deliver a personalized learning experience; as digital learning playlists and educationally-related recommendation engines become more available/sophisticated, highly-customized learning experiences should be within reach.
  4. At a recent Next Generation Learning Spaces Conference, one of the speakers stated, “People are control freaks — so let them have more control.”  Along these lines…ultimately, what makes this vision powerful is having more choice, more control.

 

 

MoreChoiceMoreControl-DSC


Also, some other graphics come to my mind:

 

MakingTVMorePersonal-V-NetTV-April2014

 

EducationServiceOfTheFutureApril2014


From DSC:
Wow!

The two apps listed below could represent a new, major, door-opening concept! Such a concept could usher in an era that provides many new affordances — affordances made possible by the presence of beacon-related technologies (or similar machine-to-machine communications).  If this is/becomes the case, we should expect to see a whole new wave of this kind of application, especially as wearables gain traction.


 

This startup is using iBeacon to make In Real Life (IRL) networking more efficient — from technical.ly by Brady Dale
A pair of Amazon and Fab.com alums are working on a new way to answer the age-old question:

“Who should I meet here and are they relevant to my background?”

 

MeetTheRightPeopleViaLINK-March2015

Excerpt:

We’ve all been frustrated by the notion of networking. You walk into a meetup full of people you don’t know, but you do know something about them. Say, for example, it’s a meetup of developers. You know that lots of people there are devs, but what kind? And just because it’s an Android devs meetup, for example, that doesn’t mean that’s what they all actually do. It also doesn’t mean that there aren’t a bunch of other people circulating who are simply trying to network but aren’t actually developers.

The best you can do is start walking up to people and asking — but often your pick is a miss. If you have an objective when you go networking, how can you improve your chances of finding the people you want to find? One startup that’s working on a solution is Surf Labs. The new, two-person company, cofounded by Amazon and Fab.com alums, has released two iOS products that could make it more likely that you will find the person you are looking for.

 

Surf Labs’ first product is called LiNK…and their second product is called LiNK Spaces.
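
To make the underlying mechanism a bit more concrete, here is a minimal sketch of how an iOS app can detect nearby iBeacons with Apple’s Core Location framework (the 2015-era ranging API). The UUID, the region identifier, and the idea of mapping beacons to attendee profiles are illustrative assumptions on my part; only the standard Core Location calls are shown, and none of this is LiNK’s actual code.

```swift
import CoreLocation

// Minimal sketch of iBeacon ranging with Core Location.
// The UUID and identifier below are placeholders; a real deployment would use
// the UUID programmed into its own beacon hardware.
class BeaconScanner: NSObject, CLLocationManagerDelegate {

    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "meetup-venue")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // ranging requires location permission
        manager.startRangingBeacons(in: region)
    }

    // Called roughly once per second with every beacon currently in range.
    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // An app could map major/minor values to people or booths and use
            // proximity (.immediate / .near / .far) to rank who is closest.
            print("Beacon \(beacon.major).\(beacon.minor) proximity: \(beacon.proximity.rawValue)")
        }
    }
}
```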

 

 

LINKSpaces-March2015

 

MCxLiNK Spaces - Mockup 1
A rendering of how a LiNK Spaces dashboard could work in a coworking space.
(Image courtesy of Surf Labs)

 

 

Addendum on 3/16/15:

 

Beacons-at-2015-SXSW

 

MicrosoftProductivityVision2015

 

Example snapshots from Microsoft’s Productivity Future Vision

 

 

MicrosoftProductivityVision2-2015

 

MicrosoftProductivityVision3-2015

 

MicrosoftProductivityVision5-2015

 

MicrosoftProductivityVision6-2015

 

MicrosoftProductivityVision7-2015

 

MicrosoftProductivityVision8-2015

 

MicrosoftProductivityVision4-2015


From DSC:
Check out some of the functionality in these solutions. Then imagine if these solutions were the size of an entire wall in a classroom or in a corporate L&D facility. Whew!

  • Some serious opportunities for collaboration would arise for remote learners – as well as for those located in the face-to-face setting
  • What new affordances would be present for those teaching in K-12 or higher ed, or for trainers working within the training/learning and development fields? Conversations/discussions would be recorded — to be picked up at the next session. In the meantime, learners could review the discussions at their own pace.
  • What if all of this were possible in your future Smart/Connected TV?
  • I’m also talking here about a vendor that could offer solutions that K-12 systems and institutions of higher ed could afford; some of the solutions below have much of what I’m envisioning here, but are out of the price range. Or the product is multitouch and fairly large, but it doesn’t offer the collaborative features of some of the other products here.

 


 

mezzanine-feb-2015

 


 

Feb2015-AstecSenseTable-InteractiveDisplay

 

 


ideum-feb2015

 

ideumPresenter-feb2015

Ideum’s touch walls come close to what I’m talking about in this posting. If they could add some functionality for seeing/bringing in/collaborating with remote learners — as found in Mezzanine — then that would be great!


Also see bluescape — but these excellent, innovative solutions are out of the price range for most K-12 and higher ed institutions:

 

bluescape-1-feb-2015

 

I ran across a few items that may tie in with my Learning from the Living [Class] Room vision:


 

A very interesting concept at Stackup.net (@ScoreReporting)

Score everything you read + learn online. Use the StackUp Report to prove to employers that your dedication and interest make you the ideal candidate.

 

TheStackUpReport-Jan2015

 

From DSC:
This is big data — but this time, it’s being applied to the world of credentials. Is this part of how one will obtain employment in the future?  Will it work along with services like Beansprock (see below)?

 

 

 

Then there was Spreecast: an interactive video platform that connects people:

 

spreecast-jan2015

 

 

From DSC:
A related item to these concepts:

 

The Future of Lifelong Learning — from knewton.com

 

The Future of Lifelong Learning

Created by Knewton

 

 

 

—————

 

 

AI software that could score you the perfect job — from wired.com by Davey Alba

Excerpt:

Today, little more than two years later, Levy and Smith are launching a website called Beansprock that uses natural language processing and machine learning techniques to match you with a suitable job. In other words, they’re applying artificial intelligence to the career hunt. “It’s all about helping the job hunter find the best job in the job universe,” Levy says.


beansprock-feb2015

 

Internet access as vital as devices to boosting the learning experience — from thejournal.com by Dian Schaffhauser

Excerpt:

The project reported five key findings:

  • Access to a computing device and the Internet resulted in “greater student engagement in learning.” Eighty percent of the students said having the tablet “made learning more fun and interesting”; 72 percent reported they “were more engaged in their lessons.”
  • Sixty percent of students said they did more reading and writing during the school year because of having a tablet. In fact, teachers assigned more reading and writing homework because they knew home Internet access was available. “This resulted in increased reading and writing fluency, which is especially important for English Language Learners,” the report noted.
  • Internet access shot up. Nearly 80 percent of students reported accessing the Internet on a daily basis in fifth grade, up from 4 percent in fourth grade.
  • Students became more independent learners. Almost all said they “used their tablet regularly to look up information on the Internet when they had a question about something.”
  • With professional development, teachers changed their instructional practices. According to the report, this was evident “by the level of integration of the tablets into everyday instruction” and by “the new project based learning orientation within the classes.”

But the greatest difference overall, the researchers stated, occurred because students could access online information anytime, anywhere. That access “transformed the classroom environment by allowing both students and teachers to bring additional resources into the learning process, at just the right moment to have the greatest impact on learning.”

 