From DSC:
Swivl allows faculty members, teachers, trainers, and other Subject Matter Experts (SMEs) to make recordings where the recording device swivels to follow the SME (who is holding/wearing a remote). Recordings can be automatically sent to the cloud for further processing/distribution.

My request to you is:
Can you extend the Swivl app to provide not only recordings, but also rough draft transcripts of those recordings?

This could be very helpful for accessibility reasons, and it would also give students/learners the type of media they prefer (video, audio, and/or text).
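
To make the request concrete, here is a minimal sketch (in Python) of the kind of post-processing step being asked for: once a recording reaches the cloud, it gets handed to a speech-to-text service and a rough draft transcript comes back. The endpoint URL, parameters, and response format below are purely hypothetical; Swivl has not published any such API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint that accepts an uploaded recording and returns a
# rough draft transcript. Swivl publishes no such API today; this only
# sketches the workflow being requested above.
TRANSCRIPTION_URL = "https://example.com/api/v1/transcripts"

def request_rough_transcript(recording_path, language="en-US"):
    """Upload a finished recording and return draft transcript text."""
    with open(recording_path, "rb") as media:
        response = requests.post(
            TRANSCRIPTION_URL,
            files={"media": media},
            data={"language": language, "draft": "true"},
        )
    response.raise_for_status()
    # Assume the service answers with JSON such as {"transcript": "..."}.
    return response.json()["transcript"]

if __name__ == "__main__":
    text = request_rough_transcript("lecture_2015-06-01.mp4")
    print(text[:500])  # preview the rough draft
```

Even a rough, unedited transcript produced this way could be attached to each recording for learners who prefer to read.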

 

Swivl-2015

 

 

From DSC:
A note to Double Robotics — can you work towards implementing machine-to-machine (M2M) communications?

That is, when our Telepresence Robot from Double Robotics approaches a door, sensors on the door and on the robot need to initiate communications with each other so that the door opens (and closes after __ seconds). If a security clearance code is necessary, the remote student or the robot needs to be able to transmit that code.

When the robot approaches an elevator, sensors near the elevator and on the robot need to initiate communications with each other so that the elevator can be summoned and its doors opened. The remote student/robot then needs to be able to tell the elevator which floor to go to and to close the elevator doors (if it hasn’t already done so). A rough sketch of such an exchange appears after the list below.

Such scenarios imply that we need:

  1. Industry standards for such communications
  2. Standards that include security clearances (e.g., “OK, I’ll let you into this floor of this public building.” or “No, I can’t open up this door without you sending me a security clearance code.”)
  3. Secure means of communications
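
To illustrate what such an exchange might look like, here is a small Python sketch of a challenge-and-response handshake between a door controller and an approaching robot. Every detail here (the message fields, the shared-secret scheme, the hold-open time) is an assumption; no industry standard of this kind exists yet, which is precisely the point of the list above.

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"provisioned-by-building-security"  # hypothetical credential

def door_challenge(door_id):
    """Door controller announces itself and issues a challenge nonce."""
    return {"type": "challenge", "door": door_id, "nonce": "8f3a9c"}

def robot_response(challenge, robot_id):
    """Robot answers with an HMAC of the nonce, proving it holds the secret."""
    proof = hmac.new(SHARED_SECRET, challenge["nonce"].encode(), hashlib.sha256).hexdigest()
    return {"type": "open_request", "robot": robot_id, "door": challenge["door"],
            "proof": proof, "hold_open_seconds": 10}

def door_decision(request, issued_nonce="8f3a9c"):
    """Door verifies the proof before unlocking, then closes after the hold time."""
    expected = hmac.new(SHARED_SECRET, issued_nonce.encode(), hashlib.sha256).hexdigest()
    granted = hmac.compare_digest(request["proof"], expected)
    return {"type": "result", "granted": granted,
            "close_after_seconds": request["hold_open_seconds"] if granted else 0}

# Simulated exchange; in practice these messages would travel over BLE, NFC, or Wi-Fi.
challenge = door_challenge("north-entrance-2")
request = robot_response(challenge, "double-robot-17")
print(json.dumps(door_decision(request), indent=2))
```

The elevator scenario follows the same pattern: summon, authenticate, then send a floor request and a door-close command.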

 

DoubleRobotics-Feb2014

 

 

 

doublerobotics.com -- wheels for your iPad

 

 

A major request/upgrade might also include:

  • The ability for the Double App on the iPad to make recordings, automatically send those recordings to the cloud, and then provide transcripts of those recordings. This could be very helpful for accessibility reasons, and it would also give students/learners the type of media they prefer (video, audio, and/or text).

 

 

 

IRIS.TV Finds Adaptive Video Personalization Increases Consumption by 50% — from appmarket.tv by Richard Kastelein

Excerpt (emphasis DSC):

IRIS.TV, the leading in-player video recommendation engine, reports that its programmatic video delivery technology, Adaptive Stream™, has increased video consumption by 50% across all its clients. These findings further solidify that consumption of online video is significantly enhanced by a personalized user experience.

By integrating Adaptive Stream™ in their video players and mobile apps, IRIS.TV’s clients are able to deliver the most relevant streams of video to their viewers, just like TV. Yet unlike TV, viewers’ feedback is captured in real time through interactive buttons, allowing the stream to dynamically adapt to changing preferences.
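
The mechanic described in the excerpt (viewer feedback captured through interactive buttons, with the stream adapting in real time) can be illustrated with a toy example. IRIS.TV has not disclosed how Adaptive Stream actually works; the per-topic weighting below is only a stand-in for whatever the real engine does.

```python
from collections import defaultdict

# Illustrative only: a toy model of a feedback-adaptive video stream.
# Real systems such as IRIS.TV's Adaptive Stream are far more sophisticated.
catalog = [
    {"id": "v1", "topics": {"sports", "highlights"}},
    {"id": "v2", "topics": {"cooking"}},
    {"id": "v3", "topics": {"sports", "interviews"}},
]

def update_preferences(prefs, video, liked):
    """Nudge per-topic weights up or down based on a thumbs up/down."""
    delta = 1.0 if liked else -1.0
    for topic in video["topics"]:
        prefs[topic] += delta
    return prefs

def next_video(prefs, watched_ids):
    """Pick the unwatched video whose topics best match the current weights."""
    candidates = [v for v in catalog if v["id"] not in watched_ids]
    return max(candidates, key=lambda v: sum(prefs[t] for t in v["topics"]))

prefs = defaultdict(float)
prefs = update_preferences(prefs, catalog[0], liked=True)  # viewer liked v1
print(next_video(prefs, watched_ids={"v1"})["id"])         # prints "v3"
```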

 

IRIS-dot-TV-June2015

 

Press Release: IRIS.TV Launches Personalized End-screen for Online Video with Kaltura — from iris.tv
IRIS.TV Partners with Kaltura to offer programmatic content delivery and in-player thumbnail recommendations

Excerpt:

Los Angeles, May 4, 2015 – IRIS.TV, the leading programmatic content delivery system, today announced a new dynamic, personalized end-screen plugin for Kaltura, provider of the leading video technology platform.

IRIS.TV’s new plugin for Kaltura will offer clients of both companies a personalized and dynamic stream of video along with a personalized end-screen framework. Publishers can now provide users with dynamic streams of video – powered by consumption and interaction – along with thumbnail choices specific to their real-time consumption habits. This new integration supplies publishers with additional tools to deliver a more personalized viewing experience in order to maximize viewer retention and video views. The partnership aims to help consumers discover relevant and engaging content while viewing across all connected devices.

 

From DSC:
Now take these same concepts of recommendation engines and personalized/dynamic/interactive streams of content and apply them to delivering personalized, digital learning playlists on topics that you want to learn about. With the pace of change and the shrinking half-life of many pieces of information, this could be a powerful way to keep abreast of any given topic. Team these concepts up with the idea of learning hubs — whereby some of the content is delivered electronically and some of the content is discussed/debated face-to-face — and you have some powerful, up-to-date opportunities for lifelong learning. Web-based learner profiles and services like Stack Up could continually populate one’s resume and list of skills — available to whomever you choose.
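
A rough sketch of how such a learning playlist might be assembled appears below. The learner-profile fields, the scoring, and the one-year "half-life" used for freshness are all assumptions made for illustration, not a description of any existing service.

```python
from datetime import date

# Hypothetical learner profile and content items; all field names are assumptions.
learner = {"goals": {"data visualization", "statistics"}, "completed": {"c1"}}

content = [
    {"id": "c1", "topics": {"statistics"}, "published": date(2014, 1, 10)},
    {"id": "c2", "topics": {"data visualization"}, "published": date(2015, 5, 20)},
    {"id": "c3", "topics": {"statistics", "data visualization"}, "published": date(2013, 3, 2)},
]

def freshness(item, today=date(2015, 6, 15), half_life_days=365):
    """Decay an item's value as it ages, mirroring a shrinking half-life of information."""
    age = (today - item["published"]).days
    return 0.5 ** (age / half_life_days)

def build_playlist(profile, items, length=2):
    """Rank unconsumed items by overlap with the learner's goals, weighted by freshness."""
    candidates = [i for i in items if i["id"] not in profile["completed"]]
    scored = sorted(
        candidates,
        key=lambda i: len(i["topics"] & profile["goals"]) * freshness(i),
        reverse=True,
    )
    return [i["id"] for i in scored[:length]]

print(build_playlist(learner, content))  # prints ['c2', 'c3']
```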

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

StreamsOfContent-DSC

 

 

Along these lines, also see:

  • Nearly 1 billion TV sets Internet connected by 2020 — from appmarket.tv
    Excerpt:

    The number of TV sets connected to the Internet will reach 965 million by 2020, up from 103 million at end-2010 and the 339 million expected at end-2014, according to a new report from Digital TV Research. Covering 51 countries, the Connected TV Forecasts report estimates that the proportion of TV sets connected to the Internet will rocket to 30.4% by 2020, up from only 4.2% at end-2010 and the 12.1% expected by end-2014. South Korea (52.7%) will have the highest proportion by 2020, followed by the UK (50.6%), Japan (48.6%) and the US (47.0%).

 

  • McDonnell – HTML5 is the true Second Screen, Social TV winner — from appmarket.tv
    Excerpt:
    After years of evolution, the W3C has finally declared the HTML5 standard complete. When Steve Jobs “declared war on Flash” he gave HTML5 a fighting chance of dominance. In parallel, businesses started to recognise the potential of Social TV or “Second Screen” behaviour to re-invigorate old media and drive revenue to newer social platforms like Twitter. The ensuing debate centred on winners and losers, but with such a diverse global broadcasting market and no social network with dominance in all countries, could the web standard be the ultimate winner? I think it already is.

 

 

Augmented Reality – hype, or the future? — from mybroadband.co.za by EE Publishers
The concept of Augmented Reality has existed for many years now – and commentators have remained rigidly sceptical about whether it will truly materialise in our everyday lives.

Excerpts:

Essentially, AR describes the way in which a device (such as a smartphone camera, wearables like Google Glass, or a videogame motion sensor) uses an application to “see” the real world around us and overlay augmented features onto that view.

These augmented features are aimed at adding value to the user’s experience of their physical environment.

We predict there will be three primary areas in which we will initially engage with AR:

The way we consume information: Imagine going to a library or museum and being able to have a “conversation” with key people throughout history, or instantly transforming the world around you to resemble a bygone era.

The fields of product development, marketing and customer engagement: Overlaying new digital services into retail environments opens up a host of new possibilities for organisations to tailor products to their consumers.

Back office/operational functions within the organisation: Companies are already finding ways to use AR in their supply chain, logistics and warehousing environments; for example, finding out whether a particular unit is missing from a shelf.

 

 

From DSC:
I wonder how Machine-to-Machine communications, beacons, GPS, and more will come into play here…? The end result, I think, is a connection between the physical world and the digital world:

 

DanielChristian-Combining-Digital-Physical-Worlds-Oct2014

 

 

AdobeCreativeCloud2015

 

Some resources on this announcement:

  • Adobe Unveils Milestone 2015 Creative Cloud Release — from adobe.com
    Excerpt:
    At the heart of Creative Cloud is Adobe CreativeSync, a signature technology that intelligently syncs creative assets: files, photos, fonts, vector graphics, brushes, colors, settings, metadata and more. With CreativeSync, assets are instantly available, in the right format, wherever designers need them – across desktop, web and mobile apps. Available exclusively in Creative Cloud, CreativeSync means work can be kicked off in any connected Creative Cloud mobile app or CC desktop tool; picked up again later in another; and finished in the designer’s favorite CC desktop software.

  • Adobe updates Creative Cloud in milestone 2015 release — from creativebloq.com
    Powerful updates to Photoshop CC, Illustrator CC, Premiere Pro CC and InDesign CC; new mobile apps for iOS and Android and more. Here’s everything you need to know.

  • Adobe launches Adobe Stock, included in Creative Cloud, as well as a stand-alone service — from talkingnewmedia.com by D.B. Hebbard
    Pricing for CC customers is $9.99 for a single image; $29.99 per month for 10 images monthly; and $199 per month for 750 images monthly
    Excerpt:
    [On 6/15/15] Adobe has launched Adobe Stock, its new stock photography service. It is now included in CC and will appear as one of the five top menu items in the CC app (Home, Apps, Assets, Stock and Community). Many will have noticed the update to the app that came through yesterday.
  • Adobe launches radical new stock image service — from creativebloq.com
    Excerpt:
    Adobe has launched Adobe Stock, a new service that simplifies the process of buying and using stock content, including photos, illustrations and vector graphics. Part of the milestone 2015 Creative Cloud release announced this morning, Adobe Stock is a curated collection of 40 million high-quality photos, vector graphics and illustrations. The aim? To help creatives jump-start their projects.

    Photographers and designers can also contribute work to Adobe Stock. Adobe says it will offer industry-leading rates, while giving creatives access to a global community of stock content buyers.
  • Adobe Illustrator CC is now 10 times faster — from creativebloq.com
  • The best new features in Adobe Photoshop CC — from creativebloq.com

Adobe Photoshop CC

 

 

Do you see what I see? Smart glasses, VR, and telepresence robots — from arstechnica.com by Megan Geuss
Heightened reality will hit industry and gaming before it changes anyone’s day-to-day.

 

 

 

Oculus VR unveils the version of Oculus Rift you’ll actually buy — from mashable.com by JP Mangalindan

Excerpt:

Oculus VR finally debuted the long-awaited consumer version of Oculus Rift, the virtual reality headset, at a media event in San Francisco on Thursday [6/11/15].

“For the first time we’ll finally be on the inside of the game,” Oculus CEO Brendan Iribe said onstage. “Gamers have been dreaming of this. We’ve all been dreaming of this for decades.”

Oculus Touch

 

 

Virtual reality apps market set to explode — from netguide.co.nz

Excerpt:

Augmented Reality (AR) apps in the mobile games market will generate 420 million downloads annually by 2019, up from 30 million in 2014, according to Juniper Research’s research titled Augmented Reality: Consumer, Enterprise and Vehicles 2015-2019.

The emergence of Head Mounted Devices (HMDs) used in the home, such as Microsoft’s Hololens, will bring a surge in interest for AR games over the next five years, according to Juniper.

For the time being however, most AR downloads will occur via smartphones and tablets.

 

 

What the Surreal Vision acquisition means for Oculus — from fortune.com by  John Gaudiosi
Oculus now has the technology to blend augmented reality with virtual reality.

Excerpt:

Oculus VR last week acquired Surreal Vision, a company creating real-time 3D scene reconstruction technology that will allow users to move around the room and interact with real-world objects while immersed in VR.

 

 

Microsoft pulls back curtain on Surface Hub collaboration screen — by Shira Ovide

Excerpt:

Microsoft announced on Wednesday [6/10/15] the price tag for a piece of audio-visual equipment that it first showed off in January. Surface Hub, which will cost up to $20,000 for a model with an 84-inch screen, is like the merger of a high-end video conference system, electronic whiteboard and Xbox.

The product plunges Microsoft headlong into competition with Cisco and other traditional providers of conference room audio-visual systems.

Microsoft is pitching Surface Hub as the best audio-video conference equipment and collaboration tool a company can buy. It costs up to $20,000. [From DSC: There will also be a $7,000, 55-inch version.]

 

 

Bluescape launches new hardware program with MultiTaction, Planar Systems, and 3M — from Bluescape
Bluescape Showcases MultiTaction’s and Planar’s Interactive Displays Running Its Visual Collaboration Software at Booth #1690 at InfoComm 2015

Excerpt:

SAN CARLOS, CA–(Jun 15, 2015) – Bluescape, a persistent cloud-based platform for real-time visual collaboration, today announced the new Bluescape Hardware Program. Companies in the program offer hardware that complements the Bluescape experience and has been extensively tested and validated to work well with Bluescape’s platform. As collaboration spans across an entire enterprise, Bluescape strives to support a range of hardware options to allow an organization’s choice in hardware to fit different workspaces. The first three companies are market-leading interactive display vendors MultiTaction, Planar, and 3M.

MultiTaction, a leading developer of interactive display systems, offers advanced tracking performance that identifies fingers, hands, objects, 2D bar codes and IR pens. The unparalleled responsiveness of MultiTaction’s systems scales to an unlimited number of concurrent users and the displays are highly customizable to fit any existing corporate space. MultiTaction’s advanced interactive hardware combined with Bluescape’s software allows teams to connect content and people in one place, enabling deeper insights, meaningful innovation, and simultaneous collaboration across global time zones.

 

BlueScape-2015

 
New report lays out its 2040 vision of the workplace of the future — from workplaceinsight.net

Excerpt:

By 2040 knowledge workers will decide where and how they want to work, according to a new report on the workplace of the future by Johnson Controls’ Global Workplace Solutions business. The Smart Workplace 2040 report claims that 25 years from now, work will be seen as something workers do, rather than a place to which they commute. According to the study, work patterns will be radically different as  a new generation of what it terms ‘workspace consumers’ choose their time and place of work. Most workers will frequently work from home, and will choose when to visit work hubs to meet and network with others. There will be no set hours and the emphasis will be on getting work done, while workers’ wellness will take priority. Technology will bring together networks of individuals who operate in an entrepreneurial way, with collaboration the major driver of business performance.

Also see:
Smart Workplace 2040: The rise of the workspace consumer — from JohnsonControls.com

SmartWorkPlace-2040-RiseOfWorkspaceConsumer-June2015

Executive Summary
Corporate organizations still consider the workplace as delivering a strong identity and, more than ever, as a marketing weapon for creating and sustaining their corporate identity. The intensity of performance-level improvements has increased significantly over the past ten years, accelerating the pace of work through the combined power of technology and personalized, choice-based software solutions.

The presence of technology in every aspect of our lives in 2040 dominates our way of living. The workplace of 2040 is far more agile; the presence of technology is ultra-predominant and human beings are highly reliant on it. Yet the technology is “shy”: not intrusive, transparent, and highly reliable. Failure is neither a possibility nor an option:

  • The home (the Hive) is hyper-connected and adaptive, responsive to the environment and its users, and supports multiple requirements simultaneously
  • Complex software applications will suggest to users what they should do to maximise performance, not miss any important deadlines, and make sure they allocate enough time for important tasks that are not necessarily urgent
  • End user services are autonomous, proactive and designed around enhancing the user experience
  • In a ubiquitously networked world, true offline time is both a luxury and a necessity. Being physically present is perceived as more authentic, a privilege
  • Adaptive white noise technology makes it possible to have a first rate telepresence session in an open environment
  • The whole DIY movement is experiencing a tremendous boost – people are literally building their own products, bought through their smartphones using mobile web applications, and printed on demand
  • Lower costs of energy and unmanned vehicles, combined with the high cost of owning your own vehicle and high parking costs in densely populated megacities, benefit new sharing regimes.

In the context of the new world of work in 2040, we are contemplating a new world of work in an Eco Campus:

CHOICE: deciding where and how we want to work
ADAPTABILITY: adapting our working pattern to meet private needs and family constraints
WORK: entrepreneurship is the norm
LOCATION: deciding who we work with and how
WORKPLACE: access to “Trophy Workplaces” so going to the “office” is a luxury, a reward
SERVICES: offering real time services, catering to peak demands
WELLNESS: privileging wellness over work
NETWORK: reliance on an extremely wide network of experts to carry out our work

 

While Corporate Soldiers remain major actors in organisations, we are seeing a significant rise in entrepreneurial behaviours, transforming employees into a new breed of workers focused on achieving great results through their work activities as well as wellbeing in their private lives. Going to “work” is therefore accessible through a complex model of locations spread across an Eco Campus, less than 20 miles from home…

 

Google has shipped over 1 million Cardboard VR units — from techcrunch.com by Darrell Etherington

Excerpt:

Google revealed today that Cardboard has quietly become the leading VR platform in terms of platform reach – over 1 million Cardboard units have shipped to users so far, a 100 percent increase on the 500,000 milestone it announced in December last year.

Cardboard’s progression is a testament to Google’s approach, which favors simplicity and low barriers to entry instead of freaky real verisimilitude and expensive, high-powered hardware. Google first revealed Cardboard at last year’s I/O event, throwing it out there almost as an afterthought and a seeming subtle dig at Facebook’s high-priced acquisition of Oculus VR.

 

 

Google’s Project Brillo is an OS for the home — and a lot more — from computerworld.com by Zach Miners

Excerpt:

Google has made a big play for the Internet of Things, announcing a new OS on Thursday that will connect appliances around the home and allow them to be controlled from an Android smartphone or tablet.

Dubbed Project Brillo, it’s a stripped down version of Google’s Android OS that will run on door locks, ovens, heating systems and other devices that have a small memory footprint, and allow them to communicate and work together.
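
Conceptually, controlling such a device comes down to sending it a small, well-defined command message and having the device dispatch it to a local handler. The sketch below illustrates that idea only; the message format is made up for this post and is not Google's actual Brillo/Weave protocol.

```python
import json

# Illustrative only: a made-up command message from a controller app to a
# Brillo-class device. This is NOT Google's actual Brillo/Weave protocol.
def build_command(device_id, action, params=None):
    """Serialize a small JSON command a controller app might send."""
    return json.dumps({"device": device_id, "action": action,
                       "params": params or {}})

def handle_command(raw, handlers):
    """On the device: decode the command and dispatch it to a local handler."""
    msg = json.loads(raw)
    handler = handlers.get(msg["action"])
    return handler(msg["params"]) if handler else "unsupported action"

# A door lock with a tiny memory footprint might register just two actions.
lock_state = {"locked": True}
handlers = {
    "lock": lambda p: lock_state.update(locked=True) or "locked",
    "unlock": lambda p: lock_state.update(locked=False) or "unlocked",
}

print(handle_command(build_command("front-door", "unlock"), handlers))  # prints "unlocked"
```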

 

Google takes another shot at mobile payments with Android Pay — from techcrunch.com by Kyle Russell

Excerpt:

At its I/O developer conference today, Google announced Android Pay, a new payments solution native to its mobile operating system. In addition to making it easier to pay at a merchant’s point of sale via NFC, the new system lets merchants integrate payments directly into their apps for selling physical goods and services using an Android Pay API rather than integrating a third-party provider like Venmo or PayPal.

 

Google launches Android M preview with fingerprint scanner support, Android Pay, improved permissions and battery life — from techcrunch.com by Frederic Lardinois

Excerpt:

As expected, Google today announced the developer preview release of the next version of Android at its I/O developer conference in San Francisco. With Android M (which will get its full name once it’s released to users), Google focuses mostly on fit and finish, but the company also added a number of new features to its mobile operating system. It’s no surprise that Android M won’t feature any major new design elements. The last release, Android Lollipop, introduced Google’s Material Design language, after all, and there are still plenty of developers who haven’t even migrated their apps over.

 

Android M will be able to give you contextual info about what’s happening in your Android Apps — from techcrunch.com by Frederic Lardinois

Excerpt:

Google Now has long helped Android users get timely information about local traffic, movies that are playing locally and other information based on their commutes, browsing history and other data. With Android M, which Google announced today, the Now service is getting even smarter and more contextual. When you tap and hold the home button in Android M, Google will grab the information from the application you are using at that moment and Now will try to give you the right contextual information about what you are looking at in that app. Google calls this ‘Google Now on Tap.’

 

Chromecast gets autoplay, queuing, second screen and multiplayer game powers — from techcrunch.com by Darrell Etherington

Excerpt:

Google’s Chromecast is a quiet little media secret agent turning the search giant into a big time home entertainment player. All told, users of Cast-enabled software have hit the little button to put their small-screen content up on the big screen a total of 1.5 billion times in the U.S. alone, and Chromecast floats other Google boats, too – users increase their YouTube viewing time by 45 percent on average once they start using the device, for instance.

Chromecast (and Cast-enabled devices, including the Nexus Player and the Nvidia Shield) is about to get more powerful, thanks to a handful of new features announced at I/O this year. These new abilities turn the streamer into a much more robust media device, making it easy to see how Cast could underpin the home theater or media room of the near future. Here’s what Chromecast developers and users can look forward to coming out of this year’s show:

 

Google Play gets more family-friendly with content ratings, filtering by age and interest — from techcrunch.com by Sarah Perez

Excerpt:

In April, Google announced a new developer-facing program called “Designed for Families” which allowed mobile app publishers the option to undergo an additional review in order to be included in a new section focused on kids’ apps within Google Play. Today, the company officially unveiled that section — or sections, as it turns out — at its I/O developer conference.

Parents searching Google’s mobile app store will now be able to tap on a new “Family” button indicated with a green, smiley faced star icon in order to find the family-friendly content across apps, games, movies and TV homepages.

There’s also a “Children’s Books” button on the Books homepage, where parents can also filter the selection by age range and genre.

 

Google’s new Cloud Test Lab lets Android developers quickly test their apps on top Android devices for free — from techcrunch.com by Frederic Lardinois

Excerpt:

Google launched a new project at its I/O developer conference today that will make it easier for developers to check how their mobile apps work on twenty of the most popular Android devices from around the world. Sadly, the service will only roll out to developers later this year, but if you are interested, you can sign up to join the pilot program here.

It’s no secret that the diverse Android ecosystem makes life harder for developers, given that they can’t simply test their apps on a small number of popular devices and assume that everything will run smoothly for all users. Most developers keep a few phones and tablets handy to test their apps on, but few have access to a wide variety of recent devices to test every revision of their apps on.

 

google-io-20150241

 

 

Google Photos breaks free of Google+, now offers free, unlimited storage — from techcrunch.com by Sarah Perez

Excerpt:

Google officially announced its long-rumored revamp of its photo-sharing service, Google Photos, at its I/O developer conference in San Francisco today. The killer feature? Users can now back up full-resolution photos and videos – up to 16MP for photos and 1080p for videos – to Google’s cloud for free. The service will roll out to Android, iOS and web users starting today, the company says.

The free storage option makes more sense for those with point-and-shoot cameras, and lets you keep a copy of your photos that’s good for your typical printing and photo-sharing needs. However, those with DSLR cameras or who want to store their photos and videos in their original sizes can choose a different plan which taps into your Google Account’s 15 GB of free storage. This is what was available before, and you can add to your storage quota as needed for a fee.

 

Android developer news from Google I/O 2015 — from lynda.com by David Gassner

Excerpt:

The keynote featured a smorgasbord of new technologies and additions/improvements to existing platforms. A stream of presenters followed each other across the stage, each talking about what was new for 2015. They covered Android, Chrome, and Chromebooks, virtual reality, 360-degree camera arrays, a stripped-down version of Android for the internet of things, and many other geeky new toys. Here’s what’s coming to an Android device or Android developer workstation near you.

Google I/O 2015: How context is slowly killing off the mobile app menu — from zdnet.com by Kevin Tofel
Summary: Google’s new Now On Tap feature, coming with Android M, shows a future where you don’t hunt through home screens and menus to find an app. The right apps come to you.

 

From DSC:
In watching the video below, note how this presenter is able to initiate cameras/feeds/effects with the touch of a button (so to speak). What if recording studios could be set up for professors, teachers, and trainers to use like this? It could be sharp — especially given the movement towards flipping the classroom and implementing active-learning-based environments.


From DSC:
The articles below illustrate the continued convergence of multiple technologies and means of communication. For example, what we consider “TV” is changing rapidly. As this space changes, I’m looking for new opportunities and affordances that would open up exciting new approaches and avenues for educationally-related learning experiences.


 

Hootsuite and Tagboard team up to power social TV workflow — from adweek.com by Kimberlee Morrison

Excerpt (emphasis DSC):

More and more TV viewers are engaging with second screen devices while they watch broadcasts. A new partnership between Hootsuite and Tagboard hopes to bridge the gap between television and second screen social experiences.

Tagboard is a social media aggregation and curation platform that allows users to manage incoming social media posts for display, either on television broadcasts, or on screens at live events, and Hootsuite is a social media campaign management program. Their partnership enables integration for mutual users for real-time engagement.

 

 

Capture social content on display and TV with Hootsuite and Tagboard — from blog.hootsuite.com

Excerpt:

Adding social content to live TV broadcasts and sports games is a proven way to capture and keep your audience’s attention.

But the process isn’t that easy. For one, digital teams need to ensure that they review each piece of content (to keep it safe for the big screen), and this can create complicated and slow social media workflows.

To help streamline this process, Hootsuite has integrated with Tagboard, an innovative social media display tool, to provide an easy way to capture social content and incorporate it into on-air broadcasts, live event screens, or on digital platforms.

With the Tagboard app for Hootsuite, your team can put relevant and timely social content on air within seconds—when it matters most to the viewer.
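
Stripped to its essentials, the workflow described above is a filter-and-approve queue: collect posts carrying the show's hashtag, hold them for human review, and release only approved posts to the broadcast display. The Python sketch below is a generic illustration of that flow, not Hootsuite's or Tagboard's actual API.

```python
# Generic sketch of a social-TV curation queue: aggregate by hashtag,
# hold everything for human review, and release only approved posts to air.
incoming = [
    {"id": 1, "text": "Great goal! #BigGame", "tags": {"biggame"}},
    {"id": 2, "text": "Buy followers cheap!", "tags": {"biggame", "spam"}},
    {"id": 3, "text": "Crowd is electric tonight #BigGame", "tags": {"biggame"}},
]

def collect(posts, show_tag):
    """Keep only posts carrying the show's hashtag."""
    return [p for p in posts if show_tag in p["tags"]]

def review_queue(posts, approved_ids):
    """Simulate the human review step: only approved posts go on air."""
    return [p for p in posts if p["id"] in approved_ids]

on_air = review_queue(collect(incoming, "biggame"), approved_ids={1, 3})
for post in on_air:
    print("ON AIR:", post["text"])
```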

KUSA weather touchscreen with two anchors

 

Introducing the Tagboard App for Hootsuite — from blog.tagboard.com
Social TV is easier than ever with Tagboard’s new app for Hootsuite

 

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

What does ‘learning’ have to learn from Netflix? — from donaldclarkplanb.blogspot.com by Donald Clark

Excerpts:

Of course, young people are watching way less TV these days, TV is dying, and when they do watch stuff, it’s streamed, at a time that suits them. Education has to learn from this. I’m not saying that we need to replace all of our existing structures but moving towards understanding what the technology can deliver and what learners want (they shape each other) is worth investigation. Hence some reflections on Netflix.

Areas discussed:

  • Timeshifting
  • Data-driven delivery — Netflix’s recommendation engine
  • Data-driven content
  • Content that’s accessible via multiple kinds of devices
  • Going global

 

From DSC:
I just wanted to add a few thoughts here:

  1. The areas of micro-credentials, nano-degrees, services like stackup.net, big data, etc. may come to play a role in what Donald is talking about here.
  2. I appreciate Donald’s solid, insightful perspectives and his thinking out loud — some great thoughts in that posting (as usual).
  3. Various technologies seem to be making progress as we move towards a future where learning platforms will be able to deliver a personalized learning experience; as digital learning playlists and educationally-related recommendation engines become more available/sophisticated, highly-customized learning experiences should be within reach.
  4. At a recent Next Generation Learning Spaces Conference, one of the speakers stated, “People are control freaks — so let them have more control.”  Along these lines…ultimately, what makes this vision powerful is having more choice, more control.

 

 

MoreChoiceMoreControl-DSC

 

 

 

Also, some other graphics come to my mind:

 

MakingTVMorePersonal-V-NetTV-April2014

 

EducationServiceOfTheFutureApril2014

 

 

 

NHL-VirtualReality-WatchFromAnySeat-3-14-15

Excerpt:

AUSTIN, TX – Virtual reality is featured prominently at South By Southwest Sports this year, from using it to better train athletes with Oculus Rift to how it could transform the fan experience watching basketball, football and hockey at home.

The NHL had its first successful test of a 360-degree virtual reality experience at its Stadium Series game between the San Jose Sharks and Los Angeles Kings last month, mounting cameras around the glass that filmed HD images in the round.

 

 

NBA-VirtualReality-WatchFromAnySeat-3-14-15

Excerpt:

When basketball lovers aren’t able to trek to stadiums near and far to follow their favorite teams, it’s possible that watching games on a bar’s widescreen TV from behind bowls of wings is the next best thing. This may no longer be true, however, as a wave of court-side, 3D virtual game experiences is becoming available to superfans with Oculus gear.

Earlier this month, NextVR showed off its new enhanced spectator experiences at the 2015 NBA All-Star Technology Summit with virtual reality (VR) footage of an October 2014 Miami Heat and Cleveland Cavaliers match-up in Rio de Janeiro. The NBA also already announced plans to record VR sessions of the NBA All-Star Game, the Foot Locker Three-Point Contest, and the Sprite Slam Dunk event and practice.

 

NEXTVR-March2015

 

 

OculusRift-InSportsSXSW-2015

 

 

 

From DSC:
In the future, will you be able to “pull up a seat” at any lecture — anywhere in the world — that you want to attend?

 
Alternatively, another experiment might involve second-screening lectures — i.e., listening to the lecture on the main/large screen in your home or office while engaging in social learning/networking via a mobile device.

Consider this article:

TV-friendly social network Twitter is testing a new Social TV service on iPhones which provides users with content and interaction about only one TV show at a time.

The aim is to give users significantly better engagement with their favourite shows than they presently experience when they follow a live broadcast via a Twitter hashtag.

This radical innovation in Social TV design effectively curates just relevant content (screening out irrelevant tweets that use a show’s hashtag) and presents it in an easy-to-use interface.

If successful, the TV Timeline feature will better position Twitter as it competes with Facebook to partner with the television industry and tap advertising revenue related to TV programming.

 

AppleWatch-3-9-15

 

 

AppleResearchKit-3-9-15

 

 

Also see:

 

RobotsOpenUpWorldOfArt-March2015

 

With a special thanks to Krista Spahr, Senior Instructional Designer at Calvin College, for this resource.

 

Description:

March 1, 2015 | The De Young, one of San Francisco’s fine art museums, now has two robots that open the museum up to those who cannot attend, including the physically handicapped. John Blackstone reports on the state-of-the-art in museum tour guides, and interviews robotics activist Henry Evans, a former Silicon Valley executive who is now almost completely paralyzed, and who worked with the museum to make touring by robot a reality.

 

MicrosoftProductivityVision2015

 

Example snapshots from Microsoft’s Productivity Future Vision

 

 

MicrosoftProductivityVision2-2015

 

MicrosoftProductivityVision3-2015

 

MicrosoftProductivityVision5-2015

 

MicrosoftProductivityVision6-2015

 

MicrosoftProductivityVision7-2015

 

MicrosoftProductivityVision8-2015

 

MicrosoftProductivityVision4-2015

 

 

 