Imagine what learning could look like w/ the same concepts found in Skreens!


From DSC:
Imagine what learning could look like with the same concepts found in the
Skreens Kickstarter campaign, where you can use your mobile device to direct what you are seeing and interacting with on the larger screen. Hmmm… very interesting indeed! There are applications not only in the home (and on the road), but also in the active classroom, the boardroom, and the training room.


See Skreens.com & Learning from the Living [Class] Room


 

DanielChristian-AVariationOnTheSkreensTheme-9-29-15

 

 

Skreens-Sept2015Kickstarter

 

Skreens2-Sept2015Kickstarter

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

From DSC:
Some of the phrases and concepts that come to my mind:

  • tvOS-based apps
  • Virtual field trips while chatting or videoconferencing with fellow learners about that experience
  • Virtual tutoring
  • Global learning for K-12, higher ed, the corporate world
  • Web-based collaborations and communications
  • Ubiquitous learning
  • Transmedia
  • Analytics / data mining / web-based learner profiles
  • Communities of practice
  • Lifelong learning
  • 24×7 access
  • Reinvent
  • Staying relevant
  • More choice. More control.
  • Participation.
  • MOOCs — or what they will continue to morph into
  • Second screens
  • Mobile learning — and the ability to quickly tie into your learning networks
  • Ability to contact teachers, professors, trainers, specialists, librarians, tutors and more
  • Language translation
  • Informal and formal learning, blended learning, active learning, self-directed learning
  • The continued convergence of the telephone, the television, and the computer
  • Cloud-based apps for learning
  • Flipping the classroom
  • Homeschooling
  • Streams of content
  • …and more!

Addendum:

Check out this picture from Meet the winners of #RobotLaunch2015

Packed house at WilmerHale for the Robot Launch 2015 judging – although two-thirds of the participants were attending and pitching remotely via video and web conferencing.

 

Discovr Labs brings Virtual Reality to the classroom, lets teachers see what students see  — from techcrunch.com by Greg Kumparak

Excerpt (emphasis DSC):

As consumers, we tend to focus on how virtual reality will work in our homes — the new types of games it allows, the insane 360-degree cinematic experiences, etc. Some of VR’s greatest potential, though, lies not at home, but in the classroom.

Discovr Labs has built an interface and technology to help teachers use VR as a teaching tool. After the student straps on their headset, Discovr allows the teacher to select which module the student is interacting with, and to see exactly what the student sees; everything from the headset is beamed, wirelessly, to an all-seeing interface.

 

For now, Discovr is focusing on a local experience, with all of the students being in the same room as the teacher. Moving forward, they envision remote experiences where students and their teachers can come together in VR experiences regardless of their physical location.

 

Also see:

 

discovrlabs-sept2015

 

discovrlabs2-sept2015

A somewhat related addendum on 9/24/15:

Virtual Zeno Robot – The Future of Augmented Reality in Education — from virtual-strategy.com

Excerpt:

Digital Elite developed a new line of low-cost head mounted augmented reality paper viewers specifically for the education market. A number of novel applications in the field of robotics, virtual physics and a unique book for autism is already readily supported by the viewer. More Apps in the pipeline are being developed to support other immersive and virtual experiences.

In a scientific study the new viewers have compared favorably against expensive VR headsets, such as the Samsung/Oculus Gear VR and Zeiss VR One. The viewers were also tested in a number of real-life education scenarios and Apps. One example is a virtual robot teaching physics and geography deployed in an Augmented Reality (AR) application to break down the final frontier between physical robots and their virtual counterparts. Results are being published in conferences in Hong Kong today and Korea later this month.

 

 

Addendum on 9/25/15:

  • Five emerging trends for innovative tech in education — from jisc.ac.uk by Matt Ramirez
    No longer simply future-gazing, technologies like augmented and virtual reality (AR/VR) are becoming firmly accepted by the education sector for adding value to learning experiences.
 

From DSC:
The following app looks to be something that the “makers” of the programming world might be interested in. It could also be great for learners interested in engineering, robotics, and/or machine-to-machine communications; using this app could help start them down a very promising path.

 

Tickle-Sept2015

From their website:

Programming re-imagined for the connected world.
Learn to program Arduino, drones, robots, connected toys, and smart home devices, all wirelessly. Tickle is easy to learn, fun to use, and 1000x more powerful!

Easy to learn, yet incredibly powerful!
Start learning programming the same way as Computer Science courses at Harvard and UC Berkeley. Tickle is used by makers and designers around the world, including Stanford University computer scientists, to create custom robots and interactive projects.

Spark your imagination.
Create stunning games and interactive stories using our library of animated characters and sounds. With the ability to program devices to interact with other devices and virtual characters, the possibilities are limitless.

1000X the super (programming) power.
If you’ve tried Scratch, Hopscotch, Tynker, Blockly, Scratch Jr, Kodable, Pyonkee, mBlock, or Code.org, now you can go beyond the screen and program your own connected future.

 

The multi-device world is evolving! — from upsidelearning.com by Amit Garg

Excerpt:

The multi-device world is certainly evolving. Existing devices are evolving in size while new devices are being added to the mix. All of this is resulting in evolving behavior patterns.

The smart thing to do is to get started with multi-device learning without waiting for the world to settle down. You might make a few mistakes, but you will learn along the way and still be better off than those who never got started.

 


Smartphone Screen Size

 

“…the boundaries between tablets and smartphones are blurring…”

 

Now we’re talking! One step closer! “The future of TV is apps.” — per Apple’s CEO, Tim Cook

OneStepCloser-DanielChristian-Sept2015

 

From DSC:
We’ll also be seeing the integration of the areas listed below with this type of “TV”-based OS/platform:

  • Artificial Intelligence (AI)
  • Data mining and analytics
  • Learning recommendation engines
  • Digital learning playlists
  • New forms of Human Computer Interfaces (HCI)
  • Intelligent tutoring
  • Social learning / networks
  • Videoconferencing with numerous other learners from across the globe
  • Virtual tutoring, virtual field trips, and virtual schools
  • Online learning to the Nth degree
  • Web-based learner profiles
  • Multimedia (including animations, simulations, and more)
  • Advanced forms of digital storytelling
  • and, most assuredly, more choice & more control.

Competency-based education and much lower cost alternatives could also be possible with this type of learning environment. The key will be to watch — or better yet, to design and create — what becomes of what we’re currently calling the television, and what new affordances/services the “TV” begins to offer us.

 

MoreChoiceMoreControl-DSC

From Apple’s website:

Apple Brings Innovation Back to Television with The All-New Apple TV
The App Store, Siri Remote & tvOS are Coming to Your Living Room

Excerpt:

SAN FRANCISCO — September 9, 2015 — Apple® today announced the all-new Apple TV®, bringing a revolutionary experience to the living room based on apps built for the television. Apps on Apple TV let you choose what to watch and when you watch it. The new Apple TV’s remote features Siri®, so you can search with your voice for TV shows and movies across multiple content providers simultaneously.

The all-new Apple TV is built from the ground up with a new generation of high-performance hardware and introduces an intuitive and fun user interface using the Siri Remote™. Apple TV runs the all-new tvOS™ operating system, based on Apple’s iOS, enabling millions of iOS developers to create innovative new apps and games specifically for Apple TV and deliver them directly to users through the new Apple TV App Store™.

tvOS is the new operating system for Apple TV, and the tvOS SDK provides tools and APIs for developers to create amazing experiences for the living room the same way they created a global app phenomenon for iPhone® and iPad®. The new, more powerful Apple TV features the Apple-designed A8 chip for even better performance so developers can build engaging games and custom content apps for the TV. tvOS supports key iOS technologies including Metal™, for detailed graphics, complex visual effects and Game Center, to play and share games with friends.

 

Addendum on 9/11/15:

 

From DSC:
On 9/9/15, Apple held their September Event 2015. Below are some of the articles/items that capture some of the announcements from that event.


From Apple.com:

AppleAnnouncements-9-9-15

 

Apple Watch Finally Getting Native Apps, New Features Next Week — from wsj.com by Nathan Olivarez-Giles

Excerpt:

Native apps will arrive on the Apple Watch as part of a major update known as WatchOS 2. The promise is that native apps will run faster on the watch, though the watch isn’t free of the iPhone yet. The Apple Watch doesn’t have its own cellular connection, so for any information an app wants to pull from the Internet (like live sports scores, transit times, or messages from friends), it’ll still rely on an iPhone.

Still, the promise of any added speed here is significant. Unlike smartphone apps, which are built to consume your attention for minutes at a time, Apple Watch apps are built to get you in and out with the information you need in a matter of seconds.

 

Developers can register now to apply for a new Apple TV hardware ahead of general release, supplies limited — from 9to5mac.com

 

Apple unveils next generation iPhone 6s, 6s Plus — by Jason Cipriani
The tech giant introduced a new touch-sensitive technology called 3D Touch to its next generation smartphones.

Excerpt:

Apple added pressure-sensitive technology to its new iPhones, which the company is calling 3D Touch. The technology can launch features and perform a variety of functions when varying levels of force are applied to the screen.

 

 


A side comment from DSC for those students involved with graphic design, digital media, and/or communications:
Study the work of the folks who were in charge of presenting the complex, technical information — doing so in a way that was extremely polished, engaging, and professionally done. The videos, for example, were very well done. My hat’s off to these extremely creative folks — they are clearly at the top of their industry.


 

 

 

Addendum on 9/11/15:

 

HBX Intros HBX Live Virtual Classroom — from campustechnology.com by Rhea Kelly

Excerpt:

Harvard Business School’s HBX digital learning initiative today launched a virtual classroom designed to reproduce the intimacy and synchronous interaction of the case method in a digital environment. With HBX Live, students from around the world can log in concurrently to participate in an interactive discussion in real time, guided by an HBS professor.

Built to mimic the amphitheater-style seating of an HBS classroom, the HBX Live Studio features a high-resolution video wall that can display up to 60 participants. Additional students can audit sessions via an observer model. An array of stationary and roaming cameras capture the action, allowing viewers to see both the professor and fellow students.

 

HBX Live

HBX Live’s virtual amphitheater
(PRNewsFoto/Harvard Business School)

 

Also see HBX Live in Action

I think that this type of setup could also be integrated with a face-to-face classroom (given the right facilities). The HBX Live concept fits right into a piece of my vision entitled, “Learning from the Living [Class] Room.”

Several words/phrases come to mind:

  • Convenience. I don’t have to travel to another city, state, country. That type of convenience and flexibility is the basis of why many learners take online-based courses in the first place.
  • Global — learning from people of different cultures, races, backgrounds, life experiences.
  • The opportunities are there to increase one’s cultural awareness.
  • HBX Live is innovative; in fact, Harvard is upping its innovation game yet again — showing that it realizes the landscape of higher education is changing and that institutions of traditional higher education need to adapt.
  • Harvard is willing to experiment and to identify new ways to leverage technologies — taking advantage of the affordances that various technologies offer.

BTW, note how the use of teams is a requirement here.

 

HBXLive-8-26-2015

 

 

Also see:

Harvard Business School really has created the classroom of the future — from fortune.com by  John A. Byrne

Excerpt:

Anand, meantime, faces the images of 60 students portrayed on a curved screen in front of him, a high-resolution video wall composed of more than 6.2 million pixels that mimics the amphitheater-style seating of a tiered HBS classroom.

 

What might our learning ecosystems look like by 2025? [Christian]

This posting can also be seen out at evoLLLution.com (where LLL stands for lifelong learning):

DanielChristian-evoLLLutionDotComArticle-7-31-15

 

From DSC:
What might our learning ecosystems look like by 2025?

In the future, learning “channels” will offer more choice, more control.  They will be far more sophisticated than what we have today.

 

MoreChoiceMoreControl-DSC

 

That said, what the most important aspects of online course design will be 10 years from now depends upon what types of “channels” there will be and what might be offered via those channels. By channels, I mean forms, methods, and avenues of learning that a person could pursue and use. In 2015, some example channels might be:

  • Attending a community college, a college or a university to obtain a degree
  • Obtaining informal learning during an internship
  • Using social media such as Twitter or LinkedIn
  • Reading blogs, books, periodicals, etc.

In 2025, there will likely be new and powerful channels for learning, enabled by innovative forms of communications along with new software, hardware, and other technological advancements. For example, one could easily imagine:

  • That the trajectory of deep learning and artificial intelligence will continue, opening up new methods of how we might learn in the future
  • That augmented and virtual reality will allow for mobile learning to the Nth degree
  • That the trend of Competency Based Education (CBE) and microcredentials may be catapulted into the mainstream via the use of big data-related affordances

Due to time and space limitations, I’ll focus here on the more formal learning channels that will likely be available online in 2025. In that environment, I think we’ll continue to see different needs and demands – thus we’ll still need a menu of options. However, the learning menu of 2025 will be more personalized, powerful, responsive, sophisticated, flexible, granular, modularized, and mobile.

 


Highly responsive, career-focused track


One part of the menu of options will focus on addressing the demand for more career-focused information and learning that is available online (24×7). Even in 2015, with the U.S. government saying that 40% of today’s workers now have “contingent” jobs, and others saying that percentage will continue climbing to 50% or more, people will be forced to learn quickly in order to stay marketable. Also, the half-life of information may be quite short, especially if we continue on our current trajectory of exponential (vs. linear) change.

However, keeping up with that pace of change is currently proving to be out of reach for most institutions of higher education, especially given the current state of accreditation and governance structures throughout higher education, as well as how our current teaching and learning environments are set up (i.e., the use of credit hours, four-year degrees, etc.). By 2025, accreditation will have been forced to change to allow for alternative forms of learning and alternative methods of obtaining credentials. Organizations that offer channels with a more vocational bent to them will need to be extremely responsive as they attempt to offer up-to-date, highly relevant information that will immediately help people be more employable and marketable. Being nimble will be the name of the game in this arena. Streams of content will be especially important here. There may not be enough time to merit creating formal, sophisticated courses on many career-focused topics.

 

StreamsOfContent-DSC

 

With streams of content, the key value provided by institutions will be to curate the most relevant, effective, reliable, up-to-date content…so one doesn’t have to drink from the Internet’s firehose of information. Such streams will also surface potentially game-changing scenarios and provide a pulse check on a variety of trends that could affect an industry. Social-based learning will be key here, as learners contribute to each other’s learning. Subject Matter Experts (SMEs) will need to be knowledgeable facilitators of learning; but given the pace of change, true experts will be rare indeed.

Microcredentials, nanodegrees, competency-based education, and learning from one’s living room will be standard channels in 2025.  Each person may have a web-based learner profile by then and the use of big data will keep that profile up-to-date regarding what any given individual has been learning about and what skills they have mastered.

For example, even currently in 2015, a company called StackUp creates a StackUp Report to add to one’s resume or grades, asserting that its services can give “employers and schools new metrics to evaluate your passion, interests, and intellectual curiosity.” StackUp captures, categorizes, and scores everything you read and study online. It can track your engagement on a given website, for example, and then score the time spent there. This type of information can then provide insights into the time you spend learning.
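StackUp’s actual scoring method isn’t public; purely as a hypothetical illustration of the general mechanic described above (categorize browsing events, then roll the time spent up into per-topic totals), consider this sketch. The site-to-topic mapping and function names are invented:

```python
from collections import defaultdict

# Hypothetical site-to-topic mapping; a real service would categorize
# URLs with a far richer taxonomy.
SITE_TOPICS = {
    "arduino.cc": "electronics",
    "khanacademy.org": "math",
    "techcrunch.com": "tech news",
}

def score_engagement(events):
    """events: (site, minutes_spent) pairs -> total minutes per topic."""
    minutes_by_topic = defaultdict(float)
    for site, minutes in events:
        topic = SITE_TOPICS.get(site, "uncategorized")
        minutes_by_topic[topic] += minutes
    return dict(minutes_by_topic)

profile = score_engagement([
    ("arduino.cc", 25),
    ("khanacademy.org", 40),
    ("arduino.cc", 15),
])
print(profile)  # → {'electronics': 40.0, 'math': 40.0}
```

Totals like these could then feed the kind of web-based learner profile discussed above.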

Project teams and employers could create digital playlists that prospective employees or contractors will have to advance through; and such teams and employers will be watching to see how the learners perform in proving their competencies.

However, not all learning will be in the fast lane and many people won’t want all of their learning to be constantly in the high gears. In fact, the same learner could be pursuing avenues in multiple tracks, traveling through their learning-related journeys at multiple speeds.

 


The more traditional liberal arts track


To address these varied learning preferences, another part of the menu will focus on channels that don’t need to change as frequently. The focus here won’t be on quickly moving streams of content; the course designers in this track can take a bit more time to offer far more sophisticated options and activities that people will enjoy going through.

Along these lines, some areas of the liberal arts* will fit in nicely here.

*Speaking of the liberal arts, a brief but important tangent needs to be addressed, for strategic purposes. While the following statement will likely be highly controversial, I’m going to say it anyway.  Online learning could be the very thing that saves the liberal arts.

Why do I say this? Because as the price of higher education continues to increase, the dynamics and expectations of learners continue to change. As prices rise, so do people’s expectations and perspectives. It may turn out that people are willing to pay a price that ends up being a fraction of today’s prices. But such greatly reduced prices won’t likely be feasible in face-to-face environments, as offering those types of learning environments is expensive. However, such discounted prices can and could be offered via online-based environments. So, much to the chagrin of many in academia, online learning could be the very thing that provides the type of learning, growth, and some of the experiences that liberal arts programs have been about for centuries. Online learning can offer a lifelong supply of the liberal arts.

But I digress…
By 2025, a Subject Matter Expert (SME) will be able to offer excellent, engaging courses chock-full of:

  • Engaging story/narrative
  • Powerful collaboration and communication tools
  • Sophisticated tracking and reporting
  • Personalized learning, tech-enabled scaffolding, and digital learning playlists
  • Game elements or even, in some cases, multiplayer games
  • Highly interactive digital videos with built-in learning activities
  • Transmedia-based outlets and channels
  • Mobile-based learning using AR, VR, real-world assignments, objects, and events
  • …and more.

However, such courses won’t be created by one person alone. Their sophistication will require a team of specialists – and likely a list of vendors, algorithms, and/or open source-based tools – to design and deliver this type of learning track.

 


Final reflections


The marketplaces involving education-related content and technologies will likely look different. There could be marketplaces for algorithms as well as for very granular learning modules. In fact, modularization could be huge by 2025, allowing digital learning playlists to be built by an SME, a Provost, and/or a Dean (in addition to the aforementioned employer or project team). Any assistance a learner may require will be provided via technology (likely an Artificial Intelligence (AI)-enabled resource) and/or via an SME.

We will likely either have moved away from using Learning Management Systems (LMSs) or those LMSs will allow for access to far larger, integrated learning ecosystems.

Functionality-wise, collaboration tools will still be important, but they might be mind-blowing to those of us living in 2015. For example, holographic-based communications could easily be commonplace by 2025. Where tools like IBM’s Watson, Microsoft’s Cortana, Google’s DeepMind, and Apple’s Siri end up in our future learning ecosystems is hard to tell, but they will likely be there. New forms of Human Computer Interaction (HCI), such as Augmented Reality (AR) and Virtual Reality (VR), will likely be mainstream by 2025.

While the exact menu of learning options is unclear, what is clear is that change is here today and will likely be here tomorrow. Those willing to experiment, to adapt, and to change have a far greater likelihood of surviving and thriving in our future learning ecosystems.

 

From DSC:
Swivl allows faculty members, teachers, trainers, and other Subject Matter Experts (SMEs) to make recordings where the recording device swivels to follow the SME (who is holding/wearing a remote). Recordings can be automatically sent to the cloud for further processing/distribution.

My request to you is:
Can you extend the Swivl app to not only provide recordings, but also rough-draft transcripts of those recordings?

This could be very helpful for accessibility reasons, but also to provide students/learners with a type of media that they prefer (video, audio, and/or text).
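As a rough sketch of what such an extension might look like on the processing side, the pipeline below stubs out the speech-to-text step. The `transcribe` function, URLs, and data shapes are all hypothetical; a real implementation would call an actual speech-recognition service:

```python
from dataclasses import dataclass

def transcribe(audio: bytes) -> str:
    """Stub standing in for a real speech-to-text service.

    A production version would call an actual recognition API and
    return a rough-draft transcript for a human to clean up.
    """
    return f"[rough-draft transcript of {len(audio)} bytes of audio]"

@dataclass
class ProcessedRecording:
    video_url: str
    audio_url: str
    transcript: str

def process_recording(recording_id: str, audio: bytes) -> ProcessedRecording:
    # After the recording auto-uploads to the cloud, publish all three
    # media types (video, audio, text) so learners can pick the format
    # they prefer.  The URLs below are hypothetical.
    base = f"https://cloud.example.com/recordings/{recording_id}"
    return ProcessedRecording(
        video_url=f"{base}/video.mp4",
        audio_url=f"{base}/audio.m4a",
        transcript=transcribe(audio),
    )

rec = process_recording("lecture-001", b"\x00" * 1024)
print(rec.transcript)  # → [rough-draft transcript of 1024 bytes of audio]
```

The key design point is that the transcript is treated as a first-class output alongside the video and audio, not an afterthought.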

 

Swivl-2015

 

 

From DSC:
A note to Double Robotics — can you work towards implementing M2M communications?

That is, when our Telepresence Robot from Double Robotics approaches a door, we need for sensors on the door and on the robot to initiate communications with each other in order for the door to open (and to close after __ seconds). If a security clearance code is necessary, the remote student or the robot needs to be able to transmit the code.

When the robot approaches an elevator, we need for sensors near the elevator and on the robot to initiate communications with each other in order for the elevator to be summoned and then for the elevator’s doors to open. We then need for the remote student/robot to be able to tell the elevator which floor it wants to go to and to close the elevator door (if it hasn’t already done so).

Such scenarios imply that we need:

  1. Industry standards for such communications
  2. Standards that include security clearances (i.e., “OK, I’ll let you into this floor of this public building.” or “No, I can’t open up this door without you sending me a security clearance code.”)
  3. Secure means of communications
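For illustration only, here is one way requirements #2 and #3 could fit together: a simple challenge/response handshake in which the door issues a fresh nonce and the robot proves it holds a shared credential. Everything here (the class names, the HMAC-over-nonce scheme, the out-of-band provisioning of the secret) is a hypothetical sketch, not an existing industry standard:

```python
import hashlib
import hmac
import secrets

SHARED_SECRET = b"provisioned-out-of-band"  # hypothetical per-robot credential

class DoorController:
    """Toy door-side endpoint: challenges a robot, verifies its response."""
    def __init__(self, secret: bytes):
        self.secret = secret
        self.nonce = None

    def challenge(self) -> bytes:
        # A fresh nonce per attempt prevents replaying an old clearance code.
        self.nonce = secrets.token_bytes(16)
        return self.nonce

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self.secret, self.nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class Robot:
    """Toy robot-side endpoint: answers a door's challenge."""
    def __init__(self, secret: bytes):
        self.secret = secret

    def answer(self, nonce: bytes) -> bytes:
        return hmac.new(self.secret, nonce, hashlib.sha256).digest()

door = DoorController(SHARED_SECRET)
robot = Robot(SHARED_SECRET)

nonce = door.challenge()
assert door.verify(robot.answer(nonce))          # authorized robot: door opens
intruder = Robot(b"wrong-secret")
assert not door.verify(intruder.answer(nonce))   # wrong credential: stays shut
```

A real standard would also need to cover transport encryption, key rotation, and per-floor authorization levels, but the basic shape of the exchange would look something like this.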

 

DoubleRobotics-Feb2014

 

 

 

doublerobotics.com -- wheels for your iPad

 

 

A major request/upgrade might also include:

  • The ability for the Double App on the iPad to make recordings, with an auto-send option to send those recordings to the cloud, and then provide transcripts of those recordings. This could be very helpful for accessibility reasons, but also to provide students/learners with the type of media they prefer (video, audio, and/or text).

 

 

 

IRIS.TV Finds Adaptive Video Personalization Increases Consumption by 50% — from appmarket.tv by Richard Kastelein

Excerpt (emphasis DSC):

IRIS.TV, the leading in-player video recommendation engine, reports that its programmatic video delivery technology, Adaptive Stream™, has increased video consumption by 50% across all its clients. These findings further solidify that consumption of online video is significantly enhanced by a personalized user experience.

By integrating Adaptive Stream™ in their video players and mobile apps, IRIS.TV’s clients are able to deliver the most relevant streams of video to their viewers, just like TV. Yet unlike TV, viewers’ feedback is captured in real time through interactive buttons, allowing the stream to dynamically adapt to changing preferences.
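IRIS.TV’s actual system is proprietary; purely as a toy illustration of the general idea (real-time feedback buttons reweighting which clip streams next), one might sketch something like this. The class, signals, and catalog are invented for illustration:

```python
from collections import defaultdict

class AdaptiveStream:
    """Toy in-player recommender: feedback buttons reweight categories,
    and the next clip comes from the highest-scoring category that
    still has unseen clips."""

    def __init__(self, catalog):
        self.catalog = catalog                # category -> list of clip ids
        self.scores = defaultdict(float)      # category -> running score
        self.cursor = defaultdict(int)        # category -> next unseen index

    def feedback(self, category, signal):
        # "more" and "skip" stand in for the interactive buttons.
        self.scores[category] += {"more": 1.0, "skip": -1.0}[signal]

    def next_clip(self):
        ranked = sorted(self.catalog, key=lambda c: self.scores[c], reverse=True)
        for category in ranked:
            i = self.cursor[category]
            if i < len(self.catalog[category]):
                self.cursor[category] += 1
                return self.catalog[category][i]
        return None  # catalog exhausted

stream = AdaptiveStream({
    "news":   ["news-1", "news-2"],
    "sports": ["sports-1", "sports-2"],
})
stream.feedback("sports", "more")
stream.feedback("news", "skip")
print(stream.next_clip())  # → sports-1
```

Every button press shifts the category weights, so the stream keeps adapting as the viewer watches.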

 

IRIS-dot-TV-Julne2015

 

Press Release: IRIS.TV Launches Personalized End-screen for Online Video with Kaltura — from iris.tv
IRIS.TV Partners with Kaltura to offer programmatic content delivery and in-player thumbnail recommendations

Excerpt:

Los Angeles, May 4, 2015 – IRIS.TV, the leading programmatic content delivery system, today announced a new dynamic, personalized end-screen plugin for Kaltura, provider of the leading video technology platform.

IRIS.TV’s new plugin for Kaltura will offer clients both a personalized, dynamic stream of video and a personalized end-screen framework. Publishers can now provide users with dynamic streams of video – powered by consumption and interaction – along with thumbnail choices specific to their real-time consumption habits. This integration supplies publishers with additional tools to deliver a more personalized viewing experience in order to maximize viewer retention and video views. The partnership aims to help consumers discover relevant and engaging content while viewing across all connected devices.

 

From DSC:
Now imagine these same concepts of recommendation engines and personalized/dynamic/interactive streams of content, but this time applied to delivering personalized digital learning playlists on topics that you want to learn about. With the pace of change and a shrinking half-life for many pieces of information, this could be a powerful way to keep abreast of any given topic. Team these concepts up with the idea of learning hubs — whereby some of the content is delivered electronically and some of the content is discussed/debated face-to-face — and you have some powerful, up-to-date opportunities for lifelong learning. Web-based learner profiles and services like StackUp could continually populate one’s resume and list of skills — available to whomever you choose.
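To make the playlist idea concrete, here is a hypothetical sketch: given the skills a learner profile already shows as mastered and the skills an employer or project team is asking for, pick the catalog modules that cover the gap, in prerequisite order. The module catalog, skill names, and gap logic are all invented for illustration:

```python
def build_playlist(modules, mastered, target_skills):
    """Pick modules for skills still missing, in prerequisite order.

    modules: skill -> (module title, list of prerequisite skills)
    """
    playlist, have = [], set(mastered)
    remaining = [s for s in target_skills if s not in have]
    while remaining:
        progressed = False
        for skill in list(remaining):
            title, prereqs = modules[skill]
            if all(p in have for p in prereqs):
                playlist.append(title)
                have.add(skill)
                remaining.remove(skill)
                progressed = True
        if not progressed:
            break  # a prerequisite lies outside the targeted skills
    return playlist

# Invented catalog for illustration
MODULES = {
    "algebra":    ("Algebra Refresher", []),
    "statistics": ("Intro to Statistics", ["algebra"]),
    "ml-basics":  ("Machine Learning Basics", ["statistics"]),
}

playlist = build_playlist(MODULES, mastered={"algebra"},
                          target_skills=["ml-basics", "statistics"])
print(playlist)  # → ['Intro to Statistics', 'Machine Learning Basics']
```

A learner profile kept current by engagement data would supply the `mastered` set automatically, so the playlist shrinks as skills are demonstrated.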

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

StreamsOfContent-DSC

 

 

Along these lines, also see:

  • Nearly 1 billion TV sets Internet connected by 2020 — from appmarket.tv
    Excerpt:

    The number of TV sets connected to the Internet will reach 965 million by 2020, up from 103 million at end-2010 and the 339 million expected at end-2014, according to a new report from Digital TV Research. Covering 51 countries, the Connected TV Forecasts report estimates that the proportion of TV sets connected to the Internet will rocket to 30.4% by 2020, up from only 4.2% at end-2010 and the 12.1% expected by end-2014. South Korea (52.7%) will have the highest proportion by 2020, followed by the UK (50.6%), Japan (48.6%) and the US (47.0%).

 

  • McDonnell – HTML5 is the true Second Screen, Social TV winner — from appmarket.tv
    Excerpt:
    After years of evolution, the W3C has finally declared the HTML5 standard complete. When Steve Jobs “declared war on Flash” he gave HTML5 a fighting chance of dominance. In parallel, businesses started to recognise the potential of Social TV or “Second Screen” behaviour to re-invigorate old media and drive revenue to newer social platforms like Twitter. The ensuing debate centred on winners and losers, but with such a diverse global broadcasting market and no social network with dominance in all countries, could the web standard be the ultimate winner? I think it already is.

 

 

Augmented Reality – hype, or the future? — from mybroadband.co.za by EE Publishers
The concept of Augmented Reality has existed for many years now – and commentators have remained rigidly sceptical about whether it will truly materialise in our everyday lives.

Excerpts:

Essentially, AR describes the way in which a device (such as a smartphone camera, wearables like Google Glasses, or a videogame motion sensor) uses an application to “see” the real world around us, and overlay augmented features onto that view.

These augmented features are aimed at adding value to the user’s experience of their physical environment.

We predict there will be three primary areas in which we will initially engage with AR:

The way we consume information: Imagine going to a library or museum and being able to have a “conversation” with key people throughout history, or instantly transform the world around you to resemble a bygone era.

The fields of product development, marketing and customer engagement: Overlaying new digital services into retail environments opens up a host of new possibilities for organisations to tailor products to their consumers.

Back office/operational functions within the organisation: Companies are already finding ways to use AR in their supply chain, logistics and warehousing environments. Find out, for example, if a particular unit is missing from a shelf.

 

 

From DSC:
I wonder how Machine-to-Machine communications, beacons, GPS, and more will come into play here…? The end result, I think, is a connection between the physical world and the digital world:

 

DanielChristian-Combining-Digital-Physical-Worlds-Oct2014

 

 

AdobeCreativeCloud2015

 

Some resources on this announcement:

  • Adobe Unveils Milestone 2015 Creative Cloud Release — from adobe.com
    Excerpt:
    At the heart of Creative Cloud is Adobe CreativeSync, a signature technology that intelligently syncs creative assets: files, photos, fonts, vector graphics, brushes, colors, settings, metadata and more. With CreativeSync, assets are instantly available, in the right format, wherever designers need them – across desktop, web and mobile apps. Available exclusively in Creative Cloud, CreativeSync means work can be kicked off in any connected Creative Cloud mobile app or CC desktop tool; picked up again later in another; and finished in the designer’s favorite CC desktop software.

  • Adobe updates Creative Cloud in milestone 2015 release — from creativebloq.com
    Powerful updates to Photoshop CC, Illustrator CC, Premiere Pro CC and InDesign CC; new mobile apps for iOS and Android and more. Here’s everything you need to know.

  • Adobe launches Adobe Stock, included in Creative Cloud, as well as a stand-alone service — from talkingnewmedia.com by D.B. Hebbard
    Pricing for CC customers is $9.99 for a single image; $29.99 per month for 10 images monthly; and $199 per month for 750 images monthly
    Excerpt:
    [On 6/15/15] Adobe has launched Adobe Stock, its new stock photography service. It is now included in CC and will appear as one of the five top menu items in the CC app (Home, Apps, Assets, Stock and Community). Many will have noticed the update to the app that came through yesterday.
  • Adobe launches radical new stock image service — from creativebloq.com
    Excerpt:
    Adobe has launched Adobe Stock, a new service that simplifies the process of buying and using stock content, including photos, illustrations and vector graphics. Part of the milestone 2015 Creative Cloud release announced this morning, Adobe Stock is a curated collection of 40 million high-quality photos, vector graphics and illustrations. The aim? To help creatives jump-start their projects.

    Photographers and designers can also contribute work to Adobe Stock. Adobe says it will offer industry-leading rates, while giving creatives access to a global community of stock content buyers.
  • Adobe Illustrator CC is now 10 times faster — from creativebloq.com
  • The best new features in Adobe Photoshop CC — from creativebloq.com

Adobe Photoshop CC

 

 

Do you see what I see? Smart glasses, VR, and telepresence robots — from arstechnica.com by Megan Geuss
Heightened reality will hit industry and gaming before it changes anyone’s day-to-day.

 

 

 

Oculus VR unveils the version of Oculus Rift you’ll actually buy — from mashable.com by JP Mangalindan

Excerpt:

Oculus VR finally debuted the long-awaited consumer version of Oculus Rift, the virtual reality headset, at a media event in San Francisco on Thursday [6/11/15].

“For the first time we’ll finally be on the inside of the game,” Oculus CEO Brendan Iribe said onstage. “Gamers have been dreaming of this. We’ve all been dreaming of this for decades.”

Oculus Touch

 

 

Virtual reality apps market set to explode — from netguide.co.nz

Excerpt:

Augmented Reality (AR) apps in the mobile games market will generate 420 million downloads annually by 2019, up from 30 million in 2014, according to Juniper Research’s research titled Augmented Reality: Consumer, Enterprise and Vehicles 2015-2019.

The emergence of Head Mounted Devices (HMDs) used in the home, such as Microsoft’s Hololens, will bring a surge in interest for AR games over the next five years, according to Juniper.

For the time being however, most AR downloads will occur via smartphones and tablets.

 

 

What the Surreal Vision acquisition means for Oculus — from fortune.com by  John Gaudiosi
Oculus now has the technology to blend augmented reality with virtual reality.

Excerpt:

Oculus VR last week acquired Surreal Vision, a company creating real-time 3D scene reconstruction technology that will allow users to move around the room and interact with real-world objects while immersed in VR.

 

 

Microsoft pulls back curtain on Surface Hub collaboration screen — by Shira Ovide

Excerpt:

Microsoft announced on Wednesday [6/10/15] the price tag for a piece of audio-visual equipment that it first showed off in January. Surface Hub, which will cost up to $20,000 for a model with an 84-inch screen, is like the merger of a high-end video conference system, electronic whiteboard and Xbox.

The product plunges Microsoft headlong into competition with Cisco and other traditional providers of conference room audio-visual systems.

Microsoft is pitching Surface Hub as the best audio-video conference
equipment and collaboration tool a company can buy. It costs up to $20,000.
[From DSC: There will also be a $7,000, 55-inch version].

 

 

Bluescape launches new hardware program with MultiTaction, Planar Systems, and 3M — from Bluescape
Bluescape Showcases MultiTaction’s and Planar’s Interactive Displays Running Its Visual Collaboration Software at Booth #1690 at InfoComm 2015

Excerpt:

SAN CARLOS, CA–(Jun 15, 2015) – Bluescape, a persistent cloud-based platform for real-time visual collaboration, today announced the new Bluescape Hardware Program. Companies in the program offer hardware that complements the Bluescape experience and has been extensively tested and validated to work well with Bluescape’s platform. As collaboration spans an entire enterprise, Bluescape strives to support a range of hardware options so that organizations can choose hardware to fit different workspaces. The first three companies are market-leading interactive display vendors MultiTaction, Planar, and 3M.

MultiTaction, a leading developer of interactive display systems, offers advanced tracking performance that identifies fingers, hands, objects, 2D bar codes and IR pens. The unparalleled responsiveness of MultiTaction’s systems scales to an unlimited number of concurrent users and the displays are highly customizable to fit any existing corporate space. MultiTaction’s advanced interactive hardware combined with Bluescape’s software allows teams to connect content and people in one place, enabling deeper insights, meaningful innovation, and simultaneous collaboration across global time zones.

 

BlueScape-2015

 

 
New report lays out its 2040 vision of the workplace of the future — from workplaceinsight.net

Excerpt:

By 2040 knowledge workers will decide where and how they want to work, according to a new report on the workplace of the future by Johnson Controls’ Global Workplace Solutions business. The Smart Workplace 2040 report claims that 25 years from now, work will be seen as something workers do, rather than a place to which they commute. According to the study, work patterns will be radically different as a new generation of what it terms ‘workspace consumers’ choose their time and place of work. Most workers will frequently work from home, and will choose when to visit work hubs to meet and network with others. There will be no set hours and the emphasis will be on getting work done, while workers’ wellness will take priority. Technology will bring together networks of individuals who operate in an entrepreneurial way, with collaboration the major driver of business performance.

Also see:
Smart Workplace 2040: The rise of the workspace consumer — from JohnsonControls.com

SmartWorkPlace-2040-RiseOfWorkspaceConsumer-June2015

Executive Summary
Corporate organizations still regard the workplace as delivering a strong identity and, more than ever, as a marketing weapon that creates and sustains their corporate identity. The intensity of performance-level improvements has increased significantly over the past ten years, accelerating the pace of work through the combined power of technology and personalized, choice-based software solutions.

The presence of technology in every aspect of our lives in 2040 dominates our way of living. The workplace of 2040 is far more agile; technology is ultra-predominant and human beings are highly reliant on it. Yet the technology is “shy” – unobtrusive, transparent, and highly reliable – failure is neither a possibility nor an option:

  • The home (the Hive) is hyper-connected and adaptive, responsive to the environment and its users, supporting multiple requirements simultaneously
  • Complex software applications will suggest what users should do to maximise performance, avoid missing important deadlines, and ensure they allocate enough time for important tasks that are not necessarily urgent
  • End user services are autonomous, proactive and designed around enhancing the user experience
  • In a ubiquitously networked world, true offline time is both a luxury and a necessity. Being physically present is perceived as more authentic, a privilege
  • Adaptive white-noise technology makes it possible to have a first-rate telepresence session in an open environment
  • The whole DIY movement is experiencing a tremendous boost – people are literally building their own products, bought through their smartphones using mobile web applications and printed on demand
  • Lower energy costs and unmanned vehicles, combined with the high cost of owning a vehicle and high parking costs in densely populated megacities, benefit new sharing regimes.

In the context of this new world of work in 2040, the report contemplates an Eco Campus:

CHOICE: deciding where and how we want to work
ADAPTABILITY: adapting our working pattern to meet private needs and family constraints
WORK: entrepreneurship is the norm
LOCATION: deciding who we work with and how
WORKPLACE: access to “Trophy Workplaces” so going to the “office” is a luxury, a reward
SERVICES: offering real time services, catering to peak demands
WELLNESS: privileging wellness over work
NETWORK: reliance on an extremely wide network of experts to carry out our work

 

While Corporate Soldiers remain major actors in organisations, we are seeing a significant rise in entrepreneurial behaviour, transforming employees into a new breed of workers focused on achieving great results through their work activities as well as wellbeing in their private lives. Going to “work” therefore happens through a complex model of locations, spread across an Eco Campus less than 20 miles from home…

 
© 2024 | Daniel Christian