Predictions 2018: Technology, Media, and Telecommunications — from deloitte.com

The technology, media and entertainment, and telecommunications ecosystem remains as fascinating as ever in 2018. Will augmented reality become mainstream? How will machine learning affect the enterprise? What’s the future of the smartphone? Deloitte Global invites you to read the latest Predictions report, designed to provide insight into transformation and growth opportunities over the next one to five years.

TV is (finally) an app: The goods, the bads and the uglies for learning — from thejournal.com by Cathie Norris, Elliot Soloway

Excerpts:

Television. TV. There’s an app for that. Finally! TV — that is, live shows such as the news, specials, documentaries (and reality shows, if you must) — is now just like Candy Crush and Facebook. TV apps (e.g., DirecTV Now) are available on all devices — smartphones, tablets, laptops, Chromebooks. Accessing streams upon streams of videos is, literally, now just a tap away.

Plain and simple: readily accessible video can be a really valuable resource for learners and learning.

Not everything that needs to be learned is on video. Instruction will need to balance the use of video with the use of printed materials. That balance, of course, needs to take in cost and accessibility.

Now for the 800-pound gorilla in the room: Of course, that TV app could be a huge distraction in the classroom. The TV app has just piled yet another classroom management challenge onto a teacher’s back.

That said, it is early days for TV as an app. For example, HD (High Definition) TV demands high bandwidth — and we can experience stuttering/skipping at times. But, when 5G comes around in 2020, just two years from now, POOF, that stuttering/skipping will disappear. “5G will be as much as 1,000 times faster than 4G.”  Yes, POOF!
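A quick back-of-the-envelope check on why HD streaming can stutter on today's networks (the bitrate and throughput figures below are ballpark assumptions, not from the article):

```python
# Rough streaming-bandwidth arithmetic. Bitrates and throughput are typical
# ballpark figures, assumed for illustration only.
HD_STREAM_MBPS = 5      # ~1080p HD stream
UHD_STREAM_MBPS = 25    # ~4K UHD stream
AVG_4G_MBPS = 20        # typical real-world 4G throughput

def concurrent_streams(link_mbps, stream_mbps):
    """How many simultaneous streams a link can carry without stuttering."""
    return link_mbps // stream_mbps

print(concurrent_streams(AVG_4G_MBPS, HD_STREAM_MBPS))   # 4 HD streams on typical 4G
print(concurrent_streams(AVG_4G_MBPS, UHD_STREAM_MBPS))  # 0 -- 4K chokes a loaded 4G link
```

Even a modest multiple of 4G's throughput would make that headroom problem disappear for households full of simultaneous viewers, which is the substance behind the "POOF" above.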

 

From DSC:
Learning via apps is here to stay. “TV” as apps is here to stay. But what’s being described here is just one piece of the learning ecosystem that will be built over the next 5-15 years, an ecosystem that will likely be revolutionary in its global impact on how people learn and grow. There will be opportunities for social-based learning, project-based learning, and more, with digital video being one component of that ecosystem; video alone is, and will remain, insufficient to move someone through all of the levels of Bloom’s Taxonomy.

I will continue to track this developing learning ecosystem, but voice-driven personal assistants are already here. Algorithm-based recommendations are already here. Real-time language translation is already here.  The convergence of the telephone/computer/television continues to move forward.  AI-based bots will only get better in the future. Tapping into streams of up-to-date content will continue to move forward. Blockchain will likely bring us into the age of cloud-based learner profiles. And on and on it goes.

We’ll still need teachers, professors, and trainers. But this vision WILL occur. It IS where things are heading. It’s only a matter of time.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

DC: The next generation learning platform will likely offer us virtual reality-enabled learning experiences such as this “flight simulator for teachers.”

Virtual reality simulates classroom environment for aspiring teachers — from phys.org by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.

 

Also related/see:

 

  • TeachLive.org
    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.

This start-up uses virtual reality to get your kids excited about learning chemistry — from cnbc.com by Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.

 

This start-up uses virtual reality to get your kids excited about learning chemistry — video from CNBC.

From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.”  It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.

 

I’m tracking these developments at:
http://danielschristian.com/thelivingclassroom/

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

*  Also see:


Blockchain, Bitcoin and the Tokenization of Learning — from edsurge.com by Sydney Johnson

Excerpt:

In 2014, King’s College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.

Also see:

Everything Apple Announced — from wired.com by Arielle Pardes

Excerpt:

To much fanfare, Apple CEO Tim Cook unveiled the next crop of iPhones [on 9/12/17] at the new Steve Jobs Theater at Apple’s new headquarters in Cupertino. With the introduction of three new phones, Cook made clear that Apple’s premier product is very much still evolving. The iPhone X, he said, represents “the future of smartphones”: a platform for augmented reality, a tool for powerful computing, a screen for everything. But it’s not all about the iPhones. The event also brought with it a brand new Apple Watch, upgrades to Apple TV, and a host of other features coming to the Apple ecosystem this fall. Missed the big show? Check out our archived live coverage of Apple’s big bash, and read all the highlights below.

 

 

iPhone Event 2017 — from techcrunch.com

From DSC:
A nice listing of articles that cover all of the announcements.

 

 

Apple Bets on Augmented Reality to Sell Its Most Expensive Phone — from bloomberg.com by Alex Webb and Mark Gurman

Excerpt:

Apple Inc. packed its $1,000 iPhone with augmented reality features, betting the nascent technology will persuade consumers to pay premium prices for its products even as cheaper alternatives abound.

The iPhone X, Apple’s most expensive phone ever, was one of three new models Chief Executive Officer Tim Cook showed off during an event at the company’s new $5 billion headquarters in Cupertino, California, on Tuesday. It also rolled out an updated Apple Watch with a cellular connection and an Apple TV set-top box that supports higher-definition video.

Augmented Reality
Apple executives spent much of Tuesday’s event describing how AR is at the core of the new flagship iPhone X. Its new screen, 3-D sensors, and dual cameras are designed for AR video games and other more-practical uses such as measuring digital objects in real world spaces. Months before the launch, Apple released a tool called ARKit that made it easier for developers to add AR capabilities to their apps.

These technologies have never been available in consumer devices and “solidify the platform on which Apple will retain and grow its user base for the next decade,” Gene Munster of Loup Ventures wrote in a note following Apple’s event.

The company is also working on smart glasses that may be AR-enabled, people familiar with the plan told Bloomberg earlier this year.

 

 

Meet the iPhone X, Apple’s New High-End Handset — from wired.com by David Pierce

Excerpt:

First of all, the X looks like no other phone. It doesn’t even look like an iPhone. On the front, it’s screen head to foot, save for a small trapezoidal notch taken out of the top where Apple put selfie cameras and sensors. Otherwise, the bezel around the edge of the phone has been whittled to near-nonexistence and the home button disappeared—all screen and nothing else. The case is made of glass and stainless steel, like the much-loved iPhone 4. The notched screen might take some getting used to, but the phone’s a stunner. It goes on sale starting at $999 on October 27, and it ships November 3.

If you can’t get your hands on an iPhone X in the near future, Apple still has two new models for you. The iPhone 8 and 8 Plus both look like the iPhone 7—with home buttons!—but offer a few big upgrades to match the iPhone X. Both new models support wireless charging, run the latest A11 Bionic processor, and have 2 gigs of RAM. They also have glass backs, which gives them a glossy new look. They don’t have OLED screens, but they’re getting the same TrueTone tech as the X, and they can shoot video in 4K.

 

 

Apple Debuts the Series 3 Apple Watch, Now With Cellular — from wired.com by David Pierce

Ikea and Apple team up on augmented reality home design app — from curbed.com by Asad Syrkett
The ‘Ikea Place’ app lets shoppers virtually test drive furniture

 

 

The New Apple iPhone 8 Is Built for Photography and Augmented Reality — from time.com by Alex Fitzpatrick

Excerpt:

Apple says the new iPhones are also optimized for augmented reality, or AR, which is software that makes it appear that digital images exist in the user’s real-world environment. Apple SVP Phil Schiller demonstrated several apps making use of AR technology, from a baseball app that shows users player statistics when pointing their phone at the field to a stargazing app that displays the location of constellations and other celestial objects in the night sky. Gaming will be a major use case for AR as well.

Apple’s new iPhone 8 and iPhone 8 plus will have wireless charging as well. Users will be able to charge the device by laying it down on a specially-designed power mat on their desk, bedside table or inside their car, similar to how the Apple Watch charges. (Competing Android devices have long had a similar feature.) Apple is using the Qi wireless charging standard for the iPhones.

 

 

Why you shouldn’t unlock your phone with your face — from medium.com by Quincy Larson

Excerpt:

Today Apple announced its new FaceID technology. It’s a new way to unlock your phone through facial recognition. All you have to do is look at your phone and it will recognize you and unlock itself. At time of writing, nobody outside of Apple has tested the security of FaceID. So this article is about the security of facial recognition, and other forms of biometric identification in general.

Historically, biometric identification has been insecure. Cameras can be tricked. Voices can be recorded. Fingerprints can be lifted. And in many countries — including the US — the police can legally force you to use your fingerprint to unlock your phone. So they can most certainly point your phone at your face and unlock it against your will. If you value the security of your data — your email, social media accounts, family photos, the history of every place you’ve ever been with your phone — then I recommend against using biometric identification.

Instead, use a passcode to unlock your phone.

 

 

The iPhone lineup just got really compleX  — from techcrunch.com by Josh Constine

Apple’s ‘Neural Engine’ Infuses the iPhone With AI Smarts — from wired.com by Tom Simonite

Excerpt:

When Apple CEO Tim Cook introduced the iPhone X Tuesday he claimed it would “set the path for technology for the next decade.” Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term dreams of Apple and its competitors.

That feature is the “neural engine,” part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

Chip experts say the neural engine could become central to the future of the iPhone as Apple moves more deeply into areas such as augmented reality and image recognition, which rely on machine-learning algorithms. They predict that Google, Samsung, and other leading mobile-tech companies will soon create neural engines of their own. Earlier this month, China’s Huawei announced a new mobile chip with a dedicated “neural processing unit” to accelerate machine learning.

 
The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services
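The spaced-repetition bullet above can be pictured as a simple Leitner-style scheduler: a correct quiz answer promotes an item to a box that is reviewed less often, a miss resets it to daily review, and reaching the top box counts as mastery. This is a toy illustration of the idea, not any actual platform's algorithm:

```python
# Leitner-style spaced repetition: box 0 is reviewed every session,
# higher boxes progressively less often; reaching MASTERY_BOX ends the stream.
MASTERY_BOX = 3

def review(boxes, item, correct):
    """Update an item's box after a quiz answer; return True once mastered."""
    if correct:
        boxes[item] = min(boxes.get(item, 0) + 1, MASTERY_BOX)
    else:
        boxes[item] = 0  # a miss sends the item back to daily review
    return boxes[item] == MASTERY_BOX

boxes = {}
for answer in (True, True, False, True, True, True):
    mastered = review(boxes, "photosynthesis", answer)
print(mastered)  # True -- three consecutive correct answers after the one miss
```

At the "mastered" point, the platform described above would prompt the learner about keeping or dropping that stream of content.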

Further details here >>

Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.
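The question-answering mechanism described above can be illustrated, very loosely, with a toy retrieval sketch: pick the pre-recorded answer whose indexed question shares the most words with the visitor's question. The real ICT system uses far more sophisticated natural language processing; the questions and clip names below are invented:

```python
# Toy question matcher: choose the indexed question with the largest
# word overlap with the visitor's question, then play its recorded clip.
def best_response(question, indexed):
    words = set(question.lower().split())
    def overlap(entry):
        return len(words & set(entry.lower().split()))
    return max(indexed, key=overlap)

# Hypothetical index of pre-recorded answers (not from the actual project).
indexed = {
    "where were you born": "clip_birthplace.mp4",
    "how did you survive the war": "clip_survival.mp4",
}
clip = indexed[best_response("Where were you born?", indexed)]
print(clip)  # clip_birthplace.mp4
```

With ~25 hours of footage and 1,000+ indexed questions, the production system's job is the same in shape: map a free-form question to the closest recorded response.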

Top trends from InfoComm 2017 — from inavateonthenet.net
AV over IP and huddle rooms are two key takeaways from InfoComm as Paul Milligan wraps up the 2017 show.

 

 

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
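One way to picture the Blockchain-backed learner profile in the bullets above is an append-only chain of records, each carrying the hash of the previous record, so past entries cannot be silently altered. A minimal sketch of that idea (the record fields are invented for illustration, not any real credentialing format):

```python
import hashlib
import json

def add_record(chain, module, grade):
    """Append a completed-module record linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"module": module, "grade": grade, "prev": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every link; any tampered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

profile = []
add_record(profile, "Statistics 101", "A")
add_record(profile, "Data Visualization", "B+")
print(verify(profile))   # True
profile[0]["grade"] = "A+"  # tamper with history...
print(verify(profile))   # False -- the altered record no longer matches its hash
```

That tamper-evidence is what would let employers and institutions trust a web-based learner profile they did not issue themselves.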

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. Smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, while the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
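The built-in RSS aggregator bullet above amounts to plumbing like the following: parsing feed XML so the learner only sees the resulting "stream of content." A minimal sketch using only the Python standard library (a real aggregator such as Feedly would also fetch over HTTP, handle Atom feeds and namespaces, cache entries, etc.; the sample feed is invented):

```python
# Minimal RSS 2.0 parsing with the standard library -- the kind of plumbing
# a built-in feed aggregator would hide from the learner.
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append((item.findtext("title", ""), item.findtext("link", "")))
    return items

sample = """<rss version="2.0"><channel><title>AI in Education</title>
<item><title>New NLP course</title><link>http://example.com/nlp</link></item>
<item><title>AR lab openings</title><link>http://example.com/ar</link></item>
</channel></rss>"""

for title, link in parse_rss(sample):
    print(title, "->", link)
```

Point a loop like this at a handful of curated feeds and you have the skeleton of the "streams of content" constantly flowing by the learner.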

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

From DSC:

Now we’re seeing a major competition among the heavy-hitters to own one’s living room, kitchen, and more: voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Addendum on 5/10/17:

What’s New for Video and Audio (April 2017) | Adobe Creative Cloud

Adobe Creative Cloud Propels Video Forward at NAB 2017 — from news.adobe.com
Latest Release Features New Capabilities in AI, VR, Motion Graphics, Live Animation and Audio

Excerpt:

SAN JOSE, Calif.–(BUSINESS WIRE)–Ahead of the National Association of Broadcasting (NAB) conference, Adobe (Nasdaq:ADBE) today announced a major update for video in Adobe Creative Cloud to help filmmakers and video producers collaborate and streamline video workflows. The Creative Cloud release, available today, delivers new features for graphics and titling, animation, polishing audio and sharing assets; support for the latest video formats, such as HDR, VR and 4K; new integrations with Adobe Stock; and advanced artificial intelligence capabilities powered by Adobe Sensei. Announced at Adobe Summit 2017, Adobe Experience Cloud also allows brands to deliver connected video experiences across any screen at massive scale, while analyzing performance and monetizing ads.

Technology advancements and exploding consumer demand for impactful and personalized content require video producers to create, deliver and monetize their video assets faster than ever before. From the largest studio to next generation YouTubers, a scalable, end-to-end solution is required to create, collaborate and streamline video workflows with robust analytics and advertising tools to optimize content and drive more value.

Adobe Makes Big Leaps in Video, Just in Time for NAB — from blogs.adobe.com

Excerpt:

Next week, thousands of broadcasters, video producers and digital content lovers will gather for the National Association of Broadcasters (NAB) annual conference. Just in time for the event, Adobe is unveiling big updates to our video tools for graphics and titling, animation, and sharing assets; support for the latest formats including HDR, VR and 4K; lots of improvements to video workflows; and more power from Adobe Sensei, our artificial intelligence technology. It’s all part of a major Adobe CC product update available today.

“The newest Creative Cloud video release integrates the advanced science of Adobe Sensei to make common tasks faster and easier. All video producers – whether they’re part of the major media companies or up and coming YouTubers – can now bring their creative vision to life without having to be motion graphics or audio experts,” says Steven Warner, vice president of digital media at Adobe.

These are the latest features in After Effects CC 2017, available now — from provideocoalition.com by Mark Christiansen
Get up to date and up to speed with these additions & changes

[For a detailed overview, check back during NAB when the course After Effects CC 2017: New Features from LinkedIn Learning (otherwise known as Lynda.com) will be updated with everything that’s brand new as of today. This course will feature the examples depicted here in step-by-step detail.]

Adobe updates Premiere Pro CC for April 2017 — from provideocoalition.com by Scott Simmons
Instead of waiting months for the new CC versions, they should be available soon, as in probably today

After Effects NAB 2017 Update — from provideocoalition.com by Chris and Trish Meyer
How to play nice(r) with Premiere Pro editors, as well as other updates

Also see:

Mixed reality is coming in 2017! Here’s what you need to know — from linkedin.com by Keith Curtin

Excerpts:

A hybrid of both AR & VR, Mixed Reality (MR) is far more advanced than Virtual Reality because it combines the use of several types of technologies including sensors, advanced optics and next gen computing power. All of this technology bundled into a single device will provide the user with the capability to overlay augmented holographic digital content into your real-time space, creating scenarios that are unbelievably realistic and mind-blowing.

How does it work?
Mixed Reality works by scanning your physical environment and creating a 3D map of your surroundings so the device will know exactly where and how to place digital content into that space – realistically – while allowing you to interact with it using gestures. Much different than Virtual Reality where the user is immersed in a totally different world, Mixed Reality experiences invite digital content into your real-time surroundings, allowing you to interact with them.

Mixed reality use cases mentioned in the article included:

  • Sports
  • Music
  • TV
  • Art
  • Fashion
  • Business
  • Education
  • Medicine
  • Interior design
  • Retail
  • Construction
  • Real estate

© 2018 | Daniel Christian