Reflections on “Are ‘smart’ classrooms the future?” [Johnston]

Are ‘smart’ classrooms the future? — from campustechnology.com by Julie Johnston
Indiana University explores that question by bringing together tech partners and university leaders to share ideas on how to design classrooms that make better use of faculty and student time.

Excerpt:

To achieve these goals, we are investigating smart solutions that will:

  • Untether instructors from the room’s podium, allowing them control from anywhere in the room;
  • Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
  • Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
  • Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
  • Deliver these features through a simple, user-friendly and reliable room/technology interface.

Activities included collaborative brainstorming focusing on these questions:

  • What else can we do to create the classroom of the future?
  • What current technology exists to solve these problems?
  • What could be developed that doesn’t yet exist?
  • What’s next?

From DSC:
Though many people’s eyes — including faculty members’ — glaze over when we start talking about learning spaces and smart classrooms, it’s still an important topic. Personally, I’d rather be learning in an engaging, exciting learning environment that’s outfitted with a variety of tools (physical as well as digital and virtual) that make sense for that community of learners. Also, faculty members have very limited time to get across campus, into the classroom, and get things set up…the more of that setup that can be automated, the better!

I’ve long posted items re: machine-to-machine communications, voice recognition/voice-enabled interfaces, artificial intelligence, bots, algorithms, a variety of vendors and their products including Amazon’s Alexa / Apple’s Siri / Microsoft’s Cortana / and Google’s Home or Google Assistant, learning spaces, and smart classrooms, as I do think those things are components of our future learning ecosystems.

To higher ed: When the race track is going 180mph, you can’t walk or jog onto the track. [Christian]

From DSC:
When the race track is going 180mph, you can’t walk or jog onto the track.  What do I mean by that? 

Consider this quote from an article that Jeanne Meister wrote for Forbes entitled, “The Future of Work: Three New HR Roles in the Age of Artificial Intelligence:”*

This emphasis on learning new skills in the age of AI is reinforced by the most recent report on the future of work from McKinsey, which suggests that as many as 375 million workers around the world may need to switch occupational categories and learn new skills because approximately 60% of jobs will have at least one-third of their work activities able to be automated.

Go scan the job openings and you will likely see many that have to do with technology, and increasingly, with emerging technologies such as artificial intelligence, deep learning, machine learning, virtual reality, augmented reality, mixed reality, big data, cloud-based services, robotics, automation, bots, algorithm development, blockchain, and more. 

 

From Robert Half’s 2019 Technology Salary Guide 

How many of us have those kinds of skills? Did we get that training at the community colleges, colleges, and universities we attended? Highly unlikely — even if you graduated from one of those institutions only 5-10 years ago. Many of those institutions are moving at the pace of a nice leisurely walk, some are moving at a jog, and even fewer are sprinting. But all of them are now being asked to enter a race track that’s moving at 180mph. Higher ed — and society at large — are not used to moving at this pace.

This is why I think that higher education and its regional accrediting organizations are going to either need to up their game hugely — and go through a paradigm shift in the required thinking/programming/curricula/level of responsiveness — or watch while alternatives to institutions of traditional higher education increasingly attract their learners away from them.

This is also why I think we’ll see an online-based, next generation learning platform emerge. It will be much more nimble — able to offer up-to-the-minute, in-demand skills and competencies.

 

 

The graphic below is from:
Jobs lost, jobs gained: What the future of work will mean for jobs, skills, and wages

* Three New HR Roles To Create Compelling Employee Experiences
These new HR roles include:

  1. IBM: Vice President, Data, AI & Offering Strategy, HR
  2. Kraft Heinz Senior Vice President Global HR, Performance and IT
  3. SunTrust Senior Vice President Employee Wellbeing & Benefits

What do these three roles have in common? All have been created in the last three years and acknowledge the growing importance of a company’s commitment to create a compelling employee experience by using data, research, and predictive analytics to better serve the needs of employees. In each case, the employee assuming the new role also brought a new set of skills and capabilities into HR. And importantly, the new roles created in HR address a common vision: create a compelling employee experience that mirrors a company’s customer experience.

An excerpt from McKinsey Global Institute | Notes from the Frontier | Modeling the Impact of AI on the World Economy 

Workers.
A widening gap may also unfold at the level of individual workers. Demand for jobs could shift away from repetitive tasks toward those that are socially and cognitively driven and others that involve activities that are hard to automate and require more digital skills. Job profiles characterized by repetitive tasks and activities that require low digital skills may experience the largest decline as a share of total employment, from some 40 percent to near 30 percent by 2030. The largest gain in share may be in nonrepetitive activities and those that require high digital skills, rising from some 40 percent to more than 50 percent. These shifts in employment would have an impact on wages. We simulate that around 13 percent of the total wage bill could shift to categories requiring nonrepetitive and high digital skills, where incomes could rise, while workers in the repetitive and low digital skills categories may potentially experience stagnation or even a cut in their wages. The share of the total wage bill of the latter group could decline from 33 to 20 percent. Direct consequences of this widening gap in employment and wages would be an intensifying war for people, particularly those skilled in developing and utilizing AI tools, and structural excess supply for a still relatively high portion of people lacking the digital and cognitive skills necessary to work with machines.

Amazon CEO Jeff Bezos’ personal AV tech controls the future of AV and doesn’t even realize it — from ravepubs.com by Gary Kayye

Excerpt:

But it dawned on me today that the future of our very own AV market may not be in the hands of any new product, new technology or even an AV company at all. In fact, there’s likely one AV guy (or girl) out there, today, that controls the future of AV for all of us, but doesn’t even know it, yet.

I’m talking about Jeff Bezos’ personal AV technician. Yes, that Jeff Bezos — the one who started Amazon.

Follow my logic.

The Amazon Alexa is AMAZING. Probably the most amazing thing since Apple’s iPhone. And, maybe even more so. The iPhone was revolutionary as it was a handheld phone, an email client, notes taker, voice recorder, calendar, to-do list, wrist watch and flashlight — all in one. It replaced like 10 things I was using every single day. And I didn’t even mention the camera!

Alexa seamlessly and simply connects to nearly everything you want to connect it to. And, it’s updated weekly — yes, weekly — with behind-the-scenes Friday-afternoon firmware and software upgrades. So, just when you think Alexa doesn’t do something you want it to do, she can — you just have to wait until an upcoming Friday — as someone will add that functionality. And, at any time, you can add Alexa SKILLS to yours and have third-party control of your Lutron lighting system, your shades and blinds, your HVAC, your TV, your DVR, your CableTV box, your SONOS, your home security system, your cameras and even your washer and dryer (yes, I have that functionality — even though I can’t find a use for it yet). It can even call people, play any radio station in the world, play movie previews, play Jeopardy!, play Sirius/XM radio — I mean, it can do nearly anything. It’s squarely aimed at the average consumer or home application — all to simplify your life.

But it could EASILY be upgraded to control everything. I mean everything. Projectors, digital signage networks, AV-over-IP systems, scalers, switchers, audio systems, commercial-grade lighting systems, rooms, buildings, etc. — you get the idea.

From DSC:
By the way, I wouldn’t be so sure that Bezos doesn’t realize this; there’s very little that gets by that guy.

Also see:

Crestron to Ship AirBoard Whiteboarding Solution — from ravepubs.com by Sara Abrons

Excerpt:

Crestron will soon be shipping the Crestron AirBoard PoE electronic whiteboard technology. Crestron AirBoard enables viewing of electronic whiteboard content on any display device, thereby solving the problem of meeting participants — remote participants, especially — not being able to see the whiteboard unless they’re seated with a direct line of sight.

With Crestron AirBoard, annotations can be saved and then posted, emailed, or texted to either a central web page (education applications) or to invited participants (corporate applications). Meeting participants simply choose “whiteboard” as a source on the in-room Crestron TSW or Crestron Mercury touch screen to start the session. When “end meeting” is selected, the user is prompted to save and send the file.

Home Voice Control — See Josh Micro in Action!

From DSC:
Along these lines, will faculty use their voices to control their room setups (e.g., the projection, shades, room lighting, what’s shown on the LMS, etc.)?

Or will machine-to-machine communications, the Internet of Things, sensors, mobile/cloud-based apps, and the like take care of those items automatically when a faculty member walks into the room?
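The automated alternative can be sketched in a few lines: a presence event (badge tap or biometric login) looks up what the instructor is teaching right now and returns the device commands needed to set the room up. Everything here (device names, presets, the schedule lookup) is hypothetical; a real deployment would sit on top of an IoT/control bus and the campus scheduling system.

```python
# Sketch: automatic room setup triggered by a presence event.
# Device names and presets are invented for illustration.

ROOM_PRESETS = {
    "lecture": {"projector": "on", "shades": "down", "lights": 40, "lms_view": "today"},
    "discussion": {"projector": "off", "shades": "up", "lights": 80, "lms_view": "agenda"},
}

def on_instructor_arrival(instructor_id: str, schedule: dict) -> dict:
    """Look up this instructor's current class mode and return the
    device commands that configure the room for it."""
    mode = schedule.get(instructor_id, "lecture")  # default to lecture setup
    return ROOM_PRESETS[mode]

# Simulated badge/biometric event:
commands = on_instructor_arrival("prof_smith", {"prof_smith": "discussion"})
```

The point of the sketch is that once presence, schedule, and devices share a common bus, the "walk in and it's ready" experience is a lookup, not a new technology.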

From DSC:
Here’s a quote that has been excerpted from the announcement below…and it’s the type of service that will be offered in our future learning ecosystems — our next generation learning platforms:

 

Career Insight™ enables prospective students to identify programs of study which can help them land the careers they want: Career Insight™ describes labor market opportunities associated with programs of study to prospective students. The recommendation engine also matches prospective students to programs based on specific career interests.

 

But in addition to our future learning platforms pointing new/prospective students to physical campuses, the recommendation engines will also provide immediate access to digital playlists for those prospective students/learners to pursue from their living rooms (or as they are out and about, via mobile access).

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

Artificial intelligence working with enormous databases to build/update recommendation engines…yup, I could see that. Lifelong learning. Helping people know what to reinvent themselves into.

Career Insight™ Lets Prospective Students Connect Academic Program Choices to Career Goals — from burning-glass.com; also from Hadley Dreibelbis from Finn Partners
New Burning Glass Technologies Product Brings Job Data into Enrollment Decisions

BOSTON—Burning Glass Technologies announces the launch of Career Insight™, the first tool to show prospective students exactly how course enrollment will advance their careers.

Embedded in institutional sites and powered by Burning Glass’ unparalleled job market data, Career Insight’s personalized recommendation engine matches prospective students with programs based on their interests and goals. Career Insight will enable students to make smarter decisions, as well as improve conversion and retention rates for postsecondary institutions.

“A recent Gallup survey found that 58% of students say career outcomes are the most important reason to continue their education,” Burning Glass CEO Matthew Sigelman said. “That’s particularly true for the working learners who are now the norm on college campuses. Career Insight™ is a major step in making sure that colleges and universities can speak their language from the very first.”

Beginning an educational program with a firm, realistic career goal can help students persist in their studies. Currently only 29% of students in two-year colleges and 59% of those in four-year institutions complete their degrees within six years.

Career Insight™ enables prospective students to identify programs of study which can help them land the careers they want:

  • Career Insight™ describes labor market opportunities associated with programs of study to prospective students. The recommendation engine also matches prospective students to programs based on specific career interests.
  • The application provides insights to enrollment, advising, and marketing teams into what motivates prospective students, analysis that will guide the institution in improving program offerings and boosting conversion.
  • Enrollment advisors can also walk students through different career and program scenarios in real time.

Career Insight™ is driven by the Burning Glass database of a billion job postings and career histories, collected from more than 40,000 online sources daily. The database, powered by a proprietary analytic taxonomy, provides insight into what employers need much faster and in more detail than any other sources.

Career Insight™ is powered by the same rich dataset Burning Glass delivers to hundreds of leading corporate and education customers – from Microsoft and Accenture to Harvard University and Coursera.

More information is available at http://burning-glass.com/career-insight.
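The program-to-interest matching described above can be illustrated with a toy sketch: score each program of study against a prospective student's stated career interests and return the best matches. The programs, tags, and Jaccard scoring below are illustrative assumptions, not Burning Glass's actual method.

```python
# Toy content-based matcher: programs tagged with career keywords,
# scored against a student's interests by set overlap (Jaccard).

def jaccard(a: set, b: set) -> float:
    """Overlap of two tag sets, 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

PROGRAMS = {
    "Nursing (BSN)": {"healthcare", "patient care", "clinical"},
    "Data Analytics (BS)": {"data", "statistics", "business"},
    "Cybersecurity (BS)": {"security", "networking", "data"},
}

def recommend(interests: set, top_n: int = 2) -> list:
    """Programs ranked by similarity to the student's interests."""
    ranked = sorted(PROGRAMS, key=lambda p: jaccard(interests, PROGRAMS[p]),
                    reverse=True)
    return ranked[:top_n]

picks = recommend({"data", "security"})
```

A production engine would add labor-market outcomes per program on top of this, but the core matching step is this simple.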

From Elliott Masie’s Learning TRENDS – January 3, 2018.
#986 – Updates on Learning, Business & Technology Since 1997.

2. Curation in Action – Meural Picture Frame of Endless Art. 
What a cool Curation Holiday Gift that arrived.  The Meural Picture Frame is an amazing digital display, 30 inches by 20 inches, that will display any of over 10,000 classical or modern paintings or photos from the world’s best museums.

A few minutes of setup to the WiFi and my Meural became a highly personalized museum in the living room.  I selected collections of an era, a specific artist, a theme or used someone else’s art “playlist”.

It is curation at its best!  A personalized and individualized selection from an almost limitless collection.  Check it out at http://www.meural.com

 



Also see:



 

Discover new art every day with Meural

 

 

Discover new artwork with Meural -- you can browse playlists of artwork and/or add your own

From DSC:
As I understand it, you can upload your own artwork and photography into this platform. As such, couldn’t we put such devices/frames in schools?!

Wouldn’t it be great to have each classroom’s artwork available as a playlist?! And not just the current pieces, but archived pieces as well!

Wouldn’t it be cool to be able to walk down the hall and swish through a variety of pieces?

Wouldn’t such a dynamic, inspirational platform be a powerful source of creativity in our hallways?  The frames could display the greatest works of art from around the world!

Wouldn’t such a platform give young/budding artists and photographers incentive to do their best work, knowing many others can see their creative works as a part of a playlist?

Wouldn’t it be cool to tap into such a service and treasure chest of artwork and photography via your Smart/Connected TV?

Here’s to creativity!

DC: The next generation learning platform will likely offer us virtual reality-enabled learning experiences such as this “flight simulator for teachers.”

Virtual reality simulates classroom environment for aspiring teachers — from phys.org by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.

 

Also related/see:

 

  • TeachLive.org
    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.

This start-up uses virtual reality to get your kids excited about learning chemistry — from cnbc.com by Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.

 

This start-up uses virtual reality to get your kids excited about learning chemistry from CNBC.

From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.”  It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.

 

I’m tracking these developments at:
http://danielschristian.com/thelivingclassroom/

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 


*  Also see:


Blockchain, Bitcoin and the Tokenization of Learning — from edsurge.com by Sydney Johnson

Excerpt:

In 2014, Kings College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.

Creative voice tech — from jwtintelligence.com by Ella Britton
New programs from Google and the BBC use voice to steer storytelling with digital assistants.

Excerpt:

BBC Radio’s newest program, The Inspection Chamber, uses smart home devices to allow listeners to interact with and control the plot. Amid a rise in choose-your-own-adventure style programming, The Inspection Chamber opens up creative new possibilities for brands hoping to make use of voice assistants.

The Inspection Chamber tells the story of an alien stranded on earth, who is being interrogated by scientists and an AI robot called Dave. In this interactive drama, Amazon Echo and Google Home users play the part of the alien, answering questions and interacting with other characters to determine the story’s course. The experience takes around 20 minutes, with questions like “Cruel or Kind?” and “Do you like puzzles?” that help the scientists categorize a user.

“Voice is going to be a key way we interact with media, search for content, and find what we want,” said BBC director general Tony Hall. As described in the Innovation Group’s Speak Easy report, the opportunities for brands to connect with consumers via smart speakers are substantial, particularly when it comes to education and entertainment. Brands can also harness these opportunities by creating entertaining and engaging content that consumers can interact with, creating what feels like a two-way dialogue.
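The choose-your-own-adventure mechanic described above is, at bottom, a state machine keyed on the listener's answers. A minimal sketch, with invented scenes and prompts (a real skill would hang this off the Alexa/Google Assistant intent model rather than raw strings):

```python
# Voice-steered branching narrative: the story is a graph of scenes,
# and the listener's answer selects the next scene.

STORY = {
    "intro": {"prompt": "Cruel or kind?",
              "next": {"cruel": "interrogation", "kind": "friendly_chat"}},
    "interrogation": {"prompt": "Do you like puzzles?",
                      "next": {"yes": "puzzle_room", "no": "holding_cell"}},
    "friendly_chat": {"prompt": "Tell us about your planet.", "next": {}},
    "puzzle_room": {"prompt": "Solve this riddle...", "next": {}},
    "holding_cell": {"prompt": "Wait here, please.", "next": {}},
}

def advance(scene: str, answer: str) -> str:
    """Move to the next scene; stay put if the answer isn't recognized."""
    return STORY[scene]["next"].get(answer.lower().strip(), scene)

scene = advance("intro", "Cruel")
scene = advance(scene, "yes")
```

The same structure would drive an interactive lesson as readily as a radio drama, which is what makes it interesting for learning platforms.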

 

From DSC:
More and more, our voices will drive the way we interact with computing devices/applications. This item was an especially interesting item to me, as it involves the use of our voice — at home — to steer the storytelling that’s taking place. Talk about a new form of interactivity!

Google’s jobs AI service hits private beta, now works in 100 languages — from venturebeat.com by Blair Hanley Frank

Excerpt:

Google today announced the beta release of its Cloud Job Discovery service, which uses artificial intelligence to help customers connect job vacancies with the people who can fill them.

Formerly known as the Cloud Jobs API, the system is designed to take information about open positions and help job seekers take better advantage of it. For example, Cloud Job Discovery can take a plain language query and help translate that to the specific jargon employers use to describe their positions, something that can be hard for potential employees to navigate.

As part of this beta release, Google announced that Cloud Job Discovery is now designed to work with applicant-tracking systems and staffing agencies, in addition to job boards and career site providers like CareerBuilder.

It also now works in 100 languages. While the service is still primarily aimed at customers in the U.S., some of Google’s existing clients need support for multiple languages. In the future, the company plans to expand the Cloud Job Discovery service internationally, so investing in language support now makes sense going forward.

From DSC:
Now tie this type of job discovery feature into a next generation learning platform, helping people identify which skills they need to get jobs in their local area(s). Provide a list of courses/modules/RSS feeds to get them started. Allow folks to subscribe to constant streams of content — and to unsubscribe from them at any time as well.
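One way to sketch that tie-in: count which skills local postings demand, subtract what the learner already has, and map the gap to learning modules. The postings, skills, and module catalog here are all made up for illustration.

```python
# Skill-gap analysis over a small set of fictional job postings.
from collections import Counter

postings = [
    {"title": "ML Engineer", "skills": {"python", "machine learning", "cloud"}},
    {"title": "Data Analyst", "skills": {"sql", "python", "statistics"}},
    {"title": "Cloud Admin", "skills": {"cloud", "networking"}},
]
modules = {"python": "Intro to Python", "cloud": "Cloud Fundamentals",
           "sql": "SQL Basics", "machine learning": "ML 101",
           "statistics": "Applied Statistics", "networking": "Networks 101"}

def skill_gap(learner_skills: set, postings: list) -> list:
    """Skills in demand that the learner lacks, most demanded first."""
    demand = Counter(s for p in postings for s in p["skills"])
    return [s for s, _ in demand.most_common() if s not in learner_skills]

def suggested_modules(learner_skills: set) -> list:
    """Map the skill gap to modules the learner could subscribe to."""
    return [modules[s] for s in skill_gap(learner_skills, postings)]
```

A real system would refresh the postings continuously, which is exactly where the subscribe/unsubscribe stream model fits.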

 

 

We MUST move to lifelong, constant learning via means that are highly accessible, available 24×7, and extremely cost effective. Blockchain-based technologies will feed web-based learner profiles, and each of us will determine who can write to our learner profile and who can review it.
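As a rough illustration of that idea (not a real distributed ledger), a learner profile can be modeled as an append-only, hash-chained record list whose owner decides who may write to it:

```python
# Hash-chained learner profile: each entry's hash covers the previous
# entry's hash, so any tampering with history is detectable.
import hashlib
import json

class LearnerProfile:
    def __init__(self, owner: str):
        self.owner = owner
        self.writers = {owner}   # the owner decides who may write
        self.entries = []        # append-only record list

    def grant_write(self, who: str) -> None:
        self.writers.add(who)

    def append(self, author: str, record: dict) -> None:
        if author not in self.writers:
            raise PermissionError(f"{author} may not write to this profile")
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps({"author": author, "record": record, "prev": prev},
                             sort_keys=True)
        self.entries.append({"author": author, "record": record, "prev": prev,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered."""
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps({"author": e["author"], "record": e["record"],
                                  "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

A production system would use verifiable credentials on an actual ledger, but the two properties the vision needs (owner-controlled write access and tamper-evident history) are both visible in this sketch.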

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Addendum on 9/29/17:



  • Facebook partners with ZipRecruiter and more aggregators as it ramps up in jobs — from techcrunch.com by Ingrid Lunden
    Excerpt:
    Facebook has made no secret of its wish to do more in the online recruitment market — encroaching on territory today dominated by LinkedIn, the leader in tapping social networking graphs to boost job-hunting. Today, Facebook is taking the next step in that process.
    Facebook will now integrate with ZipRecruiter — an aggregator that allows those looking to fill jobs to post ads to many traditional job boards, as well as sites like LinkedIn, Google and Twitter — to boost the number of job ads available on its platform targeting its 2 billion monthly active users.
    The move follows Facebook launching its first job ads earlier this year, and later appearing to be interested in augmenting that with more career-focused features, such as a platform to connect people looking for mentors with those looking to offer mentorship.

Six reasons why disruption is coming to learning departments — from feathercap.net, with thanks to Mr. Tim Seager for this resource

Excerpts:

  1. Training materials and interactions will not just be pre-built courses but any structured or unstructured content available to the organization.
  2. Curation of all learning, employee or any useful organizational content will become a whole lot easier.
  3. The learning department won’t have to build it all themselves.
  4. Learning bots and voice enabled learning.
  5. Current workplace learning systems and LMSs will go through a big transition or they will lose relevancy.
  6. Learning departments will go beyond onboarding, compliance training and leadership training and move to training everyone in the company on all job skills.

 

A successful example of this is Amazon.com. As shoppers on their site, we have access to millions of book and product SKUs. Amazon uses a combination of all three techniques to position the right book or product based on our behavior, peer experiences, as well as a semantic understanding of the product page we’re viewing. There’s no reason we can’t have the same experience on workplace learning systems, where all viable learning content/company content could be organized and disseminated to each learner at the right time and circumstance.
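The peer-experience piece of that Amazon-style approach can be sketched with simple co-occurrence counting: recommend the items that other learners with overlapping histories went on to complete. The histories below are invented for illustration.

```python
# "Learners who took X also took Y": co-occurrence recommendation.
from collections import Counter

histories = [
    {"Excel Basics", "Pivot Tables", "Dashboards"},
    {"Excel Basics", "Pivot Tables"},
    {"Python Intro", "Dashboards"},
]

def co_occurrence_recommend(completed: set, histories: list, top_n: int = 2) -> list:
    """Items most often completed by peers who share history with this learner."""
    counts = Counter()
    for h in histories:
        if completed & h:                # this peer overlaps the learner
            counts.update(h - completed) # count the peer's other items
    return [item for item, _ in counts.most_common(top_n)]

recs = co_occurrence_recommend({"Excel Basics"}, histories)
```

Behavioral and semantic signals would be layered on top in a real system; this shows only the peer-history technique.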

From DSC:
Several items in Feathercap’s solid posting remind me of the vision of a next generation learning platform:

  • Contributing to — and tapping into — streams of content
  • Lifelong learning and reinventing oneself
  • Artificial intelligence, including Natural Language Processing (NLP) and the use of voice to drive systems/functionality
  • Learning agents/bots
  • 24×7 access
  • Structured and unstructured learning
  • Socially-based means of learning
  • …and more

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

From DSC:
The vast majority of the lessons being offered within K-12 and the lectures (if we’re going to continue to offer them) within higher education should be recorded.

Why do I say this?

Well…first of all…let me speak as a parent of 3 kids, one of whom requires a team of specialists to help her learn. When she misses school because she’s out sick, it’s a major effort to get her caught up. As a parent, it would be soooooo great to log into a system and obtain an updated digital playlist of the lessons that she’s missed. She and I could click on the links to the recordings in order to see how the teacher wants our daughter to learn concepts A, B, and C. We could pause, rewind, fast forward, and replay the recording over and over again until our daughter gets it (and I as a parent get it too!).

I realize that I’m not saying anything especially new here, but we need to do a far better job of providing our overworked teachers with more time, funding, and/or other types of resources — such as instructional designers, videographers and/or One-Button Studios, other multimedia specialists, etc. — to develop these recordings. Perhaps each teacher — or team — could be paid to record and contribute their lessons to a pool of content that could be used over and over again. Also, the use of RSS feeds and content aggregators such as Feedly could come in handy here as well. Parents/learners could subscribe to streams of content.

Such a system would be a huge help to the teachers as well. They could refer students to these digital playlists as appropriate — having updated the missing students’ playlists based on what the teacher has covered that day (and who was out sick, at another school-sponsored event, etc.). They wouldn’t have to re-explain something as many times if they had recordings to reference.
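The playlist mechanics are simple to sketch: given the dates a student missed and a feed of recorded lessons, the makeup playlist is just a filter. The feed format here is hypothetical (in practice it could be an RSS/Atom feed consumed by an aggregator such as Feedly).

```python
# Build a makeup playlist from a class's feed of recorded lessons.
from datetime import date

lesson_feed = [
    {"date": date(2017, 10, 2), "title": "Fractions: concept A",
     "url": "https://example.edu/rec/101"},
    {"date": date(2017, 10, 3), "title": "Fractions: concept B",
     "url": "https://example.edu/rec/102"},
    {"date": date(2017, 10, 4), "title": "Fractions: concept C",
     "url": "https://example.edu/rec/103"},
]

def makeup_playlist(absences: set, feed: list) -> list:
    """Return the recordings for every date the student was absent."""
    return [item for item in feed if item["date"] in absences]

playlist = makeup_playlist({date(2017, 10, 3)}, lesson_feed)
```

The hard part isn't this filter; it's producing the recordings in the first place, which is exactly why teachers need the time and production support described above.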

—–

Also, within the realm of higher education, having recordings/transcripts of lectures and presentations would be especially helpful to learners who take more time to process what’s being said. And while that might include ESL learners here in the U.S., such recordings could benefit the majority of learners. From my days in college, I can remember trying to write down much of what the professor was saying, but not having a chance to really process much of the information until later, when I looked over my notes. Finally, learners who wanted to review some concepts before a mid-term or final would greatly appreciate these recordings.

Again, I realize this isn’t anything new. But it makes me scratch my head and wonder why we haven’t made more progress in this area, especially at the K-12 level…? It’s 2017. We can do better.

Some relevant tools here include:

Also see:

Everything Apple Announced — from wired.com by Arielle Pardes

Excerpt:

To much fanfare, Apple CEO Tim Cook unveiled the next crop of iPhones [on 9/12/17] at the new Steve Jobs Theater at Apple’s new headquarters in Cupertino. With the introduction of three new phones, Cook made clear that Apple’s premier product is very much still evolving. The iPhone X, he said, represents “the future of smartphones”: a platform for augmented reality, a tool for powerful computing, a screen for everything. But it’s not all about the iPhones. The event also brought with it a brand-new Apple Watch, upgrades to Apple TV, and a host of other features coming to the Apple ecosystem this fall. Missed the big show? Check out our archived live coverage of Apple’s big bash, and read all the highlights below.

 

 

iPhone Event 2017 — from techcrunch.com

From DSC:
A nice listing of articles that cover all of the announcements.

 

 

Apple Bets on Augmented Reality to Sell Its Most Expensive Phone — from bloomberg.com by Alex Webb and Mark Gurman

Excerpt:

Apple Inc. packed its $1,000 iPhone with augmented reality features, betting the nascent technology will persuade consumers to pay premium prices for its products even as cheaper alternatives abound.

The iPhone X, Apple’s most expensive phone ever, was one of three new models Chief Executive Officer Tim Cook showed off during an event at the company’s new $5 billion headquarters in Cupertino, California, on Tuesday. It also rolled out an updated Apple Watch with a cellular connection and an Apple TV set-top box that supports higher-definition video.

Augmented Reality
Apple executives spent much of Tuesday’s event describing how AR is at the core of the new flagship iPhone X. Its new screen, 3-D sensors, and dual cameras are designed for AR video games and other more-practical uses such as measuring digital objects in real world spaces. Months before the launch, Apple released a tool called ARKit that made it easier for developers to add AR capabilities to their apps.

These technologies have never been available in consumer devices and “solidify the platform on which Apple will retain and grow its user base for the next decade,” Gene Munster of Loup Ventures wrote in a note following Apple’s event.

The company is also working on smart glasses that may be AR-enabled, people familiar with the plan told Bloomberg earlier this year.

 

 

Meet the iPhone X, Apple’s New High-End Handset — from wired.com by David Pierce

Excerpt:

First of all, the X looks like no other phone. It doesn’t even look like an iPhone. On the front, it’s screen head to foot, save for a small trapezoidal notch taken out of the top where Apple put selfie cameras and sensors. Otherwise, the bezel around the edge of the phone has been whittled to near-nonexistence and the home button disappeared—all screen and nothing else. The case is made of glass and stainless steel, like the much-loved iPhone 4. The notched screen might take some getting used to, but the phone’s a stunner. It goes on sale starting at $999 on October 27, and it ships November 3.

If you can’t get your hands on an iPhone X in the near future, Apple still has two new models for you. The iPhone 8 and 8 Plus both look like the iPhone 7—with home buttons!—but offer a few big upgrades to match the iPhone X. Both new models support wireless charging, run the latest A11 Bionic processor, and have 2 gigs of RAM. They also have glass backs, which give them a glossy new look. They don’t have OLED screens, but they’re getting the same TrueTone tech as the X, and they can shoot video in 4K.

 

 

Apple Debuts the Series 3 Apple Watch, Now With Cellular — from wired.com by David Pierce

 

 

 

Ikea and Apple team up on augmented reality home design app — from curbed.com by Asad Syrkett
The ‘Ikea Place’ app lets shoppers virtually test drive furniture

 

 

The New Apple iPhone 8 Is Built for Photography and Augmented Reality — from time.com by Alex Fitzpatrick

Excerpt:

Apple says the new iPhones are also optimized for augmented reality, or AR, which is software that makes it appear that digital images exist in the user’s real-world environment. Apple SVP Phil Schiller demonstrated several apps making use of AR technology, from a baseball app that shows users player statistics when pointing their phone at the field to a stargazing app that displays the location of constellations and other celestial objects in the night sky. Gaming will be a major use case for AR as well.

Apple’s new iPhone 8 and iPhone 8 plus will have wireless charging as well. Users will be able to charge the device by laying it down on a specially-designed power mat on their desk, bedside table or inside their car, similar to how the Apple Watch charges. (Competing Android devices have long had a similar feature.) Apple is using the Qi wireless charging standard for the iPhones.

 

 

Why you shouldn’t unlock your phone with your face — from medium.com by Quincy Larson

Excerpt:

Today Apple announced its new FaceID technology. It’s a new way to unlock your phone through facial recognition. All you have to do is look at your phone and it will recognize you and unlock itself. At time of writing, nobody outside of Apple has tested the security of FaceID. So this article is about the security of facial recognition, and other forms of biometric identification in general.

Historically, biometric identification has been insecure. Cameras can be tricked. Voices can be recorded. Fingerprints can be lifted. And in many countries, including the US, the police can legally force you to use your fingerprint to unlock your phone. So they can most certainly point your phone at your face and unlock it against your will. If you value the security of your data (your email, social media accounts, family photos, the history of every place you’ve ever been with your phone), then I recommend against using biometric identification.

Instead, use a passcode to unlock your phone.

 

 

The iPhone lineup just got really compleX — from techcrunch.com by Josh Constine

Apple’s ‘Neural Engine’ Infuses the iPhone With AI Smarts — from wired.com by Tom Simonite

Excerpt:

When Apple CEO Tim Cook introduced the iPhone X Tuesday he claimed it would “set the path for technology for the next decade.” Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term dreams of Apple and its competitors.

That feature is the “neural engine,” part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

Chip experts say the neural engine could become central to the future of the iPhone as Apple moves more deeply into areas such as augmented reality and image recognition, which rely on machine-learning algorithms. They predict that Google, Samsung, and other leading mobile-tech companies will soon create neural engines of their own. Earlier this month, China’s Huawei announced a new mobile chip with a dedicated “neural processing unit” to accelerate machine learning.

 

 

 

 

The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content or not.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services

Further details here >>
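The spaced-repetition idea in the list above is simple enough to sketch in code. The snippet below is a minimal, hypothetical illustration (the class and its thresholds are my own inventions, not any platform’s actual design): a Leitner-style scheduler that lengthens the review interval after each passed quiz and, after several consecutive passes, flags the topic as mastered — at which point the platform could ask whether to keep that content stream flowing.

```python
from dataclasses import dataclass

@dataclass
class TopicStream:
    """Tracks a learner's progress on one stream of content (hypothetical model)."""
    topic: str
    interval_days: int = 1      # days until the next quiz
    streak: int = 0             # consecutive passed quizzes
    mastered: bool = False

    MASTERY_STREAK = 3          # passes in a row needed to declare mastery

    def record_quiz(self, passed: bool) -> None:
        """Leitner-style update: double the interval on a pass, reset on a fail."""
        if passed:
            self.streak += 1
            self.interval_days *= 2
            if self.streak >= self.MASTERY_STREAK:
                self.mastered = True   # platform would now offer to drop the feed
        else:
            self.streak = 0
            self.interval_days = 1

stream = TopicStream("machine learning basics")
for result in [True, True, False, True, True, True]:
    stream.record_quiz(result)

print(stream.mastered)        # True after three consecutive passes
print(stream.interval_days)   # 8
```

The doubling interval is just one common heuristic; a real system would likely tune intervals per learner and per concept.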

 

 

 



Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

 

 

 

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book. Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1,000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.
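The natural-language matching described above — mapping a visitor’s spoken question to the closest pre-recorded answer — boils down to retrieval. The sketch below is a deliberately crude, hypothetical illustration (bag-of-words overlap; the function names and clip index are invented), far simpler than ICT’s actual system, just to show the core idea.

```python
import re

def tokenize(text: str) -> set:
    """Lowercase bag of words; real systems use far richer NLP."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_response(question: str, recorded: dict) -> str:
    """Return the indexed question whose words overlap most with the
    visitor's question (Jaccard similarity); its clip would then play."""
    q = tokenize(question)
    def score(indexed_question: str) -> float:
        r = tokenize(indexed_question)
        return len(q & r) / len(q | r)
    return max(recorded, key=score)

# Hypothetical index: indexed question -> pre-recorded clip ID
clips = {
    "where were you born": "clip_017",
    "what happened to your family": "clip_042",
    "how did you survive the war": "clip_105",
}

print(clips[best_response("Where were you born?", clips)])   # clip_017
```

Swapping the Jaccard score for sentence embeddings would give the same interface with much better matching — the retrieval pattern stays identical.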

 

 

 

 

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
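The “tracks learning… via Blockchain-based technologies” bullet above can be illustrated with the core idea behind such ledgers: each completed module is appended as a record whose hash covers the previous record, so any tampering with history becomes detectable. This is a minimal, hypothetical sketch (no distribution, consensus, or real blockchain network — just the hash-chaining concept):

```python
import hashlib
import json

def add_record(chain: list, module: str, grade: str) -> None:
    """Append a completed-module record whose hash chains to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"module": module, "grade": grade, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

profile = []
add_record(profile, "Intro to Statistics", "A")
add_record(profile, "Data Visualization", "B+")
print(verify(profile))            # True
profile[0]["grade"] = "A+"        # tampering...
print(verify(profile))            # ...is detected: False
```

A real learner-profile ledger would add signatures from the issuing institution and replicate the chain across parties, but the tamper-evidence mechanism is the same.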

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while the smaller displays (smartphones, laptops, and/or desktop workstations) will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
  • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology, and also so that anyone can stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.
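The voice-driven features listed above ultimately reduce to mapping an utterance to an intent and a handler — the same pattern behind Alexa “skills.” Below is a hypothetical, keyword-based sketch of that routing layer (the function, keywords, and action names are illustrative inventions); a production platform would use real natural language processing rather than keyword matching.

```python
def route(utterance: str) -> str:
    """Map a spoken command to a platform action via keyword rules (hypothetical)."""
    text = utterance.lower()
    intents = [
        ({"launch", "open", "profile"}, "launch_profile"),
        ({"translate"}, "translate_speech"),
        ({"resume", "continue", "left off"}, "resume_session"),
        ({"quiz", "test me"}, "start_quiz"),
    ]
    for keywords, action in intents:
        if any(k in text for k in keywords):
            return action
    return "ask_chatbot"   # unrecognized commands fall through to the help chatbot

print(route("Open my profile, please"))      # launch_profile
print(route("Can you quiz me on chapter 3")) # start_quiz
print(route("What does this error mean?"))   # ask_chatbot
```

The fallback to a chatbot mirrors the bullets above: anything the command grammar can’t handle becomes a conversational question instead of a dead end.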

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LCD, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

 

 

 

 

 

 



From DSC:

Now we’re seeing a major competition among the heavy hitters to own one’s living room, kitchen, and more. Voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

 

 

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 



 

Addendum on 5/10/17:

© 2025 | Daniel Christian