Google’s jobs AI service hits private beta, now works in 100 languages — from venturebeat.com by Blair Hanley Frank

Excerpt:

Google today announced the beta release of its Cloud Job Discovery service, which uses artificial intelligence to help customers connect job vacancies with the people who can fill them.

Formerly known as the Cloud Jobs API, the system is designed to take information about open positions and help job seekers take better advantage of it. For example, Cloud Job Discovery can take a plain language query and help translate that to the specific jargon employers use to describe their positions, something that can be hard for potential employees to navigate.
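The plain-language-query-to-jargon translation Google describes is essentially query expansion. The toy sketch below illustrates the general idea only; it is not the Cloud Job Discovery API, and every mapping in it is hypothetical:

```python
# Illustrative sketch only: NOT the Cloud Job Discovery API.
# A plain-language query is expanded into the jargon employers actually
# use in job postings. All mappings here are made up; a real service
# would learn them from data rather than hard-coding them.
JARGON_MAP = {
    "help desk": ["it support specialist", "service desk analyst"],
    "sales rep": ["account executive", "business development representative"],
    "office help": ["administrative assistant", "office coordinator"],
}

def expand_query(query: str) -> list[str]:
    """Return the normalized query plus any employer-jargon equivalents."""
    q = query.lower().strip()
    return [q] + JARGON_MAP.get(q, [])

print(expand_query("Help Desk"))   # the query itself plus two jargon titles
```

A query with no known mapping simply passes through unchanged.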

As part of this beta release, Google announced that Cloud Job Discovery is now designed to work with applicant-tracking systems and staffing agencies, in addition to job boards and career site providers like CareerBuilder.

It also now works in 100 languages. While the service is still primarily aimed at customers in the U.S., some of Google’s existing clients need support for multiple languages. In the future, the company plans to expand the Cloud Job Discovery service internationally, so investing in language support now makes sense.

 



From DSC:
Now tie this type of job discovery feature into a next generation learning platform, helping people identify which skills they need to get jobs in their local area(s). Provide a list of courses/modules/RSS feeds to get them started. Allow folks to subscribe to constant streams of content, and to unsubscribe from them at any time.

 

 

We MUST move to lifelong, constant learning via means that are highly accessible, available 24×7, and extremely cost effective. Blockchain-based technologies will feed web-based learner profiles; each of us will determine who can write to our learning profile and who can review it.
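One minimal way to picture such a learner profile is as an append-only, hash-chained log whose owner controls who may write to it. The sketch below is illustrative only; it is not any particular blockchain implementation, and every name in it is hypothetical:

```python
import hashlib
import json

class LearnerProfile:
    """Append-only learning record; the owner decides who may write to it."""

    def __init__(self, owner: str):
        self.owner = owner
        self.writers = {owner}      # owner controls write access
        self.chain = []             # hash-linked list of entries

    def grant_write(self, who: str) -> None:
        self.writers.add(who)

    def add_record(self, author: str, record: dict) -> None:
        if author not in self.writers:
            raise PermissionError(f"{author} may not write to this profile")
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = json.dumps({"author": author, "record": record, "prev": prev},
                          sort_keys=True)
        self.chain.append({"author": author, "record": record, "prev": prev,
                           "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for e in self.chain:
            body = json.dumps({"author": e["author"], "record": e["record"],
                               "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True
```

Anyone the owner grants read access to could rerun verify() to confirm the record has not been altered after the fact.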

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Addendum on 9/29/17:



  • Facebook partners with ZipRecruiter and more aggregators as it ramps up in jobs — from techcrunch.com by Ingrid Lunden
    Excerpt:
    Facebook has made no secret of its wish to do more in the online recruitment market — encroaching on territory today dominated by LinkedIn, the leader in tapping social networking graphs to boost job-hunting. Today, Facebook is taking the next step in that process.
    Facebook will now integrate with ZipRecruiter — an aggregator that allows those looking to fill jobs to post ads to many traditional job boards, as well as sites like LinkedIn, Google and Twitter — to boost the number of job ads available on its platform targeting its 2 billion monthly active users.
    The move follows Facebook launching its first job ads earlier this year, and later appearing to be interested in augmenting that with more career-focused features, such as a platform to connect people looking for mentors with those looking to offer mentorship.

 

 

 

Six reasons why disruption is coming to learning departments — from feathercap.net, with thanks to Mr. Tim Seager for this resource

Excerpts:

  1. Training materials and interactions will not just be pre-built courses but any structured or unstructured content available to the organization.
  2. Curation of all learning, employee or any useful organizational content will become a whole lot easier.
  3. The learning department won’t have to build it all themselves.
  4. Learning bots and voice-enabled learning.
  5. Current workplace learning systems and LMSs will go through a big transition or they will lose relevancy.
  6. Learning departments will go beyond onboarding, compliance training and leadership training and move to training everyone in the company on all job skills.

 

A successful example of this is Amazon.com. As shoppers on its site, we have access to millions of book and product SKUs. Amazon uses a combination of all three techniques to surface the right book or product based on our behavior and our peers’ experiences, as well as a semantic understanding of the product page we’re viewing. There’s no reason we can’t have the same experience in workplace learning systems, where all viable learning content/company content could be organized and disseminated to each learner at the right time and in the right circumstance.

 

 



From DSC:
Several points in Feathercap’s solid posting remind me of a vision of a next generation learning platform:

  • Contributing to — and tapping into — streams of content
  • Lifelong learning and reinventing oneself
  • Artificial intelligence, including Natural Language Processing (NLP) and the use of voice to drive systems/functionality
  • Learning agents/bots
  • 24×7 access
  • Structured and unstructured learning
  • Socially-based means of learning
  • …and more

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

From DSC:
The vast majority of the lessons being offered within K-12 and the lectures (if we’re going to continue to offer them) within higher education should be recorded.

Why do I say this?

Well…first of all…let me speak as a parent of 3 kids, one of whom requires a team of specialists to help her learn. When she misses school because she’s out sick, it’s a major effort to get her caught up. As a parent, it would be soooooo great to log into a system and obtain an updated digital playlist of the lessons that she’s missed. She and I could click on the links to the recordings in order to see how the teacher wants our daughter to learn concepts A, B, and C. We could pause, rewind, fast forward, and replay the recording over and over again until our daughter gets it (and I as a parent get it too!).

I realize that I’m not saying anything especially new here, but we need to do a far better job of providing our overworked teachers with more time, funding, and/or other types of resources — such as instructional designers, videographers and/or One-Button Studios, other multimedia specialists, etc. — to develop these recordings. Perhaps each teacher — or team — could be paid to record and contribute their lessons to a pool of content that could be used over and over again. Also, the use of RSS feeds and content aggregators such as Feedly could come in handy here as well. Parents/learners could subscribe to streams of content.

Such a system would be a huge help to the teachers as well. They could refer students to these digital playlists as appropriate — having updated the missing students’ playlists based on what the teacher has covered that day (and who was out sick, at another school-sponsored event, etc.). They wouldn’t have to re-explain something as many times if they had recordings to reference.
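The “updated digital playlist” described above is, at bottom, a join between attendance records and each day’s recorded lessons. A minimal illustrative sketch (all data hypothetical):

```python
from datetime import date

# Hypothetical data: lessons recorded per school day, and each student's absences.
recordings = {
    date(2017, 9, 25): ["Math: concept A", "Reading: concept B"],
    date(2017, 9, 26): ["Math: concept C"],
    date(2017, 9, 27): ["Science: concept D"],
}
absences = {"student_1": [date(2017, 9, 26), date(2017, 9, 27)]}

def catch_up_playlist(student: str) -> list[str]:
    """Collect, in date order, every recorded lesson the student missed."""
    playlist = []
    for day in sorted(absences.get(student, [])):
        playlist.extend(recordings.get(day, []))
    return playlist

print(catch_up_playlist("student_1"))   # the missed lessons, in date order
```

In practice the teacher would only need to tag each recording with its date and class; the playlist generation itself is mechanical.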

—–

Also, within the realm of higher education, having recordings/transcripts of lectures and presentations would be especially helpful to learners who take more time to process what’s being said. And while that might include ESL learners here in the U.S., such recordings could benefit the majority of learners. From my days in college, I can remember trying to write down much of what the professor was saying, but not having a chance to really process much of the information until later, when I looked over my notes. Finally, learners who wanted to review some concepts before a mid-term or final would greatly appreciate these recordings.

Again, I realize this isn’t anything new. But it makes me scratch my head and wonder why we haven’t made more progress in this area, especially at the K-12 level…? It’s 2017. We can do better.

 



Some relevant tools here include:

Also see:

Everything Apple Announced — from wired.com by Arielle Pardes

Excerpt:

To much fanfare, Apple CEO Tim Cook unveiled the next crop of iPhones [on 9/12/17] at the new Steve Jobs Theater at Apple’s new headquarters in Cupertino. With the introduction of three new phones, Cook made clear that Apple’s premier product is very much still evolving. The iPhone X, he said, represents “the future of smartphones”: a platform for augmented reality, a tool for powerful computing, a screen for everything. But it’s not all about the iPhones. The event also brought with it a brand new Apple Watch, upgrades to Apple TV, and a host of other features coming to the Apple ecosystem this fall. Missed the big show? Check out our archived live coverage of Apple’s big bash, and read all the highlights below.

 

 

iPhone Event 2017 — from techcrunch.com

From DSC:
A nice listing of articles that cover all of the announcements.

 

 

Apple Bets on Augmented Reality to Sell Its Most Expensive Phone — from bloomberg.com by Alex Webb and Mark Gurman

Excerpt:

Apple Inc. packed its $1,000 iPhone with augmented reality features, betting the nascent technology will persuade consumers to pay premium prices for its products even as cheaper alternatives abound.

The iPhone X, Apple’s most expensive phone ever, was one of three new models Chief Executive Officer Tim Cook showed off during an event at the company’s new $5 billion headquarters in Cupertino, California, on Tuesday. It also rolled out an updated Apple Watch with a cellular connection and an Apple TV set-top box that supports higher-definition video.

Augmented Reality
Apple executives spent much of Tuesday’s event describing how AR is at the core of the new flagship iPhone X. Its new screen, 3-D sensors, and dual cameras are designed for AR video games and other more-practical uses such as measuring digital objects in real world spaces. Months before the launch, Apple released a tool called ARKit that made it easier for developers to add AR capabilities to their apps.

These technologies have never been available in consumer devices and “solidify the platform on which Apple will retain and grow its user base for the next decade,” Gene Munster of Loup Ventures wrote in a note following Apple’s event.

The company is also working on smart glasses that may be AR-enabled, people familiar with the plan told Bloomberg earlier this year.

 

 

Meet the iPhone X, Apple’s New High-End Handset — from wired.com by David Pierce

Excerpt:

First of all, the X looks like no other phone. It doesn’t even look like an iPhone. On the front, it’s screen head to foot, save for a small trapezoidal notch taken out of the top where Apple put selfie cameras and sensors. Otherwise, the bezel around the edge of the phone has been whittled to near-nonexistence and the home button disappeared—all screen and nothing else. The case is made of glass and stainless steel, like the much-loved iPhone 4. The notched screen might take some getting used to, but the phone’s a stunner. It goes on sale starting at $999 on October 27, and it ships November 3.

If you can’t get your hands on an iPhone X in the near future, Apple still has two new models for you. The iPhone 8 and 8 Plus both look like the iPhone 7—with home buttons!—but offer a few big upgrades to match the iPhone X. Both new models support wireless charging, run the latest A11 Bionic processor, and have 2 gigs of RAM. They also have glass backs, which gives them a glossy new look. They don’t have OLED screens, but they’re getting the same TrueTone tech as the X, and they can shoot video in 4K.

 

 

Apple Debuts the Series 3 Apple Watch, Now With Cellular — from wired.com by David Pierce

 

 

 

Ikea and Apple team up on augmented reality home design app — from curbed.com by Asad Syrkett
The ‘Ikea Place’ app lets shoppers virtually test drive furniture

 

 

The New Apple iPhone 8 Is Built for Photography and Augmented Reality — from time.com by Alex Fitzpatrick

Excerpt:

Apple says the new iPhones are also optimized for augmented reality, or AR, which is software that makes it appear that digital images exist in the user’s real-world environment. Apple SVP Phil Schiller demonstrated several apps making use of AR technology, from a baseball app that shows users player statistics when pointing their phone at the field to a stargazing app that displays the location of constellations and other celestial objects in the night sky. Gaming will be a major use case for AR as well.

Apple’s new iPhone 8 and iPhone 8 plus will have wireless charging as well. Users will be able to charge the device by laying it down on a specially-designed power mat on their desk, bedside table or inside their car, similar to how the Apple Watch charges. (Competing Android devices have long had a similar feature.) Apple is using the Qi wireless charging standard for the iPhones.

 

 

Why you shouldn’t unlock your phone with your face — from medium.com by Quincy Larson

Excerpt:

Today Apple announced its new FaceID technology. It’s a new way to unlock your phone through facial recognition. All you have to do is look at your phone and it will recognize you and unlock itself. At time of writing, nobody outside of Apple has tested the security of FaceID. So this article is about the security of facial recognition, and other forms of biometric identification in general.

Historically, biometric identification has been insecure. Cameras can be tricked. Voices can be recorded. Fingerprints can be lifted. And in many countries — including the US — the police can legally force you to use your fingerprint to unlock your phone. So they can most certainly point your phone at your face and unlock it against your will. If you value the security of your data — your email, social media accounts, family photos, the history of every place you’ve ever been with your phone — then I recommend against using biometric identification.

Instead, use a passcode to unlock your phone.

 

 

The iPhone lineup just got really compleX  — from techcrunch.com by Josh Constine

 

 

 

 

 

Apple’s ‘Neural Engine’ Infuses the iPhone With AI Smarts — from wired.com by Tom Simonite

Excerpt:

When Apple CEO Tim Cook introduced the iPhone X Tuesday he claimed it would “set the path for technology for the next decade.” Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term dreams of Apple and its competitors.

That feature is the “neural engine,” part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

Chip experts say the neural engine could become central to the future of the iPhone as Apple moves more deeply into areas such as augmented reality and image recognition, which rely on machine-learning algorithms. They predict that Google, Samsung, and other leading mobile-tech companies will soon create neural engines of their own. Earlier this month, China’s Huawei announced a new mobile chip with a dedicated “neural processing unit” to accelerate machine learning.

 

 

 

 

The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services

Further details here >>
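The quiz-until-mastered loop described in the list above can be sketched as a simple spaced-repetition rule: each correct quiz answer widens the review interval, a miss resets it, and a streak of correct answers counts as mastery, at which point the platform could ask whether to keep the subscription. This is a bare-bones illustration, not the SM-2 algorithm or any production system:

```python
class TopicStream:
    """A subscribed stream of content, quizzed on a widening schedule."""
    MASTERY_STREAK = 3            # consecutive correct answers = mastery

    def __init__(self, topic: str):
        self.topic = topic
        self.interval_days = 1    # days until the next quiz
        self.streak = 0           # consecutive correct answers so far

    def record_quiz(self, correct: bool) -> None:
        if correct:
            self.streak += 1
            self.interval_days *= 2   # space the next review further out
        else:
            self.streak = 0
            self.interval_days = 1    # missed it: start the cycle over

    @property
    def mastered(self) -> bool:
        return self.streak >= self.MASTERY_STREAK

stream = TopicStream("intro statistics")
for answer in (True, False, True, True, True):
    stream.record_quiz(answer)
print(stream.mastered, stream.interval_days)   # → True 8
```

Once mastered returns True, the platform would prompt the learner to keep or drop that stream of content.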

 

 

 



Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

 

 

 

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.

 

 

 

 

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller devices (smartphones, laptops, and/or desktop workstations) will be used to communicate synchronously or asynchronously with other learners, larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation, such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
  • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
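The built-in RSS aggregator listed above would, at a minimum, need to turn feed XML into items. The Python standard library is enough for a sketch; a production aggregator such as Feedly also handles fetching, deduplication, and scheduling (the feed content below is made up):

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed representing one "stream of content".
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Hypothetical learning stream</title>
  <item><title>Lesson 1: NLP basics</title><link>https://example.com/1</link></item>
  <item><title>Lesson 2: Chatbots</title><link>https://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text: str) -> list[dict]:
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [{"title": item.findtext("title"), "link": item.findtext("link")}
            for item in root.iter("item")]

for entry in parse_feed(SAMPLE_FEED):
    print(entry["title"], "->", entry["link"])
```

Subscribing and unsubscribing then amount to adding or removing feed URLs from the set the aggregator polls.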

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

 

 

 

 

 

 



From DSC:

Now we’re seeing a major competition among the heavy hitters to own one’s living room, kitchen, and more: voice-controlled artificial intelligence. But now add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

 

 

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 



 

Addendum on 5/10/17:

 



 

 

Don’t discount the game-changing power of the morphing “TV” when coupled with AI, NLP, and blockchain-based technologies! [Christian]

From DSC:

Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!

When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into and what functionalities they will be able to provide to us in the not-too-distant future…?

For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that not only provide access to Alexa — a personal assistant from Amazon which uses artificial intelligence — but also to over 7,000 apps and games via the Amazon Fire TV Store.

Some of the questions that come to my mind:

  • Why can’t there be more educationally-related games and apps available on this type of platform?
  • Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
  • When will potential employers start asking for access to such web-based learner profiles?
  • Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
  • Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
  • Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
  • Will virtual tutoring be one of the available apps/channels?
  • Will the microphone and the wide-angle HD camera on the “TV” be able to be disconnected from the Internet for security reasons (i.e., to ensure no hacker is eavesdropping on one’s private life)?
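
The learner-profile questions above can be made concrete with a small sketch. The class below is purely hypothetical (no real product or blockchain API is assumed): entries are hash-chained so that tampering with any earlier credential is detectable, and the learner alone decides which parties may write to the profile.

```python
import hashlib
import json
import time

class LearnerProfile:
    """A toy append-only learner profile: each entry is hash-chained to
    the previous one (blockchain-style), and only writers the learner
    has authorized may append credentials."""

    def __init__(self, owner):
        self.owner = owner
        self.authorized_writers = {owner}  # the learner controls this set
        self.entries = []                  # the hash-chained records

    def authorize(self, writer):
        self.authorized_writers.add(writer)

    def add_credential(self, writer, credential):
        if writer not in self.authorized_writers:
            raise PermissionError(f"{writer} may not write to this profile")
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "writer": writer,
            "credential": credential,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # The hash covers the entry's contents plus the previous hash,
        # so altering any earlier entry breaks the chain.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify_chain(self):
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A potential employer granted read access would simply call `verify_chain()` before trusting the listed credentials; a production system would add digital signatures and distributed storage on top of this skeleton.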

Forget a streaming stick: These 4K TVs come with Amazon Fire TV inside — from techradar.com by Nick Pino

Excerpt:

The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.

Addendums

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • Once thought to be a fad, MOOCs showed staying power in 2016 — from educationdive.com
    Dive Brief:

    • EdSurge profiles the growth of massive open online courses in 2016, which attracted more than 58 million students across over 700 colleges and universities last year.
    • The top three MOOC providers — Coursera, Udacity, and edX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall-guarded materials.
    • Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in the job marketplace.
 

Some reflections/resources on today’s announcements from Apple
From DSC:
How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?

That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results from the module(s) or course(s) we take being sent to:

  • A credentials database on LinkedIn (via blockchain)
    and/or
  • A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
    and/or
  • Our cloud-based learner profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
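
As a rough sketch of the playlist-plus-credentialing idea in this list (every name below, from the catalog entries to the destination identifiers, is invented for illustration; no real Apple TV, LinkedIn, or blockchain API is implied):

```python
def learning_playlist(catalog, topics):
    """Filter a general-purpose recommendation catalog down to
    educational items matching the learner's topics of interest."""
    return [item for item in catalog
            if item["category"] == "education" and item["topic"] in topics]

def route_result(result, destinations):
    """Fan one assessment result out to every credential store the
    learner has opted into (e.g. a LinkedIn credential database, a
    university's records, a cloud-based learner profile)."""
    return {dest: dict(result) for dest in destinations}

catalog = [
    {"title": "Intro to Python",  "category": "education",     "topic": "programming"},
    {"title": "Blockbuster Film", "category": "entertainment", "topic": "film"},
    {"title": "Statistics 101",   "category": "education",     "topic": "math"},
]

playlist = learning_playlist(catalog, {"programming", "math"})
# playlist now holds only the two educational titles

deliveries = route_result(
    {"module": "Statistics 101", "score": 0.92},
    ["linkedin_credentials", "university_registrar", "learner_profile"],
)
```

The interesting design choice is the fan-out: the learner, not the platform, supplies the destination list, which matches the idea of a learner-controlled profile.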

Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?

Will solid ROIs, from having thousands of participants each paying a smaller amount to take a course virtually, enable higher production values?

Will bots and/or human tutors be instantly accessible from our couches?

Will we be able to meet virtually via our TVs and share our computing devices?
Other items on today’s announcements:
All the big announcements from Apple’s Mac event — from amp.imore.com by Joseph Keller

  • MacBook Pro
  • Final Cut Pro X
  • Apple TV > new “TV” app
  • Touch Bar

Apple is finally unifying the TV streaming experience with new app — from techradar.com by Nick Pino
How to migrate your old Mac’s data to your new Mac — from amp.imore.com by Lory Gil
MacBook Pro FAQ: Everything you need to know about Apple’s new laptops — from amp.imore.com by Serenity Caldwell
Accessibility FAQ: Everything you need to know about Apple’s new accessibility portal — from imore.com by Daniel Bader
Apple’s New MacBook Pro Has a ‘Touch Bar’ on the Keyboard — from wired.com by Brian Barrett
Apple’s New TV App Won’t Have Netflix or Amazon Video — from wired.com by Brian Barrett
Apple 5th Gen TV To Come With Major Software Updates; Release Date Likely In 2017 — from mobilenapps.com
IBM Foundation collaborates with AFT and education leaders to use Watson to help teachers — from finance.yahoo.com

Excerpt:

ARMONK, N.Y., Sept. 28, 2016 /PRNewswire/ — Teachers will have access to a new, first-of-its-kind, free tool using IBM’s innovative Watson cognitive technology that has been trained by teachers and designed to strengthen teachers’ instruction and improve student achievement, the IBM Foundation and the American Federation of Teachers announced today.

Hundreds of elementary school teachers across the United States are piloting Teacher Advisor with Watson – an innovative tool by the IBM Foundation that provides teachers with a complete, personalized online resource. Teacher Advisor enables teachers to deepen their knowledge of key math concepts, access high-quality vetted math lessons and acclaimed teaching strategies and gives teachers the unique ability to tailor those lessons to meet their individual classroom needs.

Litow said there are plans to make Teacher Advisor available to all elementary school teachers across the U.S. before the end of the year.
In this first phase, Teacher Advisor offers hundreds of high-quality vetted lesson plans, instructional resources, and teaching techniques, which are customized to meet the needs of individual teachers and the particular needs of their students.
Educators can also access high-quality videos on teaching techniques to master key skills and bring a lesson or teaching strategy to life into their classroom.

 

 

From DSC:
Today’s announcement involved personalization and giving customized directions, and it caused my mind to go in a slightly different direction. (IBM, Google, Microsoft, Apple, Amazon, and others like Smart Sparrow are likely thinking along these lines as well. Perhaps they’re already there…I’m not sure.)

But given the advancements in machine learning/cognitive computing (where example applications include optical character recognition (OCR) and computer vision), how much longer will it be before software can remotely or locally “see” what a third grader wrote down for a given math problem (via character and symbol recognition) and check over the student’s work? If the answer is incorrect, the algorithms will likely know where the student went wrong. The software will be able to ascertain what the student did wrong and then show how the problem should be solved (either via hints or by working the entire problem for the student, per the teacher’s instructions/admin settings). Perhaps, via natural language processing, this feedback could be verbalized as well.
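
Setting the OCR itself aside and assuming the student’s steps have already been recognized as text, the checking step is straightforward: evaluate each recognized line and flag the first one that doesn’t hold. The step format and hint wording below are invented for illustration.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def check_work(steps):
    """Given recognized steps of the form "a op b = c", return the index
    of the first incorrect step plus a hint, or None if all check out."""
    for i, step in enumerate(steps):
        lhs, claimed = step.split("=")
        a, op, b = lhs.split()
        expected = OPS[op](float(a), float(b))
        if abs(expected - float(claimed)) > 1e-9:
            return i, f"Step {i + 1}: {lhs.strip()} = {expected:g}, not {claimed.strip()}"
    return None

result = check_work(["12 * 3 = 36", "36 + 7 = 44"])
# flags the second step (index 1), since 36 + 7 is 43, not 44
```

Per the teacher’s settings, the returned hint could be shown as-is, withheld in favor of a gentler nudge, or (via text-to-speech) verbalized to the student.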

Further questions/thoughts/reflections then came to my mind:

  • Will we have bots that teachers can use to teach different subjects? (“Watson may even ask the teacher additional questions to refine its response, honing in on what the teacher needs to address certain challenges.”)
  • Will we have bots that students can use to get the basics of a given subject/topic/equation?
  • Will instructional designers — and/or trainers in the corporate world — need to modify their skillsets to develop these types of bots?
  • Will teachers — as well as schools of education in universities and colleges — need to modify their toolboxes and their knowledgebases to take advantage of these sorts of developments?
  • How might the corporate world take advantage of these trends and technologies?
  • Will MOOCs begin to incorporate these sorts of technologies to aid in personalized learning?
  • What sorts of delivery mechanisms could be involved? Will we be tapping into learning-related bots from our living rooms or via our smartphones?

© 2017 | Daniel Christian