AWS unveils ‘Transcribe’ and ‘Translate’ machine learning services — from business-standard.com

Excerpts:

  • Amazon “Transcribe” provides grammatically correct transcriptions of audio files to allow audio data to be analyzed, indexed and searched.
  • Amazon “Translate” provides natural sounding language translation in both real-time and batch scenarios.

 

 

Google’s ‘secret’ smart city on Toronto’s waterfront sparks row — from bbc.com by Robin Levinson-King BBC News, Toronto

Excerpt:

The project was commissioned by the publicly funded organisation Waterfront Toronto, which put out calls last spring for proposals to revitalise the 12-acre industrial neighbourhood of Quayside along Toronto's waterfront.

Prime Minister Justin Trudeau flew down to announce the agreement with Sidewalk Labs, which is owned by Google's parent company Alphabet, last October, and the project has received international attention for being one of the first smart cities designed from the ground up.

But five months later, few people have actually seen the full agreement between Sidewalk and Waterfront Toronto.

As council’s representative on Waterfront Toronto’s board, Mr Minnan-Wong is the only elected official to actually see the legal agreement in full. Not even the mayor knows what the city has signed on for.

“We got very little notice. We were essentially told ‘here’s the agreement, the prime minister’s coming to make the announcement,'” he said.

“Very little time to read, very little time to absorb.”

Now, his hands are tied – he is legally not allowed to comment on the contents of the sealed deal, but he has been vocal about his belief it should be made public.

“Do I have concerns about the content of that agreement? Yes,” he said.

“What is it that is being hidden, why does it have to be secret?”

From DSC:
Google needs to be very careful here. Increasingly these days, our trust in them (and in other large tech companies) is at stake.

 

 

Addendum on 4/16/18 with thanks to Uros Kovacevic for this resource:
Human lives saved by robotic replacements — from injuryclaimcoach.com

Excerpt:

For academics and average workers alike, the prospect of automation provokes concern and controversy. As the American workplace continues to mechanize, some experts see harsh implications for employment, including the loss of 73 million jobs by 2030. Others maintain more optimism about the fate of the global economy, contending technological advances could grow worldwide GDP by more than $1.1 trillion in the next 10 to 15 years. Whatever we make of these predictions, there’s no question automation will shape the economic future of the nation – and the world.

But while these fiscal considerations are important, automation may positively affect an even more essential concern: human life. Every day, thousands of Americans risk injury or death simply by going to work in dangerous conditions. If robots replaced them, could hundreds of lives be saved in the years to come?

In this project, we studied how many fatal injuries could be averted if dangerous occupations were automated. To do so, we analyzed which fields are most deadly and the likelihood of their automation according to expert predictions. To see how automation could save Americans’ lives, keep reading.

Also related to this item is:
How AI is improving the landscape of work  — from forbes.com by Laurence Bradford

Excerpts:

There have been a lot of sci-fi stories written about artificial intelligence. But now that it’s actually becoming a reality, how is it really affecting the world? Let’s take a look at the current state of AI and some of the things it’s doing for modern society.

  • Creating New Technology Jobs
  • Using Machine Learning To Eliminate Busywork
  • Preventing Workplace Injuries With Automation
  • Reducing Human Error With Smart Algorithms

From DSC:
This is clearly a pro-AI piece. Not all uses of AI are beneficial, but this article mentions several use cases where AI can make positive contributions to society.

 

 

 

It’s About Augmented Intelligence, not Artificial Intelligence — from informationweek.com
The adoption of AI applications isn’t about replacing workers but helping workers do their jobs better.

 

From DSC:
This article is also a pro-AI piece. But again, not all uses of AI are beneficial. We need to be aware of — and involved in — what is happening with AI.

 

 

 

Investing in an Automated Future — from clomedia.com by Mariel Tishma
Employers recognize that technological advances like AI and automation will require employees with new skills. Why are so few investing in the necessary learning?

 

 

 

 

 

Experience Virtual Reality on the web with Chrome — from blog.google

Excerpt:

Virtual reality (VR) lets you tour the Turkish palace featured in “Die Another Day,” learn about life in a Syrian refugee camp firsthand, and walk through your dream home right from your living room. With the latest version of Chrome, we’re bringing VR to the web—making it as easy to step inside Air Force One as it is to access your favorite webpage.

For a fully immersive experience, use Chrome with your Daydream-ready phone and Daydream View—just browse to a VR experience you want to view, choose to enter VR, and put the phone in your Daydream View headset. If you don’t have a headset you can view VR content on any phone or desktop computer and interact using your finger or mouse.

You can already try out some great VR-enabled sites, with more coming soon. For example, explore the intersection of humans, nature and technology in the interactive documentary Bear 71. Questioning how we see the world through the lens of technology, this story blurs the lines between the wild world and the wired one.

 

 

Learn A New Language With Your Mobile Using MondlyAR — from vrfocus.com
Start learning a new language today on your Android device.

Excerpt:

MondlyAR features an avatar “teacher” who brings virtual objects – planets, animals, musical instruments and more – into the room as teaching tools, engages the user in conversations and gives instant feedback on pronunciation thanks to chatbot technology. By incorporating these lifelike elements in the lessons, students are more likely to understand, process, and retain what they are taught.

Users will have seven languages to choose from: American English, British English, French, Spanish, Italian, Portuguese, and German, with the studio expecting to offer no fewer than 30 languages in AR by the next update in August 2018.

 

 

Augmented Reality takes 3-D printing to next level — from rtoz.org

Excerpt:

Cornell researchers are taking 3-D printing and 3-D modeling to a new level by using augmented reality (AR) to allow designers to design in physical space while a robotic arm rapidly prints the work. To use the Robotic Modeling Assistant (RoMA), a designer wears an AR headset with hand controllers. As soon as a design feature is completed, the robotic arm prints the new feature.

 

 

 

The Legal Hazards of Virtual Reality and Augmented Reality Apps — by Tam Harbert
Liability and intellectual property issues are just two areas developers need to know about

Excerpt:

As virtual- and augmented-reality technologies mature, legal questions are emerging that could trip up VR and AR developers. One of the first lawyers to explore these questions is Robyn Chatwood, of the international law firm Dentons. “VR and AR are areas where the law is just not keeping up with [technology] developments,” she says. IEEE Spectrum contributing editor Tam Harbert talked with Chatwood about the legal challenges.

 

 

Why VR has a bright future in the elearning world — from elearninglearning.com by Origin Learning

Excerpt:

The Benefits of Using Virtual Reality in eLearning

  • It offers a visual approach – According to numerous studies, people retain what they have read better when they are able to see it or experience it somehow. VR in eLearning makes this possible and creates a completely new visual experience to improve learners’ retention capacity and their understanding of the material.
  • It lowers the risk factor – VR in eLearning can simulate dangerous and risky situations in an environment that is controllable, so that it removes the risk factor usually associated with such situations. This lets learners alleviate their fear of making a mistake.
  • It facilitates complex data – Like the visual approach, when learners can really experience complex situations, they are more likely to handle them with ease. VR simplifies the complexity of those situations, allowing learners to actually experience everything themselves, rather than just reading about it.
  • It offers remote access – VR in eLearning doesn’t require an actual classroom so that learning can be conducted remotely, which can help you save a lot of time and money that would normally have to be spent on planning a complete learning program.
  • It provides real-life scenarios – As mentioned, one of the greatest things about VR in the context of eLearning is that it allows learners to really immerse themselves in various virtual scenarios. For instance, if the learning program involves some real situation that a certain business has faced before, an employee will be able to handle such a situation more efficiently after experiencing it virtually.
  • It is fun and innovative – People love to try out new things. VR offers a completely innovative and interactive approach that makes learning entertaining rather than a dull, everyday process.

 

5 reasons to use augmented reality in education — from kitaboo.com

Excerpt:

[AR] is making it possible to add a layer of enhanced reality to a context-sensitive virtual world. This gives educators and trainers numerous possibilities to enhance the learning experience, making it lively, significant and circumstantial to the learner.

According to the investment company, Goldman Sachs, Augmented Reality “has the potential to become a standard tool in education and could revolutionize the way in which students are taught, for both the K-12 segment and higher education.” The company further projects that by 2025, there would be 15 million users of educational AR worldwide, representing a $700 million market.

Let’s have a look at 5 main reasons to use Augmented Reality in education.

 

 

 

The Difference Between Virtual Reality, Augmented Reality And Mixed Reality — from forbes.com

 

 

 

 

 

How to Set Up a VR Pilot — from campustechnology.com by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.

Excerpt:

The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.

The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.

 

 

 

The Future of Language Learning: Augmented Reality vs Virtual Reality — from medium.com by Denis Hurley

Excerpts:

Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.

While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.

I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.

 

 

Bose has a plan to make AR glasses — from cnet.com by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that’s all about audio.

Excerpts:

“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”

The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”

Bose adds the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”

 

Bose is making AR glasses focused on audio, not visuals

Here are some examples Bose gave for how it might be used:

  • For travel, the Bose AR could simulate historic events at landmarks as you view them — "so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go." You could hear a statue make a famous speech when you approach it. Or get told which way to turn towards your departure gate while checking in at the airport.
  • Bose AR could translate a sign you're reading. Or tell you the word or phrase for what you're looking at in any language. Or explain the story behind the painting you've just approached.
  • With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
  • Bose AR would add useful information based on where you look. Like the forecast when you look up or information about restaurants on the street you look down.

 

 

The 10 Best VR Apps for Classrooms Using Merge VR’s New Merge Cube — from edsurge.com

 

Google Lens arrives on iOS — from techcrunch.com by Sarah Perez

Excerpt:

On the heels of last week’s rollout on Android, Google’s  new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.

For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.

 

The eventual goal is to allow smartphone cameras to understand what it is they’re seeing across any type of photo, then helping you take action on that information, if need be – whether that’s calling a business, saving contact information, or just learning about the world on the other side of the camera.

 

 

15 Top Augmented Reality (AR) Apps Changing Education — from vudream.com by Steven Wesley

 

 

 

CNN VR App Brings News to Oculus Rift — from vrscout.com by Jonathan Nafarrete

 

 

 

 

DC: The next generation learning platform will likely offer us virtual reality-enabled learning experiences such as this "flight simulator for teachers."

Virtual reality simulates classroom environment for aspiring teachers — from phys.org by Charles Anzalone, University at Buffalo

Excerpt (emphasis DSC):

Two University at Buffalo education researchers have teamed up to create an interactive classroom environment in which state-of-the-art virtual reality simulates difficult student behavior, a training method its designers compare to a “flight simulator for teachers.”

The new program, already earning endorsements from teachers and administrators in an inner-city Buffalo school, ties into State University of New York Chancellor Nancy L. Zimpher’s call for innovative teaching experiences and “immersive” clinical experiences and teacher preparation.

The training simulator Lamb compared to a teacher flight simulator uses an emerging computer technology known as virtual reality. Becoming more popular and accessible commercially, virtual reality immerses the subject in what Lamb calls “three-dimensional environments in such a way where that environment is continuous around them.” An important characteristic of the best virtual reality environments is a convincing and powerful representation of the imaginary setting.

 

Also related/see:

 

  • TeachLive.org
    TLE TeachLivE™ is a mixed-reality classroom with simulated students that provides teachers the opportunity to develop their pedagogical practice in a safe environment that doesn’t place real students at risk.  This lab is currently the only one in the country using a mixed reality environment to prepare or retrain pre-service and in-service teachers. The use of TLE TeachLivE™ Lab has also been instrumental in developing transition skills for students with significant disabilities, providing immediate feedback through bug-in-ear technology to pre-service teachers, developing discrete trial skills in pre-service and in-service teachers, and preparing teachers in the use of STEM-related instructional strategies.

 

 

 

 

 

This start-up uses virtual reality to get your kids excited about learning chemistry — from cnbc.com by Lora Kolodny and Erin Black

  • MEL Science raised $2.2 million in venture funding to bring virtual reality chemistry lessons to schools in the U.S.
  • Eighty-two percent of science teachers surveyed in the U.S. believe virtual reality content can help their students master their subjects.

 

This start-up uses virtual reality to get your kids excited about learning chemistry from CNBC.

 

 


From DSC:
It will be interesting to see all the “places” we will be able to go and interact within — all from the comfort of our living rooms! Next generation simulators should be something else for teaching/learning & training-related purposes!!!

The next gen learning platform will likely offer such virtual reality-enabled learning experiences, along with voice recognition/translation services and a slew of other technologies — such as AI, blockchain*, chatbots, data mining/analytics, web-based learner profiles, an online-based marketplace supported by the work of learning-based free agents, and others — running in the background. All of these elements will work to offer us personalized, up-to-date learning experiences — helping each of us stay relevant in the marketplace as well as simply enabling us to enjoy learning about new things.

But the potentially disruptive piece of all of this is that this next generation learning platform could create an Amazon.com of what we now refer to as “higher education.”  It could just as easily serve as a platform for offering learning experiences for learners in K-12 as well as the corporate learning & development space.

 

I’m tracking these developments at:
http://danielschristian.com/thelivingclassroom/

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 


*  Also see:


Blockchain, Bitcoin and the Tokenization of Learning — from edsurge.com by Sydney Johnson

Excerpt:

In 2014, Kings College in New York became the first university in the U.S. to accept Bitcoin for tuition payments, a move that seemed more of a PR stunt than the start of some new movement. Much has changed since then, including the value of Bitcoin itself, which skyrocketed to more than $19,000 earlier this month, catapulting cryptocurrencies into the mainstream.

A handful of other universities (and even preschools) now accept Bitcoin for tuition, but that’s hardly the extent of how blockchains and tokens are weaving their way into education: Educators and edtech entrepreneurs are now testing out everything from issuing degrees on the blockchain to paying people in cryptocurrency for their teaching.

 

 

 

 

The Beatriz Lab - A Journey through Alzheimer's Disease

This three-part lab can be experienced all at once or separately. At the beginning of each part, Beatriz’s brain acts as an omniscient narrator, helping learners understand how changes to the brain affect daily life and interactions.

Pre and post assessments, along with a facilitation guide, allow learners and instructors to see progression towards outcomes that are addressed through the story and content in the three parts, including:

1) increased knowledge of Alzheimer’s disease and the brain
2) enhanced confidence to care for people with Alzheimer’s disease
3) improvement in care practice

Why a lab about Alzheimer’s Disease?
The Beatriz Lab is very important to us at Embodied Labs. It is the experience that inspired the start of our company. We believe VR is more than a way to evoke feelings of empathy; rather, it is a powerful behavior-change tool. By taking the perspective of Beatriz, healthcare professionals and trainees are empowered to better care for people with Alzheimer's disease, leading to more effective care practices and better quality of life. Through embodying Beatriz, you will gain insight into life with Alzheimer's and be able to better connect with and care for your loved ones, patients, clients, or others in their communities who live with the disease every day. In our embodied VR experience, we hope to portray both the difficult and joyful moments — the disease surely is a mix of both.

Watch our new promo video to learn more!

 

 

As part of the experience, you will take a 360 degree trip into Beatriz’s brain,
and visit a neuron “forest” that is being affected by amyloid beta plaques and tau proteins.

 

From DSC:
I love the work that Carrie Shaw and @embodiedLabs are doing! Thanks Carrie & Company!

 

 

 

Top 7 Business Collaboration Conference Apps in Virtual Reality (VR) — from vudream.com by Ved Pitre

Excerpt (emphasis DSC):

As VR continues to grow and improve, the experiences will feel more real. But for now, here are the best business conference applications in virtual reality.

 

 

 

Final Cut Pro X Arrives With 360 VR Video Editing — from vrscout.com by Jonathan Nafarrete

Excerpt:

A sign of how Apple is supporting VR in parts of its ecosystem, Final Cut Pro X (along with Motion and Compressor), now has a complete toolset that lets you import, edit, and deliver 360° video in both monoscopic and stereoscopic formats.

Final Cut Pro X 10.4 comes with a handful of slick new features that we tested, such as advanced color grading and support for High Dynamic Range (HDR) workflows. All useful features for creators, not just VR editors, especially since Final Cut Pro is used so heavily in industries like video editing and production. But up until today, VR post-production options have been minimal, with no support from major VR headsets. We’ve had options with Adobe Premiere plus plugins, but not everyone wants to be pigeon-holed into a single software option. And Final Cut Pro X runs butter smooth on the new iMac, so there’s that.

Now with the ability to create immersive 360° films right in Final Cut Pro, an entirely new group of creators can dive into the world of 360 VR video. It's simple and intuitive, something we expect from an Apple product. The 360 VR toolset just works.

 

 

 

See Original, Exclusive Star Wars Artwork in VR — from vrscout.com by Alice Bonasio

 

Excerpt:

HWAM’s first exhibition is a unique collection of Star Wars production pieces, including the very first drawings made for the film franchise and never-before-seen production art from the original trilogy by Lucasfilm alum Joe Johnston, Ralph McQuarrie, Phil Tippett, Drew Struzan, Colin Cantwell, and more.

 

 

 

Learning a language in VR is less embarrassing than IRL — from qz.com by Alice Bonasio

Excerpt:

Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?

VR is the ultimate medium for delivering what is known as “experiential learning.” This education theory is based on the idea that we learn and remember things much better when doing something ourselves than by merely watching someone else do it or being told about it.

The immersive nature of VR means users remember content they interact with in virtual scenarios much more vividly than with any other medium. (According to experiments carried out by professor Ann Schlosser at the University of Washington, VR even has the capacity to prompt the development of false memories.)

 

 

Since immersion is a key factor in helping students not only learn much faster but also retain what they learn for longer, these powers can be harnessed in teaching and training—and there is also research that indicates that VR is an ideal tool for learning a language.

 

 


Addendum on 12/20/17:

 


 

 

 

Want to learn a new language? With this AR app, just point & tap — from fastcodesign.com by Mark Wilson
A new demo shows how augmented reality could redefine apps as we know them.

Excerpt:

There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.

To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops, and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, "what the heck is the Korean word for that?"

 

 

 

Also see:

Apple ARKit & Machine Learning Come Together Making Smarter Augmented Reality — from next.reality.news by Jason Odom

Excerpt:

The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the potential promise of augmented reality, we must find a way to fill that view with useful and contextual information. Of course, the job of creating contextual, valuable information, to fill the massive space that is the planet earth, is a daunting task to take on. Machine learning seems to be one solution many are moving toward.

Tokyo, Japan-based web developer Frances Ng released a video on Twitter showing off her first experiments with Apple's ARKit and CoreML, Apple's machine learning system. As you can see in the gifs below, her mobile device is being used to recognize a few objects around her room, and then display the name of the identified objects.
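
From DSC:
The demos above boil down to a simple loop: classify what the camera sees, then overlay the object's name (optionally translated). The real apps do this in Swift with ARKit and CoreML; here is a rough Python sketch of just that control flow, where a tiny lookup table stands in for the trained image classifier and a small dictionary stands in for a translation service (both purely hypothetical placeholders, not real APIs):

```python
# Sketch of the classify-then-label loop behind these AR demos.
# A dict stands in for a trained image classifier, and "frames" are
# just string identifiers instead of camera images.

# Hypothetical stand-in for a CoreML-style image classifier.
FAKE_MODEL = {
    "frame_with_lamp": ("lamp", 0.93),
    "frame_with_laptop": ("laptop", 0.88),
}

# Hypothetical translation table (a real app would call a translation API).
TRANSLATIONS = {
    "lamp": {"fr": "lampe", "es": "lámpara"},
    "laptop": {"fr": "ordinateur portable", "es": "portátil"},
}

CONFIDENCE_THRESHOLD = 0.5  # ignore low-confidence predictions

def classify_frame(frame):
    """Return (label, confidence) for a camera frame, or (None, 0.0)."""
    return FAKE_MODEL.get(frame, (None, 0.0))

def label_for_overlay(frame, language="en"):
    """Produce the text an AR app would draw over the recognized object."""
    label, confidence = classify_frame(frame)
    if label is None or confidence < CONFIDENCE_THRESHOLD:
        return None
    if language != "en":
        # fall back to the English label if no translation exists
        label = TRANSLATIONS.get(label, {}).get(language, label)
    return label

print(label_for_overlay("frame_with_lamp", "fr"))  # -> lampe
```

The interesting design question the real demos answer is how often to run this loop per second on-device; the logic itself stays this simple.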

 

 

 

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business transforming, life changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.
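
From DSC:
To make the tokenize-classify-respond flow described above concrete, here is a deliberately tiny sketch in Python. Production NLP systems use statistical or neural models rather than keyword overlap, and the intents and responses below are made up for illustration:

```python
# Toy illustration of an NLP pipeline for a BI chatbot: tokenize the
# user's question, score it against known "intents," and answer in
# plain language.
import re

# Hypothetical intents such a chatbot might support.
INTENTS = {
    "sales_report": {"sales", "revenue", "sold"},
    "directions": {"directions", "route", "navigate"},
}

RESPONSES = {
    "sales_report": "Pulling up the latest sales figures...",
    "directions": "Finding the best route for you...",
}

def tokenize(text):
    """Lowercase a sentence and split it into word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def classify(text):
    """Return the intent whose keyword set best overlaps the tokens."""
    tokens = tokenize(text)
    best, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

def respond(text):
    intent = classify(text)
    return RESPONSES.get(intent, "Sorry, I didn't understand that.")

print(respond("How much revenue did we make last quarter?"))
# -> Pulling up the latest sales figures...
```

The leap from this sketch to Siri is enormous, but the shape of the pipeline (understand the question, find the information, respond in natural language) is the same one the article describes.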

 

 

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.

 

 

 

 
 

The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content or not.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services
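The mastery-tracking idea above – periodic quizzes via spaced repetition – can be sketched as a simple Leitner-style scheduler. This is only an illustration: the `Card` class, box count, and interval values are invented here, not part of the platform described.

```python
from datetime import date, timedelta

# Illustrative review intervals (days) for each Leitner box.
INTERVALS = [1, 3, 7, 14, 30]

class Card:
    """One concept in a learner's stream of content."""
    def __init__(self, topic):
        self.topic = topic
        self.box = 0                      # start in the most frequent box
        self.due = date.today()

    def review(self, answered_correctly, today=None):
        """Promote on success, demote on failure, then reschedule."""
        today = today or date.today()
        if answered_correctly:
            self.box = min(self.box + 1, len(INTERVALS) - 1)
        else:
            self.box = 0                  # back to daily review
        self.due = today + timedelta(days=INTERVALS[self.box])

    def mastered(self):
        """Treat reaching the last (longest-interval) box as mastery."""
        return self.box == len(INTERVALS) - 1

card = Card("Neural networks basics")
for _ in range(4):                        # four consecutive correct reviews
    card.review(answered_correctly=True)
print(card.mastered())                    # True
```

Once `mastered()` returns true, the platform described above would prompt the learner about whether to keep that stream of content flowing.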

Further details here >>

 

 

 



Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

 

 

 

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.

 

 

 

 

Introducing Deep Learning and Neural Networks — Deep Learning for Rookies — from medium.com by Nahua Kang

Excerpts:

Here’s a short list of general tasks that deep learning can perform in real situations:

  1. Identify faces (or more generally image categorization)
  2. Read handwritten digits and texts
  3. Recognize speech (no more transcribing interviews yourself)
  4. Translate languages
  5. Play computer games
  6. Control self-driving cars (and other types of robots)

And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!
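Every task in the list above boils down to learning a function from examples. The building block of the deep networks the article introduces – a single artificial neuron – can be trained in a few lines. This is a from-scratch teaching sketch (the toy OR dataset and learning rate are illustrative), not how production systems behind those tasks are built.

```python
import random

# A single neuron: weighted sum of inputs passed through a step activation.
def predict(weights, bias, x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Toy dataset: the logical OR function (inputs -> label).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)                            # deterministic initial weights
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
lr = 0.1                                  # learning rate

# Perceptron learning rule: nudge weights toward the correct answer.
for _ in range(20):
    for x, label in data:
        error = label - predict(weights, bias, x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])  # [0, 1, 1, 1]
```

Deep learning stacks many such neurons into layers and swaps the step function and hand-rolled update for differentiable activations and gradient descent, but the "learn from labeled examples" loop is the same idea.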

There are already many great courses, tutorials, and books on the internet covering this topic, including (not exhaustive, in no particular order):

  1. Michael Nielsen’s Neural Networks and Deep Learning
  2. Geoffrey Hinton’s Neural Networks for Machine Learning
  3. Goodfellow, Bengio, & Courville’s Deep Learning
  4. Andrew Trask’s Grokking Deep Learning
  5. Francois Chollet’s Deep Learning with Python
  6. Udacity’s Deep Learning Nanodegree (not free but high quality)
  7. Udemy’s Deep Learning A-Z ($10–$15)
  8. Stanford’s CS231n and CS224n
  9. Siraj Raval’s YouTube channel

The list goes on and on. David Venturi has a post for freeCodeCamp that lists many more resources. Check it out here.

 

 

 

 

 

When AI can transcribe everything — from theatlantic.com by Greg Noone
Tech companies are rapidly developing tools to save people from the drudgery of typing out conversations—and the impact could be profound.

Excerpt:

Despite the recent emergence of browser-based transcription aids, transcription remains an area of drudgery in the modern Western economy where machines can’t quite squeeze human beings out of the equation. That is, until last year, when Microsoft built one that could.

Automatic speech recognition, or ASR, is an area that has gripped the firm’s chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland’s Edinburgh University. “I’d just left China,” he says, remembering the difficulty he had in using his undergraduate knowledge of American English to parse the Scottish brogue of his lecturers. “I wished every lecturer and every professor, when they talked in the classroom, could have subtitles.”

“That’s the thing with transcription technology in general,” says Prenger. “Once the accuracy gets above a certain bar, everyone will probably start doing their transcriptions that way, at least for the first several rounds.” He predicts that, ultimately, automated transcription tools will increase both the supply of and the demand for transcripts. “There could be a virtuous circle where more people expect more of their audio that they produce to be transcribed, because it’s now cheaper and easier to get things transcribed quickly. And so, it becomes the standard to transcribe everything.”
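The accuracy “bar” Prenger describes is conventionally measured as word error rate (WER): the word-level edit distance between the machine transcript and a human reference, divided by the reference length. A minimal sketch (the example sentences below are invented):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference words,
    computed with classic Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown box jumps over lazy dog"
print(round(word_error_rate(ref, hyp), 3))  # 0.222 (2 errors / 9 words)
```

When a system’s WER drops below whatever threshold a given workflow tolerates, the “transcribe everything” default Prenger predicts becomes practical.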

 

 

 

 

 

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings, but that is backed up by a powerful suite of technologies working together to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities will highlight the courses and “streams of content” that help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
      • The SME(s)
      • An application or multiple applications that the SME(s) are using
      • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
      • The ability to annotate on top of the application(s) and point to things w/in the app(s)
      • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
      • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
  • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
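The built-in RSS aggregator above could tap “streams of content” with nothing beyond the standard library. A minimal sketch that parses an inline feed – the feed XML, titles, and URLs here are made-up examples, and a real aggregator would of course fetch feeds over HTTP:

```python
import xml.etree.ElementTree as ET

# A made-up RSS 2.0 feed; a real aggregator would fetch this over HTTP.
FEED_XML = """<rss version="2.0">
  <channel>
    <title>Learning Stream: AI Basics</title>
    <item><title>What is a neural network?</title>
          <link>https://example.com/nn</link></item>
    <item><title>Intro to NLP</title>
          <link>https://example.com/nlp</link></item>
  </channel>
</rss>"""

def stream_items(feed_xml):
    """Yield (title, link) pairs from an RSS 2.0 feed string."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

for title, link in stream_items(FEED_XML):
    print(title, "->", link)
```

Layering the mastery tracking and recommendation features described above on top of such a feed is where the real platform work would lie.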

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Australian start-up taps IBM Watson to launch language translation earpiece — from prnewswire.com
World’s first available independent translation earpiece, powered by AI to be in the hands of consumers by July

Excerpts:

SYDNEY, June 12, 2017 /PRNewswire/ — Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can translate spoken conversations within seconds, making it the first of its kind to hit global markets next month.

Unveiled at last week’s United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland, the Translate One2One earpiece supports translations across English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German and Chinese. Available to purchase today for delivery in July, the earpiece carries a price tag of $179 USD, and is the first independent translation device that doesn’t rely on Bluetooth or Wi-Fi connectivity.

 

Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds.

 

 

From DSC:
How much longer before this sort of technology gets integrated into the videoconferencing and transcription tools used in online-based courses — enabling global learning at a scale never seen before? (Or perhaps NLP-based tools are already being integrated into global MOOCs and the like…not sure.) It would surely allow us to learn from each other across a variety of societies throughout the globe.

 

 

 

The 82 Hottest EdTech Tools of 2017 According to Education Experts — from tutora.co.uk by Giorgio Cassella

Excerpt:

If you work in education, you’ll know there’s a HUGE array of applications, services, products and tools created to serve a multitude of functions in education.

Tools for teaching and learning, parent-teacher communication apps, lesson planning software, home-tutoring websites, revision blogs, SEN education information, professional development qualifications and more.

There are so many companies creating new products for education, though, that it can be difficult to keep up – especially with the massive volumes of planning and marking teachers have to do, never mind finding the time to actually teach!

So how do you know which ones are the best?

Well, as a team of people passionate about education and learning, we decided to do a bit of research to help you out.

We’ve asked some of the best and brightest in education for their opinions on the hottest EdTech of 2017. These guys are the real deal – experts in education, teaching and new tech from all over the world from England to India, to New York and San Francisco.

They’ve given us a list of 82 amazing, tried and tested tools…


From DSC:
The ones that I mentioned that Giorgio included in his excellent article were:

  • AdmitHub – Free, Expert College Admissions Advice
  • Labster – Empowering the Next Generation of Scientists to Change the World
  • Unimersiv – Virtual Reality Educational Experiences
  • Lifeliqe – Interactive 3D Models to Augment Classroom Learning

 


 

 

 

 

59 impressive things artificial intelligence can do today — from businessinsider.com by Ed Newton-Rex

Excerpt:

But what can AI do today? How close are we to that all-powerful machine intelligence? I wanted to know, but couldn’t find a list of AI’s achievements to date. So I decided to write one. What follows is an attempt at that list. It’s not comprehensive, but it contains links to some of the most impressive feats of machine intelligence around. Here’s what AI can do…

 

 

 


Recorded Saturday, February 25th, 2017 and published on Mar 16, 2017


Description:

Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could it present a threat to the very basis of human civilization? The future of artificial intelligence is up for debate, and the Origins Project is bringing together a distinguished panel of experts, intellectuals and public figures to discuss who’s in control. Eric Horvitz, Jaan Tallinn, Kathleen Fisher and Subbarao Kambhampati join Origins Project director Lawrence Krauss.

 

 

 

 

Description:
Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

 

 


(Below emphasis via DSC)

IBM and Ricoh have partnered on a cognitive-enabled interactive whiteboard that uses IBM’s Watson intelligence and voice technologies to support voice commands, note taking, action items, and even translation into other languages.

 

The Intelligent Workplace Solution leverages IBM Watson and Ricoh’s interactive whiteboards to let participants access features by voice. Watson doesn’t just listen; it is an active meeting participant, using real-time analytics to help guide discussions.

Features of the new cognitive-enabled whiteboard solution include:

  • Global voice control of meetings: Once a meeting begins, any employee, whether in-person or located remotely in another country, can easily control what’s on the screen, including advancing slides, all through simple voice commands using Watson’s Natural Language API.
  • Translation of the meeting into another language: The Intelligent Workplace Solution can translate speakers’ words into several other languages and display them on screen or in a transcript.
  • Easy-to-join meetings: With the swipe of a badge the Intelligent Workplace Solution can log attendance and track key agenda items to ensure all key topics are discussed.
  • Ability to capture side discussions: During a meeting, team members can also hold side conversations that are displayed on the same whiteboard.
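Under the hood, the “global voice control” feature above amounts to mapping utterances to meeting actions. A toy illustration of that routing step – the intents and trigger phrases here are invented, and the real solution uses Watson’s Natural Language API with trained classifiers rather than keyword matching:

```python
# Map spoken phrases to meeting actions with naive keyword matching.
# A real system (e.g., Watson's Natural Language API) would use
# trained intent classifiers instead of substring checks.
INTENTS = {
    "next_slide": ["next slide", "advance", "go forward"],
    "prev_slide": ["previous slide", "go back"],
    "translate": ["translate", "in spanish", "in japanese"],
}

def route(utterance):
    """Return the first intent whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return "unknown"

print(route("Watson, please advance to the next slide"))  # next_slide
print(route("Can you say that in Spanish?"))               # translate
```

The hard part in production is exactly what this sketch skips: recognizing speech reliably in a noisy room full of overlapping voices, then resolving ambiguous phrasing into the right action.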

 


From DSC:

Holy smokes!

If you combine the technologies that Ricoh and IBM are using in their new cognitive-enabled interactive whiteboard with what Bluescape is doing — providing 160 acres of digital workspace to foster collaboration, whether you are working remotely or with others in the same physical space — you have one incredibly powerful platform!

#NLP | #AI | #CognitiveComputing | #SmartClassrooms | #LearningSpaces | #Collaboration | #Meetings

 

 


 

 

 


 

AI Market to Grow 47.5% Over Next Four Years — from campustechnology.com by Richard Chang

Excerpt:

The artificial intelligence (AI) market in the United States education sector is expected to grow at a compound annual growth rate of 47.5 percent during the period 2017-2021, according to a new report by market research firm Research and Markets.

 

 

Amazon deepens university ties in artificial intelligence race — by Jeffrey Dastin

Excerpt:

Amazon.com Inc has launched a new program to help students build capabilities into its voice-controlled assistant Alexa, the company told Reuters, the latest move by a technology firm to nurture ideas and talent in artificial intelligence research.

Amazon, Alphabet Inc’s Google and others are locked in a race to develop and monetize artificial intelligence. Unlike some rivals, Amazon has made it easy for third-party developers to create skills for Alexa so it can get better faster – a tactic it now is extending to the classroom.

 

 

The WebMD skill for Amazon’s Alexa can answer all your medical questions — from digitaltrends.com by Kyle Wiggers
WebMD is bringing its wealth of medical knowledge to a new form factor: Amazon’s Alexa voice assistant.

Excerpt:

Alexa, Amazon’s brilliant voice-activated smart assistant, is a capable little companion. It can order a pizza, summon a car, dictate a text message, and flick on your downstairs living room’s smart bulb. But what it couldn’t do until today was tell you whether that throbbing lump on your forearm was something that required medical attention. Fortunately, that changed on Tuesday with the introduction of a WebMD skill that puts the service’s medical knowledge at your fingertips.

 

 


Addendum:

  • How artificial intelligence is taking Asia by storm — from techwireasia.com by Samantha Cheh
    Excerpt:
    Lately it seems as if everyone is jumping onto the artificial intelligence bandwagon. Everyone, from ride-sharing service Uber to Amazon’s logistics branch, is banking on AI being the next frontier in technological innovation, and is investing heavily in the industry.

    That’s likely truest in Asia, where the manufacturing engine which drove China’s growth is now turning its focus to plumbing the AI mine for gold.

    Despite Asia’s relatively low overall investment in AI, the industry is set to grow. Fifty percent of respondents in KPMG’s AI report said their companies had plans to invest in AI or robotic technology.

    Investment in AI is set to drive venture capital investment in China in 2017. Tak Lo, of Hong Kong’s Zeroth, notes there are more mentions of AI in Chinese research papers than there are in the US.

    China, Korea, and Japan collectively account for nearly half the world’s shipments of articulated robots.

     

 

Artificial Intelligence – Research Areas

 

 

 

 

 

 


© 2019 | Daniel Christian