How to Set Up a VR Pilot — by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.


The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.

The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.




The Future of Language Learning: Augmented Reality vs. Virtual Reality — by Denis Hurley


Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.

While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.

I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.



Bose has a plan to make AR glasses — by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that’s all about audio.


“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”

The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”

Bose adds that the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”


Bose is making AR glasses focused on audio, not visuals

Here are some examples Bose gave for how it might be used:

  • For travel, Bose AR could simulate historic events at landmarks as you view them — “so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go.” You could hear a statue make a famous speech as you approach it, or be told which way to turn toward your departure gate while checking in at the airport.
  • Bose AR could translate a sign you’re reading, tell you the word or phrase for what you’re looking at in any language, or explain the story behind the painting you’ve just approached.
  • With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
  • Bose AR would add useful information based on where you look: the forecast when you look up, or information about restaurants on the street when you look down.



The 10 Best VR Apps for Classrooms Using Merge VR’s New Merge Cube


Google Lens arrives on iOS — by Sarah Perez


On the heels of last week’s rollout on Android, Google’s new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.

For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.


The eventual goal is to allow smartphone cameras to understand what they’re seeing across any type of photo and then help you take action on that information if need be – whether that’s calling a business, saving contact information, or just learning about the world on the other side of the camera.



15 Top Augmented Reality (AR) Apps Changing Education — by Steven Wesley




CNN VR App Brings News to Oculus Rift — by Jonathan Nafarrete





Embracing Digital Tools of the Millennial Trade


Thus, millennials are well-acquainted with – if not highly dependent on – the digital tools they use in their personal and professional lives: tools that empower them to connect and collaborate in a way that is immediate and efficient, interactive and self-directed. That is why they expect technology-enhanced education to replicate this user experience in the virtual classroom. And when their expectations fall short or go unmet altogether, millennials are more likely to go in search of alternatives.



From DSC:
There are several solid tools mentioned in this article, and I always appreciate the high level of innovation arising from Susan Aldridge, Marci Powell, and the folks at

After reading the article, the key considerations that come to my mind involve usability and advocating for the students’ perspective. That is, we need to approach things from the student’s/learner’s standpoint — from a usability and user experience standpoint. For example, seamless/single sign-on for each of these tools would be a requirement for implementing them. Otherwise, learners would have to constantly log into a variety of systems and services. Not only is that process time-consuming, but a learner would also need to keep track of additional passwords — and who doesn’t have enough of those to keep track of these days? (I realize there are tools for that, but even those tools require additional time to investigate, set up, and maintain.)

So we need plug-ins for the various CMSs/LMSs that allow for a nice plug-and-play situation here.
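To make the single sign-on point concrete: LMS plug-in standards such as LTI 1.x ride on a signed launch request, so the external tool can trust the identity the LMS asserts without prompting for another password. Here is a minimal sketch of that idea in Python — the parameter names and secret are illustrative, and this is a simplification of the idea rather than the actual LTI signing spec:

```python
import hashlib
import hmac
from urllib.parse import urlencode

def sign_launch(params: dict, shared_secret: str) -> str:
    """Sign the launch parameters with an HMAC over a canonical
    (sorted) encoding, so the tool can verify the LMS sent them."""
    base = urlencode(sorted(params.items()))
    return hmac.new(shared_secret.encode(), base.encode(), hashlib.sha256).hexdigest()

def verify_launch(params: dict, signature: str, shared_secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_launch(params, shared_secret)
    return hmac.compare_digest(expected, signature)

# The LMS signs the launch request with a secret it shares with the tool...
launch = {"user_id": "student-42", "roles": "Learner", "course": "BIO-101"}
sig = sign_launch(launch, "secret-shared-with-lms")

# ...and the plugged-in tool verifies it instead of asking the learner
# to log in yet again.
assert verify_launch(launch, sig, "secret-shared-with-lms")
assert not verify_launch(launch, sig, "wrong-secret")
```

The learner never sees any of this — which is exactly the point: the hand-off between LMS and tool happens in the background.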



From DSC:
Why aren’t we further along with lecture recording within K-12 classrooms?

That is, I as a parent — or much better yet, our kids themselves who are still in K-12 — should be able to go online and access whatever talks/lectures/presentations were given on a particular day. When our daughter is sick and misses several days, wouldn’t it be great for her to be able to go out and see what she missed? Even if we had the time and/or the energy to do so (which we don’t), my wife and I can’t present this content to her very well. We would likely explain things differently — and perhaps incorrectly — thus, potentially muddying the waters and causing more confusion for our daughter.

There should be entry-level recording studios — such as the One Button Studio from Penn State University — in each K-12 school for teachers to record their presentations. At the end of each day, the teacher could put a checkbox next to what he/she was able to cover that day. (No rushing intended here — education is often enough of a runaway train as it is!) That material would then be made visible/available that day as links on an online calendar. Administrators should pay teachers extra money in the summer to record these presentations.

Also, students could use these studios to practice their presentation and communication skills. The process is quick and easy:





I’d like to see an option — ideally via a brief voice-driven Q&A at the start of each session — that would ask the person where they wanted to put the recording when it was done: to a thumb drive, to a previously assigned storage area out on the cloud/Internet, or to both destinations.

Providing automatically generated close captioning would be a great feature here as well, especially for English as a Second Language (ESL) students.




From DSC:
After seeing the article entitled, “Scientists Are Turning Alexa into an Automated Lab Helper,” I began to wonder…might Alexa be a tool to periodically schedule & provide practice tests & distributed practice on content? In the future, will there be “learning bots” that a learner can employ to do such self-testing and/or distributed practice?
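The scheduling core of distributed practice is simple enough that a “learning bot” could build on it directly: push the next review further out after each success, and pull it back in after a failure. Here's a toy Leitner-style sketch — the interval values are made up for illustration, not taken from any particular product:

```python
from datetime import date, timedelta

# Review intervals (in days) for each Leitner "box"; values are illustrative.
INTERVALS = [1, 3, 7, 14, 30]

def next_review(box: int, correct: bool, today: date) -> tuple[int, date]:
    """Move the item up a box on a correct answer, back to box 0 on a miss,
    and return the new box plus the date of the next practice test."""
    box = min(box + 1, len(INTERVALS) - 1) if correct else 0
    return box, today + timedelta(days=INTERVALS[box])

today = date(2018, 2, 1)
box, due = next_review(0, correct=True, today=today)   # promoted: due 3 days out
box, due = next_review(box, correct=False, today=due)  # missed: back to a 1-day interval
```

A voice assistant would only need to wrap this loop: quiz the learner, record right/wrong, and announce (or calendar) the next review date.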



From page 45 of the PDF available here:


Might Alexa be a tool to periodically schedule/provide practice tests & distributed practice on content?




Scientists Are Turning Alexa into an Automated Lab Helper — by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.


Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
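At its core, a skill like Helix maps a spoken intent to a spoken response. The sketch below shows that shape in plain Python — the intent name, slot, and lookup table are invented for illustration (this is not Helix's actual implementation, and a real skill would use the Alexa Skills Kit and a curated chemical database):

```python
# Hypothetical melting-point table; a real skill would query a proper database.
MELTING_POINTS_C = {"sodium chloride": 801, "glucose": 146}

def handle_intent(intent: dict) -> str:
    """Turn an Alexa-style intent payload into a spoken-response string."""
    if intent.get("name") == "MeltingPointIntent":
        compound = intent.get("slots", {}).get("compound", "").lower()
        if compound in MELTING_POINTS_C:
            return (f"The melting point of {compound} is "
                    f"{MELTING_POINTS_C[compound]} degrees Celsius.")
        return f"Sorry, I don't have a melting point for {compound}."
    return "Sorry, I didn't understand that."

reply = handle_intent({"name": "MeltingPointIntent",
                       "slots": {"compound": "Glucose"}})
```

The value for gloved hands is all in the wrapper Amazon provides: speech-to-intent on the way in, text-to-speech on the way out.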


Also see:




“Rise of the machines” — from the January 2018 edition of InAVate magazine
AI is generating lots of buzz in other verticals, but what can AV learn from those? Tim Kridel reports.



From DSC:
Learning spaces are relevant as well in the discussion of AI and AV-related items.


Also in their January 2018 edition, see an incredibly detailed project at the London Business School.


A full-width frosted glass panel sits on the desk surface; above it, fixed in the ceiling, is a Wolfvision VZ-C12 visualiser. This means the teaching staff can write on the (wipe-clean) surface and the text appears directly on two 94-in screens behind them, via Christie short-throw 4,000-lumen laser projectors. When lecturers finish, or fill the screen with text, the image can be saved to the intranet or via USB; simply wipe with a cloth and start again. Not only is the technology inventive, it also allows the teaching staff to remain in face-to-face contact with the students at all times, instead of students having to stare at the back of the lecturer’s head while they write.



Also relevant, see:




Alexa, how can you improve teaching and learning? — by Kate Roddy, with thanks to eduwire for their post on this
Special report: Voice command platforms from Amazon, Google and Microsoft are creating new models for learning in K-12 and higher education — and renewed privacy concerns.


We’ve all seen the commercials: “Alexa, is it going to rain today?” “Hey, Google, turn up the volume.” Consumers across the globe are finding increased utility in voice command technology in their homes. But dimming lights and reciting weather forecasts aren’t the only ways these devices are being put to work.

Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.

The devices are being used to help students with and without disabilities gain a new sense for digital fluency, find library materials more quickly and even promote events on college campuses to foster greater social connection.

Like many technologies, the emerging presence of voice command devices in classrooms and at universities is also raising concerns about student privacy and unnatural dependence on digital tools. Yet, many educators interviewed for this report said the rise of voice command technology in education is inevitable — and welcome.

“One example,” he said, “is how voice dictation helped a student with dysgraphia. Putting the pencil and paper in front of him, even typing on a keyboard, created difficulties for him. So, when he’s able to speak to the device and see his words on the screen, the connection becomes that much more real to him.”

The use of voice dictation has also been beneficial for students without disabilities, Miller added. Through voice recognition technology, students at Taos Academy Charter School are able to experience communication through a completely new medium.




From DSC:
After reviewing the article below, I wondered...if we need to interact with content to learn it…how might mixed reality allow for new ways of interacting with such content? This is especially intriguing when we interact with that content with others as well (i.e., social learning).

Perhaps Mixed Reality (MR) will bring forth a major expansion of how we look at “blended learning” and “hybrid learning.”


Mixed Reality Will Transform Perceptions — by Alexandro Pando

Excerpts (emphasis DSC):

Changing How We Perceive The World One Industry At A Time
Part of the reason mixed reality has garnered this momentum in such a short span of time is that it promises to revolutionize how we perceive the world without necessarily altering our natural perspective. While VR/AR invites you into their somewhat complex worlds, mixed reality analyzes the surrounding real-world environment before projecting an enhanced and interactive overlay. It essentially “mixes” our reality with digitally generated graphical information.

All this, however, pales in comparison to the impact of mixed reality on the storytelling process. While present technologies deliver content in a one-directional manner, from storyteller to audience, mixed reality allows for delivery of content, then interaction between content, creator and other users. This mechanism cultivates a fertile ground for increased contact between all participating entities, ergo fostering the creation of shared experiences. Mixed reality also reinvents the storytelling process. By merging the storyline with reality, viewers are presented with a wholesome experience that’s perpetually indistinguishable from real life.


Mixed reality is without a doubt going to play a major role in shaping our realities in the near future, not just because of its numerous use cases but also because it is the flag bearer of all virtualized technologies. It combines VR, AR and other relevant technologies to deliver a potent cocktail of digital excellence.





The Section 508 Refresh and What It Means for Higher Education — by Martin LaGrow

Excerpts (emphasis DSC):

Higher education should now be on notice: Anyone with an Internet connection can now file a complaint or civil lawsuit, not just students with disabilities. And though Section 508 was previously unclear as to the expectations for accessibility, the updated requirements add specific web standards to adhere to — specifically, the Web Content Accessibility Guidelines (WCAG) 2.0 level AA developed by the World Wide Web Consortium (W3C).

Although WCAG 2.0 has been around since 2008, it was developed by web content providers as a self-regulating tool to create uniformity for web standards around the globe. It was understood to represent best practices but was not enforced by any regulating agency. The Section 508 refresh due in January 2018 changes this, as WCAG 2.0 level AA has been adopted as the standard of expected accessibility. Thus, all organizations subject to Section 508, including colleges and universities, that create and publish digital content — web pages, documents, images, videos, audio — must ensure that they know and understand these standards.

Reacting to the Section 508 Refresh
In a few months, the revised Section 508 standards become enforceable law. As stated, this should not be considered a threat or burden but rather an opportunity for institutions to check their present level of commitment and adherence to accessibility. In order to prepare for the update in standards, a number of proactive steps can easily be taken:

  • Contract a third-party expert partner to review institutional accessibility policies and practices and craft a long-term plan to ensure compliance.
  • Review all public-facing websites and electronic documents to ensure compliance with WCAG 2.0 Level AA standards.
  • Develop and publish a policy to state the level of commitment and adherence to Section 508 and WCAG 2.0 Level AA.
  • Create an accessibility training plan for all individuals responsible for creating and publishing electronic content.
  • Ensure all ICT contracts, ROIs, and purchases include provisions for accessibility.
  • Inform students of their rights related to accessibility, as well as where to address concerns internally. Then support the students with timely resolutions.
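Parts of the website review in the steps above can be automated. As one small illustration — a toy check under stated assumptions, not a compliance tool — the sketch below scans a page’s `<img>` tags for a missing `alt` attribute, which relates to WCAG 2.0 success criterion 1.1.1 (text alternatives). Note that an *empty* `alt=""` is legitimate for purely decorative images, so this check flags only images with no `alt` attribute at all:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """Collect the src of every <img> that has no alt attribute at all.
    (alt="" is allowed for decorative images, so it is not flagged.)"""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

page = '<p><img src="chart.png"><img src="logo.png" alt="University logo"></p>'
audit = AltTextAudit()
audit.feed(page)
# audit.missing now lists the images needing attention: ["chart.png"]
```

A real audit would cover far more (contrast, captions, keyboard navigation, document structure), which is exactly why the third-party review in the first bullet still matters.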

As always, remember that the pursuit of accessibility demonstrates a spirit of inclusiveness that benefits everyone. Embracing the challenge to meet the needs of all students is a noble pursuit, but it’s not just an adoption of policy. It’s a creation of awareness, an awareness that fosters a healthy shift in culture. When this is the approach, the motivation to support all students drives every conversation, and the fear of legal repercussions becomes secondary. This should be the goal of every institution of learning.



Also see:

How to Make Accessibility Part of the Landscape — by Mark Lieberman
A small institution in Vermont caters to students with disabilities by letting them choose the technology that suits their needs.


Accessibility remains one of the key issues for digital learning professionals looking to catch up to the needs of the modern student. At last month’s Online Learning Consortium Accelerate conference, seemingly everyone in attendance hoped to come away with new insights into this thorny concern.

Landmark College in Vermont might offer some guidance. The private institution with approximately 450 students exclusively serves students with diagnosed learning disabilities, attention disorders or autism. Like all institutions, it’s still grappling with how best to serve students in the digital age, whether in the classroom or at a distance. Here’s a glimpse at the institution’s philosophy, courtesy of Manju Banerjee, Landmark’s vice president for educational research and innovation since 2011.



Amazon and Codecademy team up for free Alexa skills training — by Khari Johnson


Amazon and tech training app Codecademy have collaborated to create a series of free courses. Available today, the courses are meant to teach developers as well as beginners how to create skills, the voice apps that interact with Alexa.

Since opening Alexa to third-party developers in 2015, more than 20,000 skills have been made available in the Alexa Skills Store.





© 2017 | Daniel Christian