Reflections on “Inside Amazon’s artificial intelligence flywheel” [Levy]

Inside Amazon’s artificial intelligence flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt (emphasis DSC):

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

 

 

From DSC:
When will we begin to see more mainstream recommendation engines for learning-based materials? With the demand for people to reinvent themselves, such a next generation learning platform can’t come soon enough!

  • Turning control over to learners to create and enhance their own web-based learner profiles, and allowing people to say who can access those profiles.
  • AI-based recommendation engines to help people identify curated, effective digital playlists for what they want to learn about.
  • Voice-driven interfaces.
  • Matching employees to employers.
  • Matching one’s learning preferences (not styles) with the content being presented as one piece of a personalized learning experience.
  • From cradle to grave. Lifelong learning.
  • Multimedia-based, interactive content.
  • Asynchronously and synchronously connecting with others learning about the same content.
  • Online-based tutoring/assistance; remote assistance.
  • Reinvent. Staying relevant. Surviving.
  • Competency-based learning.
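A toy sketch of how such a recommendation engine might rank curated playlists against a learner profile. All of the titles, tags, and function names here are hypothetical illustrations, not an actual platform's API:

```python
def score(profile_tags, playlist_tags):
    """Jaccard overlap between a learner's interest tags and a playlist's tags."""
    p, q = set(profile_tags), set(playlist_tags)
    return len(p & q) / len(p | q) if p | q else 0.0

def recommend(profile_tags, playlists, top_n=3):
    """Return the top-N playlist titles ranked by tag overlap with the profile."""
    ranked = sorted(playlists, key=lambda pl: score(profile_tags, pl["tags"]), reverse=True)
    return [pl["title"] for pl in ranked[:top_n]]

learner = ["python", "data-science", "statistics"]
catalog = [
    {"title": "Intro to Data Science", "tags": ["python", "data-science"]},
    {"title": "Watercolor Basics",      "tags": ["art", "painting"]},
    {"title": "Statistics Refresher",   "tags": ["statistics", "math"]},
]
print(recommend(learner, catalog, top_n=2))
```

A production system would of course weight signals far beyond tags (completion data, peer ratings, employer demand), but the ranking loop would look much the same.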

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

We’re about to embark on a period in American history where career reinvention will be critical, perhaps more so than it’s ever been before. In the next decade, as many as 50 million American workers—a third of the total—will need to change careers, according to McKinsey Global Institute. Automation, in the form of AI (artificial intelligence) and RPA (robotic process automation), is the primary driver. McKinsey observes: “There are few precedents in which societies have successfully retrained such large numbers of people.”

Bill Triant and Ryan Craig

 

Also relevant/see:

Online education’s expansion continues in higher ed with a focus on tech skills — from educationdive.com by James Paterson

Dive Brief:

  • Online learning continues to expand in higher ed with the addition of several online master’s degrees and a new for-profit college that offers a hybrid of vocational training and liberal arts curriculum online.
  • Inside Higher Ed reported the nonprofit learning provider edX is offering nine master’s degrees through five U.S. universities — the Georgia Institute of Technology, the University of Texas at Austin, Indiana University, Arizona State University and the University of California, San Diego. The programs include cybersecurity, data science, analytics, computer science and marketing, and they cost from around $10,000 to $22,000. Most offer stackable certificates, helping students who change their educational trajectory.
  • Former Harvard University Dean of Social Science Stephen Kosslyn, meanwhile, will open Foundry College in January. The for-profit, two-year program targets adult learners who want to upskill, and it includes training in soft skills such as critical thinking and problem solving. Students will pay about $1,000 per course, though the college is waiving tuition for its first cohort.

 


Blackboard, Apple mobile student ID has arrived — from cr80news.com by Andrew Hudson
Mobile Credential officially goes live at launch campuses

Excerpt:

We’ve officially reached the kickoff of Blackboard’s long-standing vision for the mobile student ID. Starting today on the campuses of the University of Alabama, Duke University and the University of Oklahoma, Blackboard, with the aid of Apple, is enabling students to use mobile credentials everywhere their plastic ID card was previously accepted.

[On 10/2/18], for the first time, iPhones and Apple Watches are enabling users to navigate the full range of transactions both on and off campus. At these three launch institutions, students can add their official student ID card to Apple Wallet to make purchases, authenticate for privileges, as well as enable physical access to dorms, rec centers, libraries and academic buildings.

 

Skype chats are coming to Alexa devices — from engadget.com by Richard Lawlor
Voice-controlled internet calls to or from any device with Amazon’s system in it.

Excerpt:

Aside from all of the Alexa-connected hardware, there’s one more big development coming for Amazon’s technology: integration with Skype. Microsoft and Amazon said that voice and video calls via the service will come to Alexa devices (including Microsoft’s Xbox One) with calls that you can start and control just by voice.

 

 

Amazon Hardware Event 2018
From techcrunch.com

 

Echo HomePod? Amazon wants you to build your own — by Brian Heater
One of the bigger surprises at today’s big Amazon event was something the company didn’t announce. After a couple of years of speculation that the company was working on its own version of the Home…

 

 

The long list of new Alexa devices Amazon announced at its hardware event
Everyone’s favorite trillion-dollar retailer hosted a private event today where they continued to…

 

Amazon introduces APL, a new design language for building Alexa skills for devices with screens
Along with the launch of the all-new Echo Show, the Alexa-powered device with a screen, Amazon also introduced a new design language for developers who want to build voice skills that include multimedia…

Excerpt:

Called Alexa Presentation Language, or APL, developers will be able to build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types – including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.

 

From DSC:
This is a great move by Amazon — as NLP and our voices become increasingly important in how we “drive” and utilize our computing devices.

 

 

Amazon launches an Echo Wall Clock, because Alexa is gonna be everywhere — by Sarah Perez

 

 

Amazon’s new Echo lineup targets Google, Apple and Sonos — from engadget.com by Nicole Lee
Alexa, dominate the industry.

The business plan from here is clear: Companies pay a premium to be activated when users pose questions related to their products and services. “How do you cook an egg?” could pull up a Food Network tutorial; “How far is Morocco?” could enable the Expedia app.
Also see how Alexa might be a key piece of smart classrooms in the future:
 

Can we design online learning platforms that feel more intimate than massive? — from edsurge.com by Amy Ahearn

Excerpt:

This presents a challenge and an opportunity: How can we design online learning environments that achieve scale and intimacy? How do we make digital platforms feel as inviting as well-designed physical classrooms?

The answer may be that we need to balance massiveness with miniaturization. If the first wave of MOOCs was about granting unprecedented numbers of students access to high-quality teaching and learning materials, Wave 2 needs to focus on creating a sense of intimacy within that massiveness.

We need to be building platforms that look less like a cavernous stadium and more like a honeycomb. This means giving people small chambers of engagement where they can interact with a smaller, more manageable, and yet still diverse group. We can’t meaningfully listen to the deafening roar of the internet. But we can learn from a collection of people with perspectives different from ours.

 

 

What will it take to get MOOC platforms to begin to offer learning spaces that feel more inviting and intimate? Perhaps there’s a new role that needs to emerge in the online learning ecosystem: a “learning architect” who sits between the engineers and the instructional designers.

 

Instructional Design in Higher Education: Defining an Evolving Field
From OLC Outlook: An Environmental Scan of the Digital Learning Landscape
Elaine Beirne (Dublin City University) and Matthew Romanoski (The University of Arizona)
July 2018

Description:
This white paper provides an overview of the growing field of instructional design in higher education, from why the field is growing to how designers are functioning in their role. It explores why there is a growing demand for designers, who is filling these roles, what the responsibilities of designers are, and how instructional designers are addressing the challenges they face.

From onlinelearningconsortium.org

 

Report: Accessibility in Digital Learning Increasingly Complex — from campustechnology.com by Dian Schaffhauser

Excerpt:

The Online Learning Consortium (OLC) has introduced a series of original reports to keep people in education up to date on the latest developments in the field of digital learning. The first report covers accessibility and addresses both K-12 and higher education. The series is being produced by OLC’s Research Center for Digital Learning & Leadership.

The initial report addresses four broad areas tied to accessibility:

  • The national laws governing disability and access and how they apply to online courses;
  • What legal cases exist to guide online course design and delivery in various educational settings;
  • The issues that emerge regarding online course access that might be unique to higher ed or to K-12, and which ones might be shared; and
  • What support online course designers need to generate accessible courses for learners across the education life span (from K-12 to higher education).

 

 

An AI Bot for the Teacher — with thanks to Karthik Reddy for this resource

Artificial intelligence is the stuff of science fiction – if you are old enough, you will remember those Terminator movies from a good few years ago, where mankind was systematically being wiped out by computers.

The truth is that AI, though not quite at Terminator level yet, is already a fact and something that most of us have encountered already. If you have ever used the virtual assistant on your phone or the Ask Google feature, you have used AI.

Some companies are using it as part of their sales and marketing strategies. An interesting example is Lowe’s Home Improvement, which, instead of chatbots, uses actual robots in its physical stores. These robots are capable of helping customers locate products they’re interested in, taking a lot of the guesswork out of the shopping experience.

Of course, there are a lot of different potential applications for AI that are very interesting. Imagine an AI teaching assistant, for example. It could help grade papers, fact-check, and assist with lesson planning, all to make our harried teachers’ lives a little easier.

Chatbots could be programmed as tutors to help kids better understand core topics when they are struggling, ensuring that they don’t hold the rest of the class up. And for kids who have a real affinity for the subject, a chatbot could help them learn more about what interests them.

It could also help enhance long-distance training. Imagine if your students could get instant answers to basic questions through a simple chatbot. Sure, if they were still not getting it, they would come to you; the chatbot cannot replace a real, live teacher, after all. But it could save you a lot of time and frustration.
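The tutoring chatbot described above can be sketched in a few lines. A real system would use natural language processing rather than exact string matching, and the questions and answers below are invented examples:

```python
# A toy FAQ chatbot: it answers basic questions it recognizes
# and defers to the teacher otherwise.
FAQ = {
    "what is a fraction": "A fraction represents a part of a whole, written as numerator/denominator.",
    "what is a noun": "A noun is a word that names a person, place, thing, or idea.",
}

def tutor_reply(question):
    """Answer from the FAQ when possible; otherwise hand off to the teacher."""
    key = question.lower().strip("?! .")
    if key in FAQ:
        return FAQ[key]
    return "Good question -- let's ask the teacher about that one."

print(tutor_reply("What is a fraction?"))
print(tutor_reply("Why is the sky blue?"))
```

Even this crude lookup illustrates the division of labor: routine questions get instant answers, and only the genuinely hard ones reach the teacher.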

Here, of course, we have only skimmed the surface of what artificial intelligence is capable of. Why not look through this infographic to see how different brands have been using this tech, and see what possible applications of it we might expect?

 

Brands that use AI to enhance marketing (infographic) 2018
From 16best.net with thanks to Karthik Reddy for this resource

 


 


From DSC:
So regardless of what was being displayed on any given screen at the time, once learners were invited to use their devices to share information, a graphical layer would appear on each learner’s mobile device as well as on the image shown on the screens. (The content actually being projected would be pulled back into a muted, 25%-opacity background layer so that the code would “pop” visually.) That layer would let each learner know what code to enter in order to wirelessly share their content up to a particular screen. This could be extra helpful when you have multiple screens in a room.

For folks at Microsoft: I could have said Mixed Reality here as well.


 

#ActiveLearning #AR #MR #IoT #AV #EdTech #M2M #MobileApps
#Sensors #Crestron #Extron #Projection #Epson #SharingContent #Wireless

 

 

A Sneak Peek into Augmented Reality’s Influence on SEO — from semrush.com by Pradeep Chopra

Excerpt:

AR is here to influence how businesses promote their products and services, and also how they optimize for search rankings. It is important to note that AR will impact Search Engine Optimization.

Local SEO Becomes More Critical
Augmented reality makes it possible for users to scan their surroundings with their mobile devices and get information on the businesses in their area. The data includes everything from images to ratings to reviews. AR apps have the capability to provide users with location-specific offers and deals – all in a theatrical AR format.

Apps like Yelp and Wikitude are already providing geo-location based AR experiences.

So, if you were to scan a location with your camera, you would be able to see the details of that business along with its latest reviews, ratings, and offers. This will simplify the experience for those searching from a specific geo-location. You must, therefore, ensure and maintain the quality and freshness of your local listings.

Here are some key aspects that you must take care of…
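The geo-location lookup underpinning such AR local-search experiences can be sketched as follows; the coordinates and business names are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(user_lat, user_lon, listings, radius_km=1.0):
    """Listings within radius_km of the user, nearest first."""
    hits = [(haversine_km(user_lat, user_lon, b["lat"], b["lon"]), b["name"]) for b in listings]
    return [name for dist, name in sorted(hits) if dist <= radius_km]

listings = [
    {"name": "Corner Cafe",  "lat": 40.7130, "lon": -74.0060},
    {"name": "Uptown Books", "lat": 40.7800, "lon": -73.9600},
]
print(nearby(40.7128, -74.0059, listings))  # only the cafe is within 1 km
```

An AR app would then overlay each returned listing's ratings, reviews, and offers at its on-screen position; the local-SEO point is that the quality of those listings is what the user actually sees.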

 

A new JPEG format for virtual reality, drones and self-driving cars — from actu.epfl.ch
The Joint Photographic Experts Group (JPEG), an international committee headed by an EPFL professor, has just unveiled JPEG XS. With this new format, the image-compression process uses less energy, and higher-quality images can be sent with low latency over broadband networks like 5G. JPEG XS will have applications in areas such as virtual reality, augmented reality, space imagery, self-driving cars and professional movie editing.

Excerpt:

Why do virtual reality headsets make users nauseous? One reason is latency, or the almost imperceptible amount of time it takes for a display image to change in response to a user’s head movement. However, the Joint Photographic Experts Group (JPEG) has just introduced a new image compression standard that could resolve this problem. This working group is headed by Touradj Ebrahimi, a professor in EPFL’s School of Engineering (STI).

With JPEG XS, images and videos maintain an extremely high level of quality thanks to a compression process that is simpler and faster – and thus more energy efficient. The compressed files end up being larger, but that’s not a problem thanks to broadband networks such as Wi-Fi and 5G: the aim is to stream the files instead of storing them in smartphones or other devices with limited memory.

This means that you could use your smartphone, tablet or computer to project a high-definition movie or a video game onto a large-screen display almost instantaneously. No cables would be required, and the image quality would be extremely high.
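Some back-of-the-envelope arithmetic shows why these excerpts lean on broadband links: lighter compression means much larger streams. The 10:1 ratio below is an illustrative assumption, not a figure from the JPEG XS specification:

```python
# Streaming bitrate for 4K60 video at a modest compression ratio.
width, height, fps = 3840, 2160, 60   # 4K resolution at 60 frames per second
bytes_per_pixel = 3                   # 8-bit RGB, uncompressed
ratio = 10                            # assumed (illustrative) compression ratio

raw_gbps = width * height * bytes_per_pixel * 8 * fps / 1e9
stream_gbps = raw_gbps / ratio
print(f"uncompressed: {raw_gbps:.1f} Gbit/s, at {ratio}:1 -> {stream_gbps:.2f} Gbit/s")
```

Roughly a gigabit per second is far beyond typical 4G links but within reach of 5G and modern Wi-Fi, which is exactly the trade-off the excerpt describes: bigger files, but low-latency delivery over fast networks.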

 

 

JPEG XS is a new VR video streaming format optimized for 5G and Wi-Fi — from venturebeat.com by Jeremy Horwitz

Excerpt:

Best known for its eponymous and ubiquitous photo standard, the Joint Photographic Experts Group (JPEG) has announced JPEG XS, a new video compression standard designed to stream lossless videos, VR content, and games over wireless networks. Intriguingly, JPEG XS is said to work on current computers with only software updates, while smaller devices will require “next generation” hardware.

Unlike rival video standards, JPEG XS doesn’t attempt maximum compression by using extra processing power or time. It instead presumes that the device will be used on a high-bandwidth 5G cellular or Wi-Fi network and focuses on delivering ultra low latency and superior energy efficiency.

 

Apple’s In-Depth Work on a Next-Gen Mixed Reality Headset is Simply Mind Boggling in Scope — from patentlyapple.com

Excerpt:

In April 2017 Patently Apple posted a report titled “NASA’s Mission Operations Innovation Lead is now a Senior Manager on Apple’s AR Glasses Team.” A year ago we also posted a report titled “Apple’s Augmented Reality Team is bringing in more Specialists to work on their Future Platform.” Apple has certainly gathered a world class team of experts to develop a whole range of next-gen AR/VR and Mixed Reality headsets, smartglasses and more. Earlier today we posted a report titled “Apple Advances their Head Mounted Display Project by adding a new GUI, an External Camera, Gaming & more.” While Apple has been updating some of the features of this headset, we’re still stuck with a 2008 patent image of a headset concept that is somewhat outdated.

 

 

Augmented Reality Kit: Quick Start Guide — from cgcookie.com by Jonathan Gonzalez

Excerpt:

Augmented Reality is an exciting new way to develop games and apps that place 3D objects in real-world space. If you’ve ever played Pokemon Go, then you’re familiar with what Augmented Reality (AR) is. Other popular apps have been sprouting up to make use of AR capabilities for more practical purposes, such as Ikea’s catalog: pick your furniture and see how it looks in your place. Regardless of how you use AR for development, there are three main resources we can use to develop for various AR-capable hardware.

 

Travel to Mars and learn about the Curiosity Rover in VR  — from unimersiv.com

 

From DSC:
This application looks to be very well done and thought out! Wow!

Check out the video entitled “Interactive Ink – Enables digital handwriting.” You may also wonder whether this could be a great medium/method of having to “write things down” for better information processing in our minds, while also producing digital work for easier distribution and sharing!

Wow!  Talk about solid user experience design and interface design! Nicely done.

 

 

Below is an excerpt of the information from Bella Pietsch from anthonyBarnum Public Relations

Imagine a world where users interact with their digital devices seamlessly, and don’t suffer from lag and delayed response time. I work with MyScript, a company whose Interactive Ink tech creates that world of seamless handwritten interactivity by combining the flexibility of pen and paper with the power and productivity of digital processing.

According to a recent forecast, the global handwriting recognition market is valued at a trillion-plus dollars and is expected to grow at an almost 16 percent compound annual growth rate by 2025. To add additional context, the new affordable iPad with stylus support was just released, allowing users to work with the $99 Apple Pencil, which was previously only supported by the iPad Pro.

Check out the demo of Interactive Ink using an Apple Pencil, Microsoft Surface Pen, Samsung S Pen or Google Pixelbook Pen here.

Interactive Ink’s proficiencies are the future of writing and equating. Developed by MyScript Labs, Interactive Ink is a form of digital ink technology that allows ink editing via simple gestures and provides device reflow flexibility. Interactive Ink relies on real-time predictive handwriting recognition, driven by artificial intelligence and neural network architectures.

 

Design Thinking: A Quick Overview — from interaction-design.org by Rikke Dam and Teo Siang

Excerpt:

To begin, let’s have a quick overview of the fundamental principles behind Design Thinking:

  • Design Thinking starts with empathy, a deep human focus, in order to gain insights which may reveal new and unexplored ways of seeing, and courses of action to follow in bringing about preferred situations for business and society.
  • It involves reframing the perceived problem or challenge at hand, and gaining perspectives, which allow a more holistic look at the path towards these preferred situations.
  • It encourages collaborative, multi-disciplinary teamwork to leverage the skills, personalities and thinking styles of many in order to solve multifaceted problems.
  • It initially employs divergent styles of thinking to explore as many possibilities, deferring judgment and creating an open ideations space to allow for the maximum number of ideas and points of view to surface.
  • It later employs convergent styles of thinking to isolate potential solution streams, combining and refining insights and more mature ideas, which pave a path forward.
  • It engages in early exploration of selected ideas, rapidly modelling potential solutions to encourage learning while doing and to gain additional insight into the viability of solutions before too much time or money has been spent.
  • It tests the prototypes that survive the process further to remove any potential issues.
  • It iterates through the various stages, revisiting empathetic frames of mind and then redefining the challenge as new knowledge and insight is gained along the way.
  • It starts off chaotic and cloudy, steamrolling towards points of clarity until a desirable, feasible and viable solution emerges.

 

 

From DSC:
This post includes information about popular design thinking frameworks. I think it’s a helpful posting for those who have heard about design thinking but want to know more about it.

 

 

What is Design Thinking?
Design thinking is an iterative process in which we seek to understand the user, challenge assumptions we might have, and redefine problems in an attempt to identify alternative strategies and solutions that might not be instantly apparent with our initial level of understanding. As such, design thinking is most useful in tackling problems that are ill-defined or unknown.

Design thinking is extremely useful in tackling ill-defined or unknown problems—it reframes the problem in human-centric ways, allows the creation of many ideas in brainstorming sessions, and lets us adopt a hands-on approach in prototyping and testing. Design thinking also involves on-going experimentation: sketching, prototyping, testing, and trying out concepts and ideas. It involves five phases: Empathize, Define, Ideate, Prototype, and Test. The phases allow us to gain a deep understanding of users, critically examine the assumptions about the problem and define a concrete problem statement, generate ideas for tackling the problem, and then create prototypes for the ideas in order to test their effectiveness.

Design thinking is not about graphic design but rather about solving problems through the use of design. It is a critical skill for all professionals, not only designers. Understanding how to approach problems and apply design thinking enables everyone to maximize their contributions in the work environment and create incredible, memorable products for users.

 

Apple Special Event. March 27, 2018.
From Lane Tech College Prep High School, Chicago.

 

From DSC:
While it was great to see more Augmented Reality (AR) apps in education and to see Apple putting more emphasis again on education-related endeavors, I’m doubtful that the iPad is going to be able to dethrone the Chromebook. Apple might have better results with a system that can be both a tablet and a laptop, letting students decide which mode to use and when.

 


Also see:


 

Here are the biggest announcements from Apple’s education event — from engadget.com by Chris Velazco
The new iPad was only the beginning.

 

 

Apple’s Education-Focused iPad Event Pushes Augmented Reality Further into the Classroom — from next.reality.news by Tommy Palladino

Excerpt:

At Apple’s education event in Chicago on Tuesday, augmented reality stood at the head of the class among the tech giant’s new offerings for the classroom.

The company showcased a number of ARKit-enabled apps that promise to make learning more immersive. For example, the AR mode for Froggipedia, expected to launch on March 30 for $3.99 on the App Store, will allow students to view and dissect a virtual frog’s anatomy. And a new update to the GeoGebra app brings ARKit support to math lessons.

Meanwhile, the Free Rivers app from the World Wildlife Fund enables students to explore miniature landscapes and learn about various ecosystems around the world.

In addition, as part of its Everyone Can Code program, Apple has also updated its Swift Playgrounds coding app with ARKit support, enabling students to begin learning to code via an ARKit module, according to a report from The Verge.

 

Apple’s Education Page…which covers what was announced (at least as of 3/30/18)

 

 

Comparing Apple, Google and Microsoft’s education plays — from techcrunch.com by Brian Heater

Excerpt:

[The 3/27/18] Apple event in Chicago was about more than just showing off new hardware and software in the classroom — the company was reasserting itself as a major player in education. The category has long been a lynchpin in Apple’s strategy — something that Steve Jobs held near and dear.

Any ’80s kid will tell you that Apple was a force to be reckoned with — Apple computers were mainstays in computer labs across the country. It’s always been a good fit for a company focused on serving creators, bringing that extra bit of pizzazz to the classroom. In recent years, however, there’s been a major shift. The Chromebook has become the king of the classroom, thanks in no small part to the inexpensive hardware and limited spec requirements.

Based on Google’s early positioning of the category, it appears that the Chromebook’s classroom success even managed to catch its creators off-guard. The company has since happily embraced that success — while Microsoft appears to have shifted its own approach in response to Chrome OS’s success.

Apple’s own responses have been less direct, and today’s event was a reconfirmation of the company’s commitment to the iPad as the centerpiece of its educational play. If Apple can be seen as reacting, it’s in the price of the product. Gone are the days that schools’ entire digital strategy revolved around a bunch of stationary desktops in a dusty old computer lab.

 

 

Apple Should Have Cut iPad Price Further For Schools, Say Analysts
Apple announced another affordable iPad and some cool new educational software today, but it might be too pricey to unseat Chromebooks in many classrooms.

 

 

Apple’s New Low-End iPad For Students Looks To Thwart Google, Microsoft

 

 

What educators think about Apple’s new iPad
Can a bunch of new apps make up for the high price?

 

 

Apple needs more than apps to win over educators
Apple used to be in a lot of classrooms, but are new iPads enough to woo educators?

 

How to Set Up a VR Pilot — from campustechnology.com by Dian Schaffhauser
As Washington & Lee University has found, there is no best approach for introducing virtual reality into your classrooms — just stages of faculty commitment.

Excerpt:

The work at the IQ Center offers a model for how other institutions might want to approach their own VR experimentation. The secret to success, suggested IQ Center Coordinator David Pfaff, “is to not be afraid to develop your own stuff” — in other words, diving right in. But first, there’s dipping a toe.

The IQ Center is a collaborative workspace housed in the science building but providing services to “departments all over campus,” said Pfaff. The facilities include three labs: one loaded with high-performance workstations, another decked out for 3D visualization and a third packed with physical/mechanical equipment, including 3D printers, a laser cutter and a motion-capture system.

 

The Future of Language Learning: Augmented Reality vs Virtual Reality — from medium.com by Denis Hurley

Excerpts:

Here, I would like to stick to the challenges and opportunities presented by augmented reality and virtual reality for language learning.

While the challenge is a significant one, I am more optimistic than most that wearable AR will be available and popular soon. We don’t yet know how Snap Spectacles will evolve, and, of course, there’s always Apple.

I suspect we will see a flurry of new VR apps from language learning startups soon, especially from Duolingo and in combination with their AI chat bots. I am curious if users will quickly abandon the isolating experiences or become dedicated users.

 

 

Bose has a plan to make AR glasses — from cnet.com by David Carnoy
Best known for its speakers and headphones, the company has created a $50 million development fund to back a new AR platform that’s all about audio.

Excerpts:

“Unlike other augmented reality products and platforms, Bose AR doesn’t change what you see, but knows what you’re looking at — without an integrated lens or phone camera,” Bose said. “And rather than superimposing visual objects on the real world, Bose AR adds an audible layer of information and experiences, making every day better, easier, more meaningful, and more productive.”

The secret sauce seems to be the tiny, “wafer-thin” acoustics package developed for the platform. Bose said it represents the future of mobile micro-sound and features “jaw-dropping power and clarity.”

Bose adds the technology can “be built into headphones, eyewear, helmets and more and it allows simple head gestures, voice, or a tap on the wearable to control content.”

 

Bose is making AR glasses focused on audio, not visuals

Here are some examples Bose gave for how it might be used:

  • For travel, the Bose AR could simulate historic events at landmarks as you view them — “so voices and horses are heard charging in from your left, then passing right in front of you before riding off in the direction of their original route, fading as they go.” You could hear a statue make a famous speech when you approach it. Or get told which way to turn towards your departure gate while checking in at the airport.
  • Bose AR could translate a sign you’re reading. Or tell you the word or phrase for what you’re looking at in any language. Or explain the story behind the painting you’ve just approached.
  • With gesture controls, you could choose or change your music with simple head nods indicating yes, no, or next (Bragi headphones already do this).
  • Bose AR would add useful information based on where you look. Like the forecast when you look up or information about restaurants on the street you look down.

 

 

The 10 Best VR Apps for Classrooms Using Merge VR’s New Merge Cube — from edsurge.com

 

Google Lens arrives on iOS — from techcrunch.com by Sarah Perez

Excerpt:

On the heels of last week’s rollout on Android, Google’s  new AI-powered technology, Google Lens, is now arriving on iOS. The feature is available within the Google Photos iOS application, where it can do things like identify objects, buildings, and landmarks, and tell you more information about them, including helpful details like their phone number, address, or open hours. It can also identify things like books, paintings in museums, plants, and animals. In the case of some objects, it can also take actions.

For example, you can add an event to your calendar from a photo of a flyer or event billboard, or you can snap a photo of a business card to store the person’s phone number or address to your Contacts.

 

The eventual goal is to allow smartphone cameras to understand what it is they’re seeing across any type of photo, then helping you take action on that information, if need be – whether that’s calling a business, saving contact information, or just learning about the world on the other side of the camera.

 

 

15 Top Augmented Reality (AR) Apps Changing Education — from vudream.com by Steven Wesley

 

CNN VR App Brings News to Oculus Rift — from vrscout.com by Jonathan Nafarrete

 

Embracing Digital Tools of the Millennial Trade. — from virtuallyinspired.org

Excerpt:

Thus, millennials are well-acquainted with – if not highly dependent on – the digital tools they use in their personal and professional lives. Tools that empower them to connect and collaborate in a way that is immediate and efficient, interactive and self-directed. Which is why they expect technology-enhanced education to replicate this user experience in the virtual classroom. And when their expectations fall short or go unmet altogether, millennials are more likely to go in search of other alternatives.

 

 

From DSC:
There are several solid tools mentioned in this article, and I always appreciate the high-level of innovation arising from Susan Aldridge, Marci Powell, and the folks at virtuallyinspired.org.

After reading the article, the key considerations that come to my mind involve usability and advocating for the students’ perspective. That is, we need to approach these tools from the learner’s standpoint, from a usability and user experience perspective. For example, seamless single sign-on for each of these tools would be a requirement for implementing them. Otherwise, learners would have to be constantly logging into a variety of systems and services. Not only is that process time-consuming, but a learner would need to keep track of additional passwords, and who doesn’t already have enough of those to keep track of these days? (I realize there are tools for that, but even those tools require additional time to investigate, set up, and maintain.)

So plug-ins for the various CMSs/LMSs are needed that allow for a nice plug-and-play situation here.
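One common pattern behind that kind of seamless sign-on is a signed launch: the LMS signs a small payload with a secret it shares with the tool, so the learner never sees a second login screen. Real standards such as LTI define their own message formats; the field names and secret below are hypothetical, shown only to illustrate the idea:

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-secret"  # provisioned out of band, one per tool

def sign_launch(payload: dict) -> str:
    """LMS side: sign a canonical JSON form of the launch payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_launch(payload: dict, signature: str) -> bool:
    """Tool side: accept the launch only if the signature checks out."""
    return hmac.compare_digest(sign_launch(payload), signature)

launch = {"user_id": "learner-42", "course_id": "BIO-101", "role": "student"}
sig = sign_launch(launch)
print(verify_launch(launch, sig))                        # untampered launch
print(verify_launch({**launch, "role": "admin"}, sig))   # tampered payload
```

This is exactly what an LMS plug-in would hide from the learner: the signed hand-off happens behind a single click, with no extra password to remember.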

 

 
© 2024 | Daniel Christian