Most of us are at least vaguely familiar with augmented reality thanks to Pokémon Go sticking Charmanders in our sock drawers and Snapchat letting us spew stars and rainbows from our mouths, but Apple’s iPhone 8 is going to push AR to the point of ubiquity. When the iPhone 8 launches, we’ll all be seeing the world differently.
iPhones are everywhere, so AR will be everywhere!
The iPhone 8 will bring with it iOS 11, and with iOS 11 will come Apple’s AR. Since the iPhone 8 is all but guaranteed to be a best-seller and earlier iPhones and iPads will be widely updated to iOS 11, Apple will have a massive AR platform. Craig Federighi, Apple’s Senior Vice President of Software Engineering, believes it will be “the largest AR platform in the world,” which will lure AR developers en masse.
Apple AR has huge potential in education. Apple has been positioning itself in the education world for years, with programs like iTunes U and iBooks, as well as efforts to get iPads into classrooms. AR already has major prospects in education, with the ability to make museum exhibits interactive and to put a visually explorable world in front of users.
[India] The widening gap between the skills required by businesses and the know-how of a large number of engineering students got Raman Talwar started on his entrepreneurial journey.
…
Delhi-based Simulanis harnesses AR and VR technology to help companies across industries — pharmaceuticals, auto, FMCG and manufacturing — train their staff. It continues to work in the engineering education sector and has developed applications that help students visualise challenging subjects and concepts.
“Our products help students and trainees learn difficult concepts easily and interactively through immersive AR-VR and 3-D gamification methods,” says Talwar. Simulanis’ offerings include an AR learning platform, Saral, and a gamified learning platform, Protocol.
There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.
To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, “what the heck is the Korean word for that?”
The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the promise of augmented reality, we must find a way to fill that view with useful, contextual information. Of course, creating contextual, valuable information to fill a space as massive as planet Earth is a daunting task. Machine learning seems to be one solution many are moving toward.
Tokyo-based web developer Frances Ng released a video on Twitter showing off her first experiments with ARKit and Core ML, Apple’s machine learning framework. As you can see in the GIFs below, her mobile device is used to recognize a few objects around her room and then display the names of the identified objects.
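The flow in Ng’s demo (classify what the camera sees, then translate the label) can be sketched in a few lines. This is a hedged, framework-free sketch, not her actual ARKit/Core ML code: `classify_pixels` is a hypothetical stand-in for a Core ML image classifier, and the translation table is a toy stand-in for a real translation service.

```python
# Sketch of the classify-then-translate loop from the demo.
# classify_pixels is a stand-in for a CoreML/Vision image classifier;
# TRANSLATIONS is a toy stand-in for a real translation API.

TRANSLATIONS = {
    ("lamp", "ko"): "램프",
    ("lamp", "es"): "lámpara",
    ("laptop", "ko"): "노트북",
    ("laptop", "es"): "portátil",
}

def classify_pixels(pixels):
    """Pretend classifier: returns (label, confidence).

    A real app would run a trained model here instead of this
    brightness heuristic, which exists only to make the sketch runnable.
    """
    brightness = sum(pixels) / len(pixels)
    return ("lamp", 0.92) if brightness > 128 else ("laptop", 0.87)

def label_in_language(pixels, lang):
    label, confidence = classify_pixels(pixels)
    translated = TRANSLATIONS.get((label, lang), label)  # fall back to English
    return {"label": label, "translated": translated, "confidence": confidence}

print(label_in_language([200, 210, 190], "ko"))
```

The real app replaces the two stubs with a Core ML model and a translation lookup, but the tap-to-identify-to-translate pipeline is exactly this shape.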
This is not a tutorial or a comprehensive technical guide (many of those already exist), but rather a way to think about WebVR and acquaint yourself with what it is, exactly, and how best to approach it from scratch. If you’ve been doing WebVR or 3D programming for a while, this article is most certainly not for you. If you’ve been curious about that stuff and want to know how to join the party, read on!
Using a smartphone, users will be able to see and interact with computer-generated people and scenes from the past — overlaid on top of the very real and present-day Alamo. The app will also show the Alamo as it was at different points in history, and tell the story of the historic battle through the perspectives of the people (like Crockett) who were there. The app includes extra features users can buy, much like Pokémon Go.
“We’re making this into a virtual time machine so that if I’m standing on this spot and I look at, oh well there’s Davy Crockett, then I can go back a century and I can see the mission being built,” Alamo Reality CEO Michael McGar said. The app will allow users to see the Alamo not only as it was in 1836, but as it was before and after, McGar said.
“We’re developing a technology that’s going to be able to span across generations to tell a story”
Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, gyroscope, accelerometer, and a Softpot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.
When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.
Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.
MEDFORD, Mass. — Amory Kahan, 7, wanted to know when it would be snack time. Harvey Borisy, 5, complained about a scrape on his elbow. And Declan Lewis, 8, was wondering why the two-wheeled wooden robot he was programming to do the Hokey Pokey wasn’t working. He sighed, “Forward, backward, and it stops.”
Declan tried it again, and this time the robot shook back and forth on the gray rug. “It did it!” he cried. Amanda Sullivan, a camp coordinator and a postdoctoral researcher in early childhood technology, smiled. “They’ve been debugging their Hokey Pokeys,” she said.
The children, at a summer camp last month run by the Developmental Technologies Research Group at Tufts University, were learning typical kid skills: building with blocks, taking turns, persevering through frustration. They were also, researchers say, learning the skills necessary to succeed in an automated economy.
Technological advances have rendered an increasing number of jobs obsolete in the last decade, and researchers say parts of most jobs will eventually be automated. What the labor market will look like when today’s young children are old enough to work is perhaps harder to predict than at any time in recent history. Jobs are likely to be very different, but we don’t know which will still exist, which will be done by machines and which new ones will be created.
UNIVERSITY PARK, Pa. — Penn State World Campus is using 360-degree videos and virtual reality for the first time with the goal of improving the educational experience for online learners.
The technology has been implemented in the curriculum of a graduate-level special education course in Penn State’s summer semester. Students can use a VR headset to watch 360-degree videos on a device such as a smartphone.
The course, Special Education 801, focuses on how teachers can respond to challenging behaviors, and the 360-degree videos place students in a classroom where they see an instructor explaining strategies for arranging the classroom in ways best-suited for the learning activity. The videos were produced using a 360-degree video camera and uploaded into the course in just a few days.
In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of SLAM tech will likely end up on top.
SLAM is a computer vision technique that captures visual data from the physical world as a set of points, giving the machine a way to make sense of what it sees. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.
Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. That understanding can be applied in different scenarios, such as robotics, self-driving cars, AI and, of course, augmented reality.
The simplest form of understanding from this technology is recognizing walls, barriers and floors. Right now, most AR SLAM technologies like ARKit only use floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment well enough to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
The company with the most complete SLAM database will likely be the winner. Such a database would, metaphorically speaking, give these giants an eye on the world: Facebook could tag and locate your photos just by analyzing the images, Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses, and your self-driving car could navigate itself with nothing more than visual data.
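The floor-recognition step described above can be made concrete with a toy version: given the 3D feature points a SLAM system produces, find the dominant horizontal plane by voting on candidate floor heights. This is a simplified, RANSAC-style sketch of the idea, not ARKit’s or Tango’s actual algorithm; the synthetic scene and all names are illustrative.

```python
import random

def estimate_floor_height(points, threshold=0.05, iterations=100, seed=42):
    """RANSAC-style search for the dominant horizontal plane (the floor).

    points: list of (x, y, z) feature points from a SLAM system, z = height.
    Returns the candidate z value with the most inliers within `threshold`.
    """
    rng = random.Random(seed)
    best_z, best_inliers = None, -1
    for _ in range(iterations):
        candidate = rng.choice(points)[2]   # hypothesize a floor height
        inliers = sum(1 for (_, _, z) in points if abs(z - candidate) <= threshold)
        if inliers > best_inliers:
            best_z, best_inliers = candidate, inliers
    return best_z

# Synthetic scene: 60 points near the floor (z ≈ 0), 20 on a table (z ≈ 0.7).
gen = random.Random(0)
scene = [(gen.random(), gen.random(), gen.gauss(0.0, 0.01)) for _ in range(60)]
scene += [(gen.random(), gen.random(), gen.gauss(0.7, 0.01)) for _ in range(20)]
print(f"estimated floor height: {estimate_floor_height(scene):.2f}")
```

The voting correctly prefers the larger cluster of points, which is why floor detection works even in cluttered rooms: the floor usually contributes the most feature points at a single height.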
A new experiment from Google is turning imagery from the company’s Street View service into impressive digital photographs using nothing but artificial intelligence (AI).
Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada’s and California’s national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.
The idea is to “mimic the workflow of a professional photographer,” and to do so Google is relying on so-called generative adversarial networks (GAN), which essentially pit two neural networks against one another.
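The “two neural networks pitted against one another” idea can be shown with a deliberately tiny sketch: a two-parameter generator tries to produce numbers that look like they came from the real distribution, while a logistic discriminator tries to tell real from generated. This is an illustrative toy of the adversarial setup only, not Google’s model (which operates on Street View imagery), and a toy this small is not guaranteed to converge cleanly.

```python
import math, random

rng = random.Random(1)

def sigmoid(u):
    u = max(-60.0, min(60.0, u))    # clip to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: numbers near 4.0. The generator g(z) = w*z + b starts far away.
w, b = 1.0, 0.0        # generator parameters
a, c = 0.1, 0.0        # discriminator d(x) = sigmoid(a*x + c)
lr = 0.05

for _ in range(2000):
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    x = rng.gauss(4.0, 0.5)              # a real sample
    y = w * rng.gauss(0.0, 1.0) + b      # a generated ("fake") sample
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * y + c)
    a += lr * ((1 - d_real) * x - d_fake * y)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.gauss(0.0, 1.0)
    y = w * z + b
    grad_y = (1 - sigmoid(a * y + c)) * a   # gradient of log d(y) w.r.t. y
    w += lr * grad_y * z
    b += lr * grad_y

fake_mean = sum(w * rng.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print("generated samples now center around", round(fake_mean, 1))
```

The alternating updates are the whole trick: each network’s loss is the other’s gain, and in a full GAN both sides are deep networks trained on images rather than single numbers.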
More Than Just Cool — from insidehighered.com by Nick Roll
Virtual and augmented realities make headway in courses on health care, art history and social work.
Excerpt:
When Glenn Gunhouse visits the Pantheon, you would think that the professor, who teaches art and architecture history, wouldn’t be able to keep his eyes off the Roman temple’s columns, statues or dome. But there’s something else that always catches his eye: the jaws of the tourists visiting the building, and the way they all inevitably drop.
“Wow.”
There’s only one other way that Gunhouse has been able to replicate that feeling of awe for his students short of booking expensive plane tickets to Italy. Photos, videos and even three-dimensional walk-throughs on a computer screen don’t do it: It’s when his students put on virtual reality headsets loaded with images of the Pantheon.
…nursing schools are using virtual reality or augmented reality to bring three-dimensional anatomy illustrations off of two-dimensional textbook pages.
Facebook is set to reveal a standalone Oculus virtual reality headset sometime later this year, Bloomberg reports, with a ship date of sometime in 2018. The headset will work without requiring a tethered PC or smartphone, according to the report, and will be branded with the Oculus name around the world, except in China, where it’ll carry Xiaomi trade dress and run some Xiaomi software as part of a partnership that extends to manufacturing plans for the device.
Facebook Inc. is taking another stab at turning its Oculus Rift virtual reality headset into a mass-market phenomenon. Later this year, the company plans to unveil a cheaper, wireless device that it is betting will popularize VR the way Apple did the smartphone.
Grush: Then what are some of the implications you could draw from metrics like that one?
Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”
While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.
Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”
A side note from DSC: I began working on this vision prior to 2010…but I didn’t officially document it until 2012.
Learning from the Living [Class] Room:
A global, powerful, next generation learning platform
What does the vision entail?
A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery. Once mastered, the system will ask the learner whether they still want to receive that particular stream of content.
A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
(Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
(Potentially) Direct access to popular job search sites
(Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
(Potentially) Integration with one-on-one tutoring services
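The “quiz until mastered, then offer to unsubscribe” behavior described for the content streams above can be sketched as a small state machine. A Leitner-style doubling interval stands in for whatever spaced-repetition schedule the real platform would use; the class and its thresholds are hypothetical, not part of any existing product.

```python
class TopicStream:
    """Tracks a learner's progress on one stream of content.

    Simple Leitner-style rule: each correct quiz doubles the review
    interval; a miss resets it. Three consecutive correct answers
    count as mastery, after which the platform would ask the learner
    whether to keep the subscription.
    """
    MASTERY_STREAK = 3

    def __init__(self, topic):
        self.topic = topic
        self.interval_days = 1
        self.streak = 0
        self.mastered = False

    def record_quiz(self, correct):
        if correct:
            self.streak += 1
            self.interval_days *= 2     # spaced repetition: back off
        else:
            self.streak = 0
            self.interval_days = 1      # relearn: quiz again soon
        if self.streak >= self.MASTERY_STREAK:
            self.mastered = True
        return self

stream = TopicStream("machine learning basics")
for result in [True, False, True, True, True]:
    stream.record_quiz(result)
print(stream.mastered, stream.interval_days)
```

A real implementation would use a researched schedule such as SM-2, but the essential loop — widen the interval on success, reset on failure, stop the stream at mastery — is this small.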
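The “tracks learning via Blockchain-based technologies” item above boils down to an append-only, tamper-evident record of completed modules. A minimal hash-chained ledger (a sketch of the underlying idea, not any particular blockchain product) shows why such a learner profile is hard to falsify:

```python
import hashlib, json

def add_record(chain, module, grade):
    """Append a completed module to a learner's tamper-evident profile."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"module": module, "grade": grade, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"module": rec["module"], "grade": rec["grade"], "prev": prev_hash}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

profile = []
add_record(profile, "Intro to Data Analysis", "A")
add_record(profile, "Spreadsheet Modeling", "B+")
print(verify(profile))          # intact chain
profile[0]["grade"] = "A+"      # tamper with an earlier record
print(verify(profile))          # chain no longer verifies
```

Because each record’s hash covers the previous record’s hash, rewriting any past entry invalidates everything after it — which is exactly the property an employer checking a learner’s web-based profile would rely on.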
Addendum from DSC (regarding the resource mentioned below): Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”
The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.
Holographic storytelling — from jwtintelligence.com
The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies. New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book. Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1,000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.
Apple’s entry into augmented reality is gathering pace at an amazing rate, says one of its vice-presidents visiting Australia.
In an interview with The Australian yesterday, Apple vice-president of product marketing Greg “Joz” Joswiak said the enthusiasm of Apple’s development community building augmented reality (AR) applications had been “unbelievable”.
“They’ve built everything from virtual tape measures (to) ballerinas made out of wood dancing on floors. It’s absolutely incredible what people are doing in so little time.”
He said in the commercial space, AR applications would evolve for shopping, furniture placement, education, training and services.
Excerpt: Imagine being surrounded by a world of ghosts, things that aren’t there unless you look hard enough, and in the right way. With augmented reality technology, that’s possible—and museums are using it to their advantage. With augmented reality, museums are superimposing the virtual world right over what’s actually in front of you, bringing exhibits and artifacts to life in new ways.
These five spots are great examples of how augmented reality is enhancing the museum experience.
Top trends from InfoComm 2017 — from inavateonthenet.net
AV over IP and huddle rooms are two key takeaways from InfoComm as Paul Milligan wraps up the 2017 show.
Learning from the Living [Class] Room: A vision for a global, powerful, next generation learning platform
By Daniel Christian
NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.
I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.
Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:
A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
A need to display multiple things going on at once, such as:
The SME(s)
An application or multiple applications that the SME(s) are using
Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
The ability to annotate on top of the application(s) and point to things w/in the app(s)
Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)
This new learning platform will also feature:
Voice-based commands to drive the system (via Natural Language Processing (NLP))
Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
Chatbots
For learning how to use the system
For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
For asking questions within a course
As many profiles as needed per household
(Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
(Optional) Voice recognition to efficiently launch the desired profile
(Optional) Facial recognition to efficiently launch the desired profile
(Optional) Upon system launch, to immediately return to where the learner previously left off
The capability of the webcam to recognize objects and bring up relevant resources for that object
A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
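The built-in RSS aggregator item above can be sketched with the standard library alone: parse a feed and keep only the items matching the learner’s subscribed topics. The inline feed XML and the naive substring matching are toy stand-ins; a real aggregator would fetch live feeds, handle RSS’s many dialects, and match topics far more intelligently.

```python
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Learning Stream</title>
  <item><title>Intro to SLAM for AR</title><link>https://example.com/slam</link></item>
  <item><title>Baking sourdough</title><link>https://example.com/bread</link></item>
  <item><title>ARKit plane detection tips</title><link>https://example.com/arkit</link></item>
</channel></rss>"""

def curate(feed_xml, topics):
    """Return (title, link) pairs whose titles mention any subscribed topic.

    Matching is deliberately naive (case-insensitive substring); a real
    platform would use tagging or semantic matching instead.
    """
    root = ET.fromstring(feed_xml)
    matches = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        if any(t.lower() in title.lower() for t in topics):
            matches.append((title, item.findtext("link", "")))
    return matches

for title, link in curate(FEED, ["AR", "SLAM"]):
    print(title, "->", link)
```

Filtering at the aggregator is what turns a firehose of feeds into the curated “streams of content” the platform description envisions.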
In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.
Likely players:
Amazon – personal assistance via Alexa
Apple – personal assistance via Siri
Google – personal assistance via Google Assistant; language translation
Facebook – personal assistance via M
Microsoft – personal assistance via Cortana; language translation
IBM Watson – cognitive computing; language translation