Want to learn a new language? With this AR app, just point & tap — from fastcodesign.com by Mark Wilson
A new demo shows how augmented reality could redefine apps as we know them.

Excerpt:

There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.

To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, “what the heck is the Korean word for that?”
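The demo's two-step flow — recognize an object, then look up its name in another language — can be sketched in a few lines. The real app runs Apple's Core ML on-device; the classifier and the translation table below are stand-in stubs so the lookup logic is runnable anywhere.

```python
# Hypothetical label -> translations table; a production app might call a
# translation API or ship a larger bundled dictionary instead.
TRANSLATIONS = {
    "lamp": {"ko": "램프", "ja": "ランプ", "es": "lámpara"},
    "laptop": {"ko": "노트북", "ja": "ノートパソコン", "es": "portátil"},
}

def classify(image):
    """Stub standing in for a Core ML-style image classifier."""
    return image["label"]  # pretend the model recognized the object

def identify_and_translate(image, language):
    """Recognize the object in `image`, then translate its label.
    Falls back to the English label if no translation is known."""
    label = classify(image)
    translated = TRANSLATIONS.get(label, {}).get(language, label)
    return label, translated

label, word = identify_and_translate({"label": "lamp"}, "ko")
print(label, word)  # lamp 램프
```

In the actual app the `classify` step would be a Vision/Core ML request against the camera frame the user tapped; everything else is ordinary dictionary lookup.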


Also see:

Apple ARKit & Machine Learning Come Together Making Smarter Augmented Reality — from next.reality.news by Jason Odom

Excerpt:

The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the promise of augmented reality, we must find a way to fill that view with useful and contextual information. Of course, creating contextual, valuable information to fill the massive space that is planet Earth is a daunting task to take on. Machine learning seems to be one solution many are moving toward.

Tokyo-based web developer Frances Ng released a video on Twitter showing off her first experiments with ARKit and Core ML, Apple’s machine learning framework. As the GIFs below show, her mobile device recognizes a few objects around her room and then displays the names of the identified objects.


Learn the skills and resources you need to master virtual reality — from vudream.com by Mark Metry

Excerpt:

[From] Tee Jia Hen, CEO of VRcollab
In my opinion, there are four specializations for VR content professionals:

  1. VR native app development
  2. Cinematic VR creation
  3. Photogrammetry
  4. VR web development

Also see:

Getting Started with WebVR — from virtualrealitypop.com by Michael Hazani

Excerpt:

This is not a tutorial or a comprehensive, thorough technical guide — many of those already exist — but rather a way to think about WebVR and acquaint yourself with what it is, exactly, and how best to approach it from scratch. If you’ve been doing WebVR or 3D programming for a while, this article is most certainly not for you. If you’ve been curious about that stuff and want to know how to join the party — read on!


Davy Crockett to give tours of Alamo in new augmented reality app — from mysanantonio.com by Samantha Ehlinger

Excerpt:

Using a smartphone, users will be able to see and interact with computer-generated people and scenes from the past — overlaid on top of the very real and present-day Alamo. The app will also show the Alamo as it was at different points in history and tell the story of the historic battle through the different perspectives of the people (like Crockett) who were there. The app includes extra features users can buy, much like Pokémon Go.

“We’re making this into a virtual time machine so that if I’m standing on this spot and I look at, oh well there’s Davy Crockett, then I can go back a century and I can see the mission being built,” Alamo Reality CEO Michael McGar said. The app will allow users to see the Alamo not only as it was in 1836, but as it was before and after, McGar said.


“We’re developing a technology that’s going to be able to span across generations to tell a story”

— Lane Traylor


Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution is a ring-shaped controller worn on the user’s finger, which he calls Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make augmented reality on iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and developers have been releasing new concepts and demos ever since.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.


VR Is the Fastest-Growing Skill for Online Freelancers — from bloomberg.com by Isabel Gottlieb
Workers who specialize in artificial intelligence also saw big jumps in demand for their expertise.

Excerpt:

Overall, tech-related skills accounted for nearly two-thirds of Upwork’s list of the 20 fastest-growing skills.

Also see:


How to Prepare Preschoolers for an Automated Economy — from nytimes.com by Claire Miller and Jess Bidgood

Excerpt:

MEDFORD, Mass. — Amory Kahan, 7, wanted to know when it would be snack time. Harvey Borisy, 5, complained about a scrape on his elbow. And Declan Lewis, 8, was wondering why the two-wheeled wooden robot he was programming to do the Hokey Pokey wasn’t working. He sighed, “Forward, backward, and it stops.”

Declan tried it again, and this time the robot shook back and forth on the gray rug. “It did it!” he cried. Amanda Sullivan, a camp coordinator and a postdoctoral researcher in early childhood technology, smiled. “They’ve been debugging their Hokey Pokeys,” she said.

The children, at a summer camp last month run by the Developmental Technologies Research Group at Tufts University, were learning typical kid skills: building with blocks, taking turns, persevering through frustration. They were also, researchers say, learning the skills necessary to succeed in an automated economy.

Technological advances have rendered an increasing number of jobs obsolete in the last decade, and researchers say parts of most jobs will eventually be automated. What the labor market will look like when today’s young children are old enough to work is perhaps harder to predict than at any time in recent history. Jobs are likely to be very different, but we don’t know which will still exist, which will be done by machines and which new ones will be created.


Penn State World Campus implements 360-degree videos in online courses — from news.psu.edu by Mike Dawson
Videos give students virtual-reality experiences; leaders hopeful for quick expansion

Excerpt:

UNIVERSITY PARK, Pa. — Penn State World Campus is using 360-degree videos and virtual reality for the first time with the goal of improving the educational experience for online learners.

The technology has been implemented in the curriculum of a graduate-level special education course in Penn State’s summer semester. Students can use a VR headset to watch 360-degree videos on a device such as a smartphone.

The course, Special Education 801, focuses on how teachers can respond to challenging behaviors, and the 360-degree videos place students in a classroom where they see an instructor explaining strategies for arranging the classroom in ways best suited to the learning activity. The videos were produced using a 360-degree video camera and uploaded into the course in just a few days.


How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of it will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points, building up an understanding of the scene for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. This understanding can be used in different scenarios such as robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies like ARKit only use floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment well enough to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
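The floor-recognition step described above can be sketched in miniature: given a cloud of SLAM feature points, find the dominant horizontal surface and label the points lying on it. This is a deliberately simplified illustration — ARKit's actual plane detection is far more sophisticated — using a height histogram as a stand-in for real plane fitting.

```python
# Toy floor detection over a 3-D point cloud: the floor is taken to be the
# height bin that holds the most feature points.
from collections import Counter

def find_floor(points, bin_size=0.05):
    """Return (floor_height, floor_points) for (x, y, z) points,
    where y is the vertical axis, in meters."""
    bins = Counter(round(y / bin_size) for _, y, _ in points)
    floor_bin, _ = bins.most_common(1)[0]       # most-populated height bin
    floor_height = floor_bin * bin_size
    floor_points = [p for p in points
                    if abs(p[1] - floor_height) < bin_size / 2]
    return floor_height, floor_points

# Synthetic feature points: most lie near y = 0 (the floor); a few belong
# to objects above it (a table edge, a lamp).
cloud = [(0.1, 0.0, 0.2), (0.5, 0.01, 0.8), (0.9, -0.01, 0.3),
         (0.2, 0.0, 0.5), (0.4, 0.72, 0.4), (0.6, 1.1, 0.9)]
height, floor = find_floor(cloud)
```

A mesh-building system like Tango goes much further — clustering the remaining points into vertical planes (walls) and free-standing objects — but the basic move is the same: group raw points into surfaces the app can anchor content to.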

The company with the most complete SLAM database will likely be the winner. This database will allow these giants to metaphorically have an eye on the world, so, for example, Facebook could tag and locate your photo just by analyzing the image, or Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.


Google is turning Street View imagery into pro-level landscape photographs using artificial intelligence — from businessinsider.com by Edoardo Maggio

Excerpt:

A new experiment from Google is turning imagery from the company’s Street View service into impressive digital photographs using nothing but artificial intelligence (AI).

Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada’s and California’s national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.

The idea is to “mimic the workflow of a professional photographer,” and to do so Google is relying on so-called generative adversarial networks (GANs), which essentially pit two neural networks against one another.
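The "pit two networks against one another" idea can be shown numerically. Below is the standard adversarial objective from the original GAN formulation — not Google's actual system — with the discriminator and generator reduced to the scores they exchange, just to show that the two losses pull in opposite directions.

```python
# Bare-bones sketch of the GAN objective: the discriminator D scores how
# "real" an input looks (a probability), while the generator G tries to
# produce inputs that D scores as real.
import math

def d_loss(d_real, d_fake):
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator wants the discriminator to score its fakes as real.
    return -math.log(d_fake)

# When D is confident (real scored 0.9, fake scored 0.1), D's loss is small
# and G's loss is large; as G improves (fake scored 0.5), G's loss drops
# while D's loss rises.
confident = (d_loss(0.9, 0.1), g_loss(0.1))
fooled = (d_loss(0.6, 0.5), g_loss(0.5))
```

In the photography experiment, one network proposes crops and post-processing for a Street View panorama while the other judges whether the result looks professionally made; training drives both to improve until the edits pass muster.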


See also:

Using Deep Learning to Create Professional-Level Photographs — from research.googleblog.com by Hui Fang, Software Engineer, Machine Perception


More Than Just Cool? — from insidehighered.com by Nick Roll
Virtual and augmented realities make headway in courses on health care, art history and social work.

Excerpt:

When Glenn Gunhouse visits the Pantheon, you would think that the professor, who teaches art and architecture history, wouldn’t be able to keep his eyes off the Roman temple’s columns, statues or dome. But there’s something else that always catches his eye: the jaws of the tourists visiting the building, and the way they all inevitably drop.

“Wow.”

There’s only one other way that Gunhouse has been able to replicate that feeling of awe for his students short of booking expensive plane tickets to Italy. Photos, videos, and even three-dimensional walk-throughs on a computer screen don’t do it: it’s when his students put on virtual reality headsets loaded with images of the Pantheon.


…nursing schools are using virtual reality or augmented reality to bring three-dimensional anatomy illustrations off of two-dimensional textbook pages.


Also see:

Oculus reportedly planning $200 standalone wireless VR headset for 2018 — from techcrunch.com by Darrell Etherington

Excerpt:

Facebook is set to reveal a standalone Oculus virtual reality headset sometime later this year, Bloomberg reports, with a ship date of sometime in 2018. The headset will work without requiring a tethered PC or smartphone, according to the report, and will be branded with the Oculus name around the world, except in China, where it’ll carry Xiaomi trade dress and run some Xiaomi software as part of a partnership that extends to manufacturing plans for the device.


Facebook Inc. is taking another stab at turning its Oculus Rift virtual reality headset into a mass-market phenomenon. Later this year, the company plans to unveil a cheaper, wireless device that the company is betting will popularize VR the way Apple did the smartphone.

Source

© 2017 | Daniel Christian