Want to learn a new language? With this AR app, just point & tap — from fastcodesign.com by Mark Wilson
A new demo shows how augmented reality could redefine apps as we know them.

Excerpt:

There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.

To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, “what the heck is the Korean word for that?”

Also see:

Apple ARKit & Machine Learning Come Together Making Smarter Augmented Reality — from next.reality.news by Jason Odom

Excerpt:

The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the potential promise of augmented reality, we must find a way to fill that view with useful, contextual information. Of course, creating valuable, contextual information to fill a space as vast as planet Earth is a daunting task to take on. Machine learning seems to be one solution many are moving toward.

Tokyo-based web developer Frances Ng released a video on Twitter showing off her first experiments with ARKit and Core ML, Apple’s machine learning framework. As you can see in the GIFs below, her mobile device recognizes a few objects around her room and then displays the names of the identified objects.
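
The recognize-then-translate loop these experiments demonstrate can be sketched in a few lines. The snippet below is a hypothetical illustration, not Frances Ng's code: `classify_frame` stands in for a Core ML-style image classifier (stubbed out so the logic runs anywhere), and the tiny translation table replaces a real dictionary or translation service.

```python
# Hypothetical sketch of the recognize-then-translate flow described above.
# A real app would feed live camera frames to a Core ML model instead of
# this stub, and would look translations up via a proper service.

# Tiny label -> translation table standing in for a real translation API.
TRANSLATIONS = {
    "lamp":   {"ja": "ランプ", "ko": "램프", "es": "lámpara"},
    "laptop": {"ja": "ノートパソコン", "ko": "노트북", "es": "portátil"},
}

def classify_frame(frame):
    """Stub classifier: return (label, confidence) for a camera 'frame'."""
    return frame.get("label", "unknown"), frame.get("confidence", 0.0)

def label_and_translate(frame, language, min_confidence=0.5):
    """Identify the object in view and translate its name, as in the demo."""
    label, confidence = classify_frame(frame)
    if confidence < min_confidence:
        return None  # suppress low-confidence guesses
    return label, TRANSLATIONS.get(label, {}).get(language, label)
```

Tapping a recognized lamp with Korean selected would yield `("lamp", "램프")`; labels missing from the table simply fall back to the English word.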

Davy Crockett to give tours of Alamo in new augmented reality app — from mysanantonio.com by Samantha Ehlinger

Excerpt:

Using a smartphone, users will be able to see and interact with computer-generated people and scenes from the past — overlaid on top of the very real, present-day Alamo. The app will also show the Alamo as it was at different points in history, and tell the story of the historic battle through the perspectives of the people (like Crockett) who were there. The app includes extra features users can buy, much like Pokémon Go.

“We’re making this into a virtual time machine so that if I’m standing on this spot and I look at, oh well there’s Davy Crockett, then I can go back a century and I can see the mission being built,” Alamo Reality CEO Michael McGar said. The app will allow users to see the Alamo not only as it was in 1836, but as it was before and after, McGar said.

“We’re developing a technology that’s going to be able to span across generations to tell a story”

— Lane Traylor

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution is a controller in the shape of a ring, worn on the user’s finger, which he calls Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

Codify Academy Taps IBM Cloud with Watson to Design Cognitive Chatbot — from finance.yahoo.com
Chatbot “Bobbot” has driven thousands of potential leads and a 10 percent increase in converting visitors to students

Excerpt:

ARMONK, N.Y., Aug. 4, 2017 /PRNewswire/ — IBM (NYSE: IBM) today announced that Codify Academy, a San Francisco-based developer education startup, tapped into IBM Cloud’s cognitive services to create an interactive cognitive chatbot, Bobbot, that is improving student experiences and increasing enrollment.

Using the IBM Watson Conversation Service, Bobbot fields questions from prospective and current students in natural language via the company’s website. Since implementing the chatbot, Codify Academy has engaged thousands of potential leads through live conversation between the bot and site visitors, leading to a 10 percent increase in converting these visitors into students.

Bobbot can answer more than 200 common questions about enrollment, course and program details, tuition, and prerequisites, in turn enabling Codify Academy staff to focus on deeper, more meaningful exchanges.
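
As a rough illustration of what fielding a couple of hundred common questions involves, the sketch below implements a minimal keyword-based FAQ matcher. To be clear, this is not IBM's Watson Conversation API — Watson classifies intents with trained models, which this keyword-overlap approach only crudely approximates — and the topics and answers here are invented for the example.

```python
import re

# Invented FAQ entries for illustration; a production bot like Bobbot
# would cover ~200 topics and rely on trained intent classification.
FAQ = {
    "tuition": "Tuition and payment plans are listed on our pricing page.",
    "enrollment": "You can enroll online; the next cohort starts next month.",
    "prerequisites": "No prior coding experience is required.",
}

FALLBACK = "Good question! Let me connect you with one of our staff."

def answer(question, faq=FAQ):
    """Return the stored answer whose topic keyword appears in the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    hits = words & set(faq)
    # Unknown questions get handed to a human -- the deeper, more
    # meaningful exchanges the staff now have time for.
    return faq[sorted(hits)[0]] if hits else FALLBACK
```

A question like “What are the prerequisites?” matches the prerequisites entry; anything the bot cannot match falls through to a human.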

Also see:

Chatbots — The Beginner’s Guide — from chatbotsmagazine.com

Excerpt:

If you search for chatbots on Google, you’ll probably come across hundreds of pages, ranging from what a chatbot is to how to build one. That’s because we’re in 2017, the year of the chatbot revolution.

I’ve been introduced to many people who are new to this space and are very interested and motivated to enter it, whether they’re software developers, entrepreneurs, or just tech hobbyists. Entering this space for the first time has become overwhelming in just a few months, particularly after Facebook announced the release of the Messenger API at the F8 developer conference. For that reason, I’ve decided to simplify the basic steps of entering this fascinating world.

VR Is the Fastest-Growing Skill for Online Freelancers — from bloomberg.com by Isabel Gottlieb
Workers who specialize in artificial intelligence also saw big jumps in demand for their expertise.

Excerpt:

Overall, tech-related skills accounted for nearly two-thirds of Upwork’s list of the 20 fastest-growing skills.

Also see:


How to Prepare Preschoolers for an Automated Economy — from nytimes.com by Claire Miller and Jess Bidgood

Excerpt

MEDFORD, Mass. — Amory Kahan, 7, wanted to know when it would be snack time. Harvey Borisy, 5, complained about a scrape on his elbow. And Declan Lewis, 8, was wondering why the two-wheeled wooden robot he was programming to do the Hokey Pokey wasn’t working. He sighed, “Forward, backward, and it stops.”

Declan tried it again, and this time the robot shook back and forth on the gray rug. “It did it!” he cried. Amanda Sullivan, a camp coordinator and a postdoctoral researcher in early childhood technology, smiled. “They’ve been debugging their Hokey Pokeys,” she said.

The children, at a summer camp last month run by the Developmental Technologies Research Group at Tufts University, were learning typical kid skills: building with blocks, taking turns, persevering through frustration. They were also, researchers say, learning the skills necessary to succeed in an automated economy.

Technological advances have rendered an increasing number of jobs obsolete in the last decade, and researchers say parts of most jobs will eventually be automated. What the labor market will look like when today’s young children are old enough to work is perhaps harder to predict than at any time in recent history. Jobs are likely to be very different, but we don’t know which will still exist, which will be done by machines and which new ones will be created.

155 chatbots in this brand new landscape. Where does your bot fit? — from venturebeat.com by Carylyne Chan

Excerpt:

Since we started building bots at KeyReply more than two years ago, the industry has seen massive interest and change. This makes it hard for companies and customers to figure out what’s really happening — so we hope to throw some light on this industry by creating a landscape of chatbot-related businesses. There’s no way to put everyone into this landscape, so we have selected examples that give readers an overview of the industry, such as notable or dominant providers and tools widely used to develop bots.

To put everything into a coherent structure, we arranged companies along the axes according to the functions of their bots and how they built them.

On the horizontal axis, the “marketing” function refers to a bot’s ability to drive exposure, reach, and interaction with the brand or product for potential and current customers. The “support” function refers to a bot’s ability to assist current customers with problems and to resolve those problems for them.

On the vertical axis, “managed” refers to companies outsourcing the development of bots to external vendors, whereas “self-serve” refers to them building their bots in-house or with an off-the-shelf tool.

Penn State World Campus implements 360-degree videos in online courses — from news.psu.edu by Mike Dawson
Videos give students virtual-reality experiences; leaders hopeful for quick expansion

Excerpt:

UNIVERSITY PARK, Pa. — Penn State World Campus is using 360-degree videos and virtual reality for the first time with the goal of improving the educational experience for online learners.

The technology has been implemented in the curriculum of a graduate-level special education course in Penn State’s summer semester. Students can use a VR headset to watch 360-degree videos on a device such as a smartphone.

The course, Special Education 801, focuses on how teachers can respond to challenging behaviors, and the 360-degree videos place students in a classroom where they see an instructor explaining strategies for arranging the classroom in ways best suited to the learning activity. The videos were produced using a 360-degree video camera and uploaded into the course in just a few days.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of it will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points and turns it into something the machine can understand. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. Such understanding can be applied in different scenarios, including robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies, ARKit included, use only floor recognition and position tracking to place AR objects around you, so they don’t actually know enough about your environment to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of the environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
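
To make the floor-recognition step concrete, here is a deliberately simplified sketch (my own illustration, not ARKit's algorithm): given SLAM feature points as (x, y, z) positions, it finds the height at which points cluster most densely and treats that as the floor plane. Real systems fit planes far more robustly, e.g. with RANSAC over many candidate orientations.

```python
from collections import Counter

def estimate_floor_height(points, bin_size=0.05):
    """Guess the floor height: the 5 cm y-bin containing the most points."""
    bins = Counter(round(y / bin_size) for _x, y, _z in points)
    best_bin, _count = bins.most_common(1)[0]
    return best_bin * bin_size

# Synthetic point cloud: 20 feature points on the floor at y = 0.0,
# plus a couple of points detected on furniture above it.
cloud = [(0.1 * i, 0.0, 0.2 * i) for i in range(20)]
cloud += [(1.0, 0.80, 1.0), (1.2, 0.75, 0.9)]
print(estimate_floor_height(cloud))  # -> 0.0
```

With the floor plane known, an AR object can be anchored at that height; mesh-level systems like Tango extend the same clustering idea to walls and individual objects.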

The company with the most complete SLAM database will likely be the winner. Such a database would, metaphorically, give these giants an eye on the world: Facebook could tag and locate your photo just by analyzing the image, Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses, and your self-driving car could navigate itself with nothing more than visual data.

2017 Ed Tech Trends: The Halfway Point — from campustechnology.com by Rhea Kelly
Four higher ed IT leaders weigh in on the current state of education technology and what’s ahead.

This article includes perspectives shared by the following four IT leaders:

  • Susan Aldridge, Senior Vice President for Online Learning, Drexel University (PA); President, Drexel University Online
  • Daniel Christian, Adjunct Faculty Member, Calvin College
  • Marci Powell, CEO/President, Marci Powell & Associates; Chair Emerita and Past President, United States Distance Learning Association
  • Phil Ventimiglia, Chief Innovation Officer, Georgia State University

© 2017 | Daniel Christian