Also see:
Everything Apple Announced — from wired.com by Arielle Pardes

Excerpt:

To much fanfare, Apple CEO Tim Cook unveiled the next crop of iPhones [on 9/12/17] at the new Steve Jobs Theater at Apple’s new headquarters in Cupertino. With the introduction of three new phones, Cook made clear that Apple’s premier product is very much still evolving. The iPhone X, he said, represents “the future of smartphones”: a platform for augmented reality, a tool for powerful computing, a screen for everything. But it’s not all about the iPhones. The event also brought with it a brand new Apple Watch, upgrades to Apple TV, and a host of other features coming to the Apple ecosystem this fall. Missed the big show? Check out our archived live coverage of Apple’s big bash, and read all the highlights below.

 

 

iPhone Event 2017 — from techcrunch.com

From DSC:
A nice listing of articles that cover all of the announcements.

 

 

Apple Bets on Augmented Reality to Sell Its Most Expensive Phone — from bloomberg.com by Alex Webb and Mark Gurman

Excerpt:

Apple Inc. packed its $1,000 iPhone with augmented reality features, betting the nascent technology will persuade consumers to pay premium prices for its products even as cheaper alternatives abound.

The iPhone X, Apple’s most expensive phone ever, was one of three new models Chief Executive Officer Tim Cook showed off during an event at the company’s new $5 billion headquarters in Cupertino, California, on Tuesday. It also rolled out an updated Apple Watch with a cellular connection and an Apple TV set-top box that supports higher-definition video.

Augmented Reality
Apple executives spent much of Tuesday’s event describing how AR is at the core of the new flagship iPhone X. Its new screen, 3-D sensors, and dual cameras are designed for AR video games and other, more practical uses such as measuring digital objects in real-world spaces. Months before the launch, Apple released a tool called ARKit that made it easier for developers to add AR capabilities to their apps.

These technologies have never been available in consumer devices and “solidify the platform on which Apple will retain and grow its user base for the next decade,” Gene Munster of Loup Ventures wrote in a note following Apple’s event.

The company is also working on smart glasses that may be AR-enabled, people familiar with the plan told Bloomberg earlier this year.
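As a concrete aside, the measuring-digital-objects use case mentioned in the excerpt comes down to simple geometry once the tracking system can place anchor points in 3-D world coordinates. A minimal Python sketch (illustrative only; real ARKit code would be in Swift, and the anchor coordinates below are invented):

```python
import math

def measure_distance(anchor_a, anchor_b):
    """Euclidean distance between two 3-D anchor points (in meters),
    as an AR 'tape measure' app would compute it."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(anchor_a, anchor_b)))

# Two hypothetical anchors placed on a tabletop, coordinates in meters.
table_edge_left = (0.0, 0.72, -0.50)
table_edge_right = (1.20, 0.72, -0.50)
print(measure_distance(table_edge_left, table_edge_right))  # ≈ 1.2 meters
```

The hard part, of course, is not the distance formula but the tracking that keeps those anchors fixed to real surfaces as the camera moves, which is exactly what ARKit provides.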

 

 

Meet the iPhone X, Apple’s New High-End Handset — from wired.com by David Pierce

Excerpt:

First of all, the X looks like no other phone. It doesn’t even look like an iPhone. On the front, it’s screen head to foot, save for a small trapezoidal notch taken out of the top where Apple put selfie cameras and sensors. Otherwise, the bezel around the edge of the phone has been whittled to near-nonexistence and the home button disappeared—all screen and nothing else. The case is made of glass and stainless steel, like the much-loved iPhone 4. The notched screen might take some getting used to, but the phone’s a stunner. It goes on sale starting at $999 on October 27, and it ships November 3.

If you can’t get your hands on an iPhone X in the near future, Apple still has two new models for you. The iPhone 8 and 8 Plus both look like the iPhone 7—with home buttons!—but offer a few big upgrades to match the iPhone X. Both new models support wireless charging, run the latest A11 Bionic processor, and have 2 gigs of RAM. They also have glass backs, which gives them a glossy new look. They don’t have OLED screens, but they’re getting the same TrueTone tech as the X, and they can shoot video in 4K.

 

 

Apple Debuts the Series 3 Apple Watch, Now With Cellular — from wired.com by David Pierce

 

 

 

Ikea and Apple team up on augmented reality home design app — from curbed.com by Asad Syrkett
The ‘Ikea Place’ app lets shoppers virtually test drive furniture

 

 

The New Apple iPhone 8 Is Built for Photography and Augmented Reality — from time.com by Alex Fitzpatrick

Excerpt:

Apple says the new iPhones are also optimized for augmented reality, or AR, which is software that makes it appear that digital images exist in the user’s real-world environment. Apple SVP Phil Schiller demonstrated several apps making use of AR technology, from a baseball app that shows users player statistics when pointing their phone at the field to a stargazing app that displays the location of constellations and other celestial objects in the night sky. Gaming will be a major use case for AR as well.

Apple’s new iPhone 8 and iPhone 8 plus will have wireless charging as well. Users will be able to charge the device by laying it down on a specially-designed power mat on their desk, bedside table or inside their car, similar to how the Apple Watch charges. (Competing Android devices have long had a similar feature.) Apple is using the Qi wireless charging standard for the iPhones.

 

 

Why you shouldn’t unlock your phone with your face — from medium.com by Quincy Larson

Excerpt:

Today Apple announced its new FaceID technology. It’s a new way to unlock your phone through facial recognition. All you have to do is look at your phone and it will recognize you and unlock itself. At time of writing, nobody outside of Apple has tested the security of FaceID. So this article is about the security of facial recognition, and other forms of biometric identification in general.

Historically, biometric identification has been insecure. Cameras can be tricked. Voices can be recorded. Fingerprints can be lifted. And in many countries — including the US — the police can legally force you to use your fingerprint to unlock your phone. So they can most certainly point your phone at your face and unlock it against your will. If you value the security of your data — your email, social media accounts, family photos, the history of every place you’ve ever been with your phone — then I recommend against using biometric identification.

Instead, use a passcode to unlock your phone.

 

 

The iPhone lineup just got really compleX  — from techcrunch.com by Josh Constine


 

Apple’s ‘Neural Engine’ Infuses the iPhone With AI Smarts — from wired.com by Tom Simonite

Excerpt:

When Apple CEO Tim Cook introduced the iPhone X on Tuesday, he claimed it would “set the path for technology for the next decade.” Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term dreams of Apple and its competitors.

That feature is the “neural engine,” part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.

Apple said the neural engine would power the algorithms that recognize your face to unlock the phone and transfer your facial expressions onto animated emoji. It also said the new silicon could enable unspecified “other features.”

Chip experts say the neural engine could become central to the future of the iPhone as Apple moves more deeply into areas such as augmented reality and image recognition, which rely on machine-learning algorithms. They predict that Google, Samsung, and other leading mobile-tech companies will soon create neural engines of their own. Earlier this month, China’s Huawei announced a new mobile chip with a dedicated “neural processing unit” to accelerate machine learning.
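The “artificial neural networks” described above reduce to layers of multiply-accumulate operations followed by simple nonlinearities, which is precisely the arithmetic pattern that dedicated neural-engine silicon is built to accelerate. A deliberately tiny forward pass in plain Python (illustrative only; the weights are made up):

```python
def relu(v):
    # The simple nonlinearity applied between layers.
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: a matrix-vector product plus bias.
    This multiply-accumulate pattern is what neural-engine hardware speeds up."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# A toy two-layer network with invented weights.
x = [0.5, -1.0, 2.0]                                        # e.g. three pixel intensities
h = relu(dense(x, [[1.0, 0.0, 0.5], [0.0, 1.0, 1.0]], [0.1, 0.0]))
y = dense(h, [[0.2, 0.3]], [0.0])
print(y)  # a single output score
```

A real image or speech model runs millions of these operations per frame, which is why offloading them from the CPU to specialized circuits matters for battery life and latency.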

 

 

 

 

Samsung to develop VR mental health diagnosis tools for hospitals — by Cho Mu-Hyun
Samsung Electronics will work with Gangnam Severance Hospital and content maker FNI to develop mental health diagnosis tools that use virtual reality.

Excerpt:

Cognitive behaviour therapies for suicide prevention and psychological assessment will be the focus, it said.

The companies will make chairs and diagnosis kits as physical products and will develop an application for use in psychological assessments using artificial intelligence (AI).

 

 

 

Augmented reality 101: Top AR use-cases — from wikitude.com by Camila Kohles

Excerpt:

Before we proceed, let’s make one thing clear: AR is not just about dog face filters and Pokémon GO. We kid you not. People are using this technology to bring ease to their lives and many forward-thinking companies are working with augmented reality to improve their workflow and businesses. Let’s see how.

 

 

 

The Top 9 Augmented Reality Companies in Healthcare — from medicalfuturist.com with thanks to Woontack Woo for the resource

Excerpt:

When Pokemon Go conquered the world, everyone could see the huge potential in augmented reality. Although the hype around virtual animal hunting has settled, AR continues to march triumphantly into more and more industries and fields, including healthcare. Here, I have listed the most significant companies bringing augmented reality to medicine and healing.


Brain Power
The Massachusetts-based technology company, established in 2013, has been focusing on the application of pioneering neuroscience with the latest in wearable technology, in particular, Google Glass. The start-up builds brain science-driven software to transform wearables into neuro-assistive devices for the educational challenges of autism. Their aim is to teach life skills to children and adults on the autism spectrum. They developed a unique software suite, the “Empowered Brain” aiming to help children with their social skills, language, and positive behaviors. The software contains powerful data collection and analytic tools allowing for customized feedback for the child.

 

 

How VR Can Ease the Transition for First-Time Wheelchair Users — from vrscout.com by Presley West
Designers at innovation and design company Fjord have created a VR experience that teaches brand-new wheelchair users how to maneuver through their environment in a safe, empowering way.

 

 

 

How Eye Tracking is Driving the Next Generation of AR and VR — from vrscout.com by Eric Kuerzel

 

 

WebVR: Taking The Path Of Least Resistance To Mainstream VR — from vrscout.com by Vanessa Radd

Excerpt:

Content and Education
In the midst of a dearth of content for VR, WebVR content creators are coming together to create and collaborate. Over a million creators are sharing their 3D models on Sketchfab’s 3D/VR art community platform. Virtuleap also organized the first global WebVR hackathon.

“For application domains such as education and heritage, developing VR scenes and experiences for the Web is highly important,” said Stone. “[This] promotes accessibility by many beneficiaries without the need (necessarily) for expensive and sophisticated computing or human interface hardware.”

This democratized approach opens up possibilities in education far beyond what we’re seeing today.

“I also think that WebVR, as a JavaScript API, enables a wide range of future students and young developers to ‘dip their toes into the water’ and start to build portfolios demonstrating their capabilities, ultimately to future employers,” said Stone. “I remember the promise of the days of VRML and products such as SGI’s Cosmo and Cortona3D (which is still available today, of course). But the ability to open up interactive—and quite impressive—demos of VR experiences that existed in higher-quality form on specialised platforms became an amazing marketing tool in the late 1990s and 2000s.”

 

 

 

 

 

ARCore: Augmented reality at Android scale — from blog.google by Dave Burke

Excerpt:

With more than two billion active devices, Android is the largest mobile platform in the world. And for the past nine years, we’ve worked to create a rich set of tools, frameworks and APIs that deliver developers’ creations to people everywhere. Today, we’re releasing a preview of a new software development kit (SDK) called ARCore. It brings augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

 

Google just announced its plan to match the coolest new feature coming to the iPhone — from cnbc.com by Todd Haselton

  • Google just announced its answer to Apple’s augmented reality platform
  • A new tool called ARCore will let developers enable AR on millions of Android devices

 

AR Experiments

Description:

AR Experiments is a site that features work by coders who are experimenting with augmented reality in exciting ways. These experiments use various tools like ARCore, an SDK that lets Android developers create awesome AR experiences. We’re featuring some of our favorite projects here to help inspire more coders to imagine what could be made with AR.

 

Google’s ARCore hopes to introduce augmented reality to the Android masses — from androidauthority.com by Williams Pelegrin

Excerpt:

Available as a preview, ARCore is an Android software development kit (SDK) that lets developers introduce AR capabilities to, you guessed it, Android devices. Because of how ARCore works, there is no need for folks to purchase additional sensors or hardware – it will work on existing and future Android phones.

 

 

5 reasons iPhone 8 will be an augmented reality game changer — from techradar.com by Mark Knapp
Augmented Reality (AR) will be everywhere!

Excerpt (emphasis DSC):

Most of us are at least vaguely familiar with augmented reality thanks to Pokemon Go sticking Charmanders in our sock drawers and Snapchat letting us spew stars and rainbows from our mouths, but Apple’s iPhone 8 is going to push AR to the point of ubiquity. When the iPhone 8 launches, we’ll all be seeing the world differently.

iPhones are everywhere, so AR will be everywhere!

The iPhone 8 will bring with it iOS 11, and with iOS 11 will come Apple’s AR. Since the iPhone 8 is all but guaranteed to be a best-seller and earlier iPhones and iPads will be widely updated to iOS 11, Apple will have a massive AR platform. Craig Federighi, Apple’s Senior Vice President of Software Engineering, believes it will be “the largest AR platform in the world,” which will lure AR developers en masse.

 

Apple AR has huge potential in education.
Apple has been positioning itself in the education world for years, with programs like iTunes U and iBooks, as well as efforts to get iPads into classrooms. AR already has major prospects in education, with the ability to make museum exhibits interactive and to put a visually explorable world in front of users.

 

 

Apple could guide you around your city using augmented reality — from techcrunch.com by Romain Dillet, with thanks to Woontack Woo for this resource

 

 

 

Startup Simulanis uses augmented and virtual reality to skill professionals — from economictimes.indiatimes.com by Vinay Dwivedi

Excerpt:

[India] The widening gap between the skills required by businesses and the know-how of a large number of engineering students got Raman Talwar started on his entrepreneurial journey.

Delhi-based Simulanis harnesses AR and VR technology to help companies across industries — pharmaceuticals, auto, FMCG and manufacturing — train their staff. It continues to work in the engineering education sector and has developed applications that help students visualise challenging subjects and concepts.

“Our products help students and trainees learn difficult concepts easily and interactively through immersive AR-VR and 3-D gamification methods,” says Talwar. Simulanis’ offerings include an AR learning platform, Saral, and a gamified learning platform, Protocol.

 

Also see:

 

 

 

Want to learn a new language? With this AR app, just point & tap — from fastcodesign.com by Mark Wilson
A new demo shows how augmented reality could redefine apps as we know them.

Excerpt:

There’s a new app gold rush. After Facebook and Apple both released augmented reality development kits in recent months, developers are demonstrating just what they can do with these new technologies. It’s a race to invent the future first.

To get a taste of how quickly and dramatically our smartphone apps are about to change, just take a look at this little demo by front-end engineer Frances Ng, featured on Prosthetic Knowledge. Just by aiming her iPhone at various objects and tapping, she can both identify items like lamps and laptops, and translate their names into a number of different languages. Bye bye, multilingual dictionaries and Google Translate. Hello, “what the heck is the Korean word for that?”
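Conceptually, the point-and-tap loop in the demo decomposes into two steps: an image classifier produces a label, and the label is looked up in a translation table. The Python sketch below is a toy stand-in (the classifier is a stub, the real demo uses Apple’s CoreML vision models, and the tiny translation table is hand-made for illustration):

```python
# Stub classifier standing in for a CoreML image-recognition model.
def classify(image):
    # A real model infers a label from pixel data; here we fake it.
    return {"lamp.jpg": "lamp", "laptop.jpg": "laptop"}.get(image, "unknown")

# Hand-made translation table (illustrative, not a real dictionary).
TRANSLATIONS = {
    "lamp":   {"ko": "램프", "es": "lámpara"},
    "laptop": {"ko": "노트북", "es": "portátil"},
}

def point_and_tap(image, language):
    """Identify the object in view, then translate its name."""
    label = classify(image)
    return TRANSLATIONS.get(label, {}).get(language, label)

print(point_and_tap("lamp.jpg", "es"))  # → lámpara
```

What makes the real demo striking is that both steps run on-device in real time, with ARKit pinning the resulting label to the object in the camera view.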

 

 

 

Also see:

Apple ARKit & Machine Learning Come Together Making Smarter Augmented Reality — from next.reality.news by Jason Odom

Excerpt:

The world is a massive place, especially when you consider the field of view of your smartglasses or mobile device. To fulfill the potential promise of augmented reality, we must find a way to fill that view with useful and contextual information. Of course, the job of creating contextual, valuable information, to fill the massive space that is the planet earth, is a daunting task to take on. Machine learning seems to be one solution many are moving toward.

Tokyo-based web developer Frances Ng released a video on Twitter showing off her first experiments with Apple’s ARKit and CoreML, Apple’s machine learning system. As you can see in the gifs below, her mobile device is being used to recognize a few objects around her room, and then display the names of the identified objects.

 

 

 

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, gyroscope, accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

 

Also see:

Scroll from Nat on Vimeo.

 

 


Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

 


 

 

 

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

 

 

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter the AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology, and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points, building an understanding of the scene for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. That understanding can be used in different scenarios such as robotics, self-driving cars, AI and, of course, augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies like ARKit only use floor recognition and position tracking to place AR objects around you, so they don’t actually know enough about your environment to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
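The floor recognition described above can be approximated crudely: among the sparse 3-D points SLAM produces, the floor is roughly the most heavily populated horizontal level. A deliberately simplified Python illustration (real systems fit planes to the point cloud; the point data below is synthetic):

```python
from collections import Counter

def estimate_floor_height(points, bin_size=0.05):
    """Guess the floor plane's height from a SLAM-style point cloud.

    points: iterable of (x, y, z) tuples, with y = height in meters.
    Returns the center of the most heavily populated height bin --
    a crude stand-in for the plane fitting ARKit actually performs.
    """
    bins = Counter(round(y / bin_size) for _, y, _ in points)
    most_common_bin, _ = bins.most_common(1)[0]
    return most_common_bin * bin_size

# Synthetic cloud: many points near y=0 (floor), a few on furniture.
cloud = [(0.1, 0.01, 0.2), (0.5, -0.02, 0.9), (1.2, 0.0, 0.4),
         (0.8, 0.02, 1.1), (0.3, 0.74, 0.5), (0.9, 0.75, 0.6)]
print(estimate_floor_height(cloud))  # → 0.0
```

Systems like Tango go further, clustering points into vertical planes (walls) and free-standing meshes (objects), which is what enables the richer interactions the article describes.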

 

 

The company with the most complete SLAM database will likely be the winner. This database will allow these giants, metaphorically, to have an eye on the world: for example, Facebook could tag and know the location of your photo just by analyzing the image, or Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.

 

 

 

7 years after Steve Jobs waged war on Flash, it’s officially dying — from finance.yahoo.com by Kif Leswing

Excerpt:

Adobe is killing Flash, the software that millions used in the early 2000s to play web games and watch video in their web browsers.

The company announced the software was “end-of-life” in a blog post on Tuesday. From the blog post:

“Given this progress, and in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash. Specifically, we will stop updating and distributing the Flash Player at the end of 2020 and encourage content creators to migrate any existing Flash content to these new open formats.”

 

 

The Business of Artificial Intelligence — from hbr.org by Erik Brynjolfsson & Andrew McAfee

Excerpts (emphasis DSC):

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

The machine learns from examples, rather than being explicitly programmed for a particular outcome.
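The pull-quote above can be made concrete with the smallest possible learner: a perceptron that is shown labeled examples of logical AND but is never given the rule itself, adjusting its weights until its predictions match the examples. A minimal, self-contained Python sketch:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs -- no rule is ever coded."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The machine sees examples of AND, never an if-statement encoding it.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(a, c) for (a, c), _ in examples])  # → [0, 0, 0, 1]
```

Modern deep networks differ in scale, not in kind: millions of weights instead of three, but the same loop of predicting, measuring error, and adjusting.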

 

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. …For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards. 

 

 

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. 

 

 

You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names.

 

 

 

Now Everyone Really Can Code Thanks to Apple’s New College Course — from trendintech.com by Amanda Porter

 
© 2017 | Daniel Christian