Amazon and Codecademy team up for free Alexa skills training — from venturebeat.com by Khari Johnson

Excerpt:

Amazon and tech training app Codecademy have collaborated to create a series of free courses. Available today, the courses are meant to teach developers and beginners alike how to create skills, the voice apps that interact with Alexa.

Since Amazon opened Alexa to third-party developers in 2015, more than 20,000 skills have been made available in the Alexa Skills Store.

Google AR and VR: Get a closer look with Street View in Google Earth VR

Excerpt:

With Google Earth VR, you can go anywhere in virtual reality. Whether you want to stroll along the canals of Venice, stand at the summit of Mount Kilimanjaro or soar through the sky faster than a speeding bullet, there’s no shortage of things to do or ways to explore. We love this sense of possibility, so we’re bringing Street View to Earth VR to make it easier for you to see and experience the world.

This update lets you explore Street View imagery from 85 countries right within Earth VR. Just fly down closer to street level, check your controller to see if Street View is available and enter an immersive 360° photo. You’ll find photos from the Street View team and those shared by people all around the world.

The End of Typing: The Next Billion Mobile Users Will Rely on Video and Voice — from wsj.com by Eric Bellman
Tech companies are rethinking products for the developing world, creating new winners and losers

Excerpt:

The internet’s global expansion is entering a new phase, and it looks decidedly unlike the last one.

Instead of typing searches and emails, a wave of newcomers—“the next billion,” the tech industry calls them—is avoiding text, using voice activation and communicating with images.

From DSC:
The above article reminds me that our future learning platforms will be largely driven by our voices. That’s why I put it into my vision of a next-generation learning platform.

Amazon’s Alexa is finally coming to wearable devices for the first time — from yahoo.com by Peter Newman

Excerpt:

Manufacturers have thus far incorporated the voice assistant into speakers, phones, thermostats, and more — but being incorporated into a wearable device is a first for Alexa.

The headphones will let users tap a button to launch the voice assistant, which will connect to the device through the user’s mobile phone and the Bragi app. They will let a wearer engage with the voice assistant while on the go, searching for basic information, shopping for goods on Amazon, or calling for a vehicle from ride-hailing services like Uber, among other possibilities. While all of these capabilities are already possible using a phone, enabling hands-free voice control brings a new level of convenience.

ARCore: Augmented reality at Android scale — from blog.google by Dave Burke

Excerpt:

With more than two billion active devices, Android is the largest mobile platform in the world. And for the past nine years, we’ve worked to create a rich set of tools, frameworks and APIs that deliver developers’ creations to people everywhere. Today, we’re releasing a preview of a new software development kit (SDK) called ARCore. It brings augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

Google just announced its plan to match the coolest new feature coming to the iPhone — from cnbc.com by Todd Haselton

  • Google just announced its answer to Apple’s augmented reality platform
  • New tools called ARCore will let developers enable AR on millions of Android devices

AR Experiments

Description:

AR Experiments is a site that features work by coders who are experimenting with augmented reality in exciting ways. These experiments use various tools like ARCore, an SDK that lets Android developers create awesome AR experiences. We’re featuring some of our favorite projects here to help inspire more coders to imagine what could be made with AR.

Google’s ARCore hopes to introduce augmented reality to the Android masses — from androidauthority.com by Williams Pelegrin

Excerpt:

Available as a preview, ARCore is an Android software development kit (SDK) that lets developers introduce AR capabilities to, you guessed it, Android devices. Because of how ARCore works, there is no need for folks to purchase additional sensors or hardware – it will work on existing and future Android phones.

Artificial intelligence will transform universities. Here’s how. — from weforum.org by Mark Dodgson & David Gann

Excerpt:

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.

We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead; otherwise, they will become increasingly irrelevant and eventually redundant.

Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.

As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.

The implications of AI for university research extend beyond science and technology.

When it comes to AI in teaching and learning, many of the more routine academic tasks (and least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help plan their course of studies.

Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.

Also see:

Digital audio assistants in teaching and learning — from blog.blackboard.com by Szymon Machajewski

Excerpts:

I built an Amazon Alexa skill called Introduction to Computing Flashcards. Using the skill, students listen to Alexa and then answer questions. Alexa helps students prepare for an exam by speaking definitions and waiting for the student to identify each term. In addition to quizzing the student, Alexa keeps track of the correct answers. If a student answers five questions correctly, Alexa shares a game code, which is worth class experience points in the course’s My Game gamification app.

Certainly, exam preparation apps are one way to use digital assistants in education. As developing and publishing Amazon Alexa skills becomes easier, faculty will be able to produce such skills just as easily as they now create PowerPoint presentations. Given the basic code available through Amazon tutorials, it takes about 20 minutes to create a new exam preparation app; a basic voice-experience Alexa skill can take as little as five minutes to complete.

Universities can publish their campus news through the Alexa Flash Briefing. This type of a skill can publish news, success stories, and other events associated with the campus.

If you are a faculty member, how can you develop your first Amazon Alexa skill? You can use any of the tutorials already available. You can also participate in an Amazon Alexa classroom training provided by Alexa Dev Days. It is possible that schools or maker spaces near you offer in-person developer sessions. You can use meetup.com to track these opportunities.
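As a rough sketch of what such an exam-preparation skill looks like under the hood, here is a minimal Lambda-style request handler in Python. All of the specifics — the intent names, the flashcard definitions, the five-answer threshold, the game code — are hypothetical stand-ins for illustration, not the actual Introduction to Computing Flashcards code:

```python
import random

# Hypothetical flashcard bank: definition -> term the student must identify.
FLASHCARDS = {
    "A named storage location that holds a value": "variable",
    "A sequence of instructions that repeats": "loop",
    "A named, reusable block of code": "function",
}

CORRECT_FOR_REWARD = 5  # correct answers needed before a game code is shared


def respond(speech, session):
    # Shape loosely mirrors an Alexa skill response: output speech plus
    # session attributes that persist across conversational turns.
    return {"speech": speech, "session_attributes": session}


def handle_request(event):
    """Dispatch a simplified Alexa-style request and return a speech response."""
    session = event.get("session_attributes", {})
    intent = event["intent"]

    if intent == "StartQuizIntent":
        definition = random.choice(list(FLASHCARDS))
        session.update({"current": definition, "correct": 0})
        return respond(f"What term matches: {definition}?", session)

    if intent == "AnswerIntent":
        expected = FLASHCARDS[session["current"]]
        if event["slots"]["term"].lower() == expected:
            session["correct"] += 1
            if session["correct"] >= CORRECT_FOR_REWARD:
                return respond("Correct! Your game code is ALPHA42.", session)
            definition = random.choice(list(FLASHCARDS))
            session["current"] = definition
            return respond(f"Correct! Next: {definition}?", session)
        return respond(f"Not quite; the answer was {expected}.", session)

    return respond("Say 'start quiz' to begin.", session)
```

In a real skill, a handler like this runs in AWS Lambda and receives Alexa’s full request JSON; the quiz-and-reward loop above is the part a faculty member would customize with their own course material.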

Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to bring augmented reality to iOS, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.
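As a toy illustration of that input/output contract — emphatically not a production NLP system — a few lines of Python can map a free-form BI question onto a structured query. The intent names and keyword lists below are invented for the example; real systems use trained statistical language models rather than keyword rules:

```python
# A deliberately tiny, rule-based stand-in for NLP-driven BI:
# turn a natural-language question into a structured query.
INTENT_KEYWORDS = {
    "total_sales": ["sales", "revenue"],
    "headcount": ["employees", "staff", "headcount"],
}


def parse_question(question):
    """Return a structured query dict for a free-form question."""
    words = [w.strip("?,.") for w in question.lower().split()]
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in words for k in keywords):
            # Pull out a four-digit year if one appears in the question.
            year = next((w for w in words if w.isdigit() and len(w) == 4), None)
            return {"intent": intent, "year": year}
    return {"intent": "unknown", "year": None}
```

The interesting part is not the keyword matching but the shape of the result: once a question becomes `{"intent": ..., "year": ...}`, the BI backend can answer it the same way it answers a click in a dashboard.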

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.

From DSC:
There are now more than 12,000 skills on Amazon’s new platform — Alexa. I continue to wonder…what will this new platform mean for, and deliver to, societies throughout the globe?

From this Alexa Skills Kit page:

What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.

What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

You can build and host most skills for free using Amazon Web Services (AWS).
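To give a sense of what defining a skill with ASK involves, a skill declares its interaction model as JSON. The sketch below follows the general shape of an ASK intent schema from this era; the custom intent name is hypothetical, while the `AMAZON.`-prefixed entries are Amazon’s built-in intents:

```json
{
  "intents": [
    { "intent": "GetFactIntent" },
    { "intent": "AMAZON.HelpIntent" },
    { "intent": "AMAZON.StopIntent" }
  ]
}
```

Alongside the schema, the developer supplies sample utterances (phrases like “tell me a fact”) that map spoken requests to these intents; Alexa’s service handles the speech recognition and sends the matched intent to the skill’s backend code.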

© 2017 | Daniel Christian