Augmented Reality Technology: A student creates the closest thing yet to a magic ring — from forbes.com by Kevin Murnane

Excerpt:

Nat Martin set himself the problem of designing a control mechanism that can be used unobtrusively to meld AR displays with the user’s real-world environment. His solution was a controller in the shape of a ring that can be worn on the user’s finger. He calls it Scroll. It uses the ARKit software platform and contains an Arduino circuit board, a capacitive sensor, a gyroscope, an accelerometer, and a SoftPot potentiometer. Scroll works with any AR device that supports the Unity game engine, such as Google Cardboard or Microsoft’s HoloLens.

Also see:

Scroll from Nat on Vimeo.

Addendum on 8/15/17:

New iOS 11 ARKit Demo Shows Off Drawing With Fingers In Augmented Reality [Video] — from redmondpie.com by Oliver Haslam

Excerpt:

When Apple releases iOS 11 to the public next month, it will also release ARKit for the first time. The framework, designed to make bringing augmented reality to iOS a reality, debuted during the opening keynote of WWDC 2017 when Apple announced iOS 11, and ever since then we have been seeing new concepts and demos released by developers.

Those developers have given us a glimpse of what we can expect when apps taking advantage of ARKit start to ship alongside iOS 11, and the latest of those is a demonstration in which someone’s finger is used to draw on a notepad.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points, building an understanding of the scene for the machine. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input. What the machine sees with SLAM technology from a simple scene looks like the photo above, for example.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. That understanding can be applied in different scenarios such as robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now most AR SLAM technologies, like ARKit, use only floor recognition and position tracking to place AR objects around you, so they don’t actually know what’s going on in your environment well enough to react to it correctly. More advanced SLAM technologies, like Google Tango, can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
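To make the floor-recognition idea above concrete, here is a minimal, hypothetical sketch (plain Python, not real ARKit code) of how a system might pick out the floor from a sparse SLAM point cloud: treat the dominant horizontal plane as the height shared by the most points.

```python
# Toy illustration of horizontal-plane (floor) detection from a sparse
# point cloud, loosely in the spirit of ARKit's floor recognition.
# Real systems use far more robust estimators (e.g., RANSAC over
# tracked visual feature points).

def detect_floor(points, tolerance=0.05):
    """Return the dominant horizontal plane's height: the y-value shared
    by the most points (within `tolerance` meters). Points are (x, y, z)
    tuples with y as the vertical axis."""
    best_height, best_count = None, 0
    for _, y, _ in points:
        count = sum(1 for _, py, _ in points if abs(py - y) <= tolerance)
        if count > best_count:
            best_height, best_count = y, count
    return best_height

# A made-up scene: six points on the floor (y ~ 0), two on a table (y ~ 0.7).
scene = [(0.1, 0.00, 0.2), (0.5, 0.02, 0.1), (0.9, -0.01, 0.4),
         (0.3, 0.01, 0.8), (0.7, 0.00, 0.6), (0.2, -0.02, 0.3),
         (0.4, 0.70, 0.5), (0.6, 0.72, 0.5)]

floor_y = detect_floor(scene)
print(round(floor_y, 2))  # -> 0.0 (the floor height)
```

A mesh-building system like Tango goes much further, but the principle is the same: cluster raw points into surfaces the machine can reason about.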

The company with the most complete SLAM database will likely be the winner. This database will give these giants a metaphorical eye on the world: Facebook, for example, could tag and locate your photo just by analyzing the image, and Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.

2017 Ed Tech Trends: The Halfway Point — from campustechnology.com by Rhea Kelly
Four higher ed IT leaders weigh in on the current state of education technology and what’s ahead.

This article includes some perspectives shared from the following 4 IT leaders:

  • Susan Aldridge, Senior Vice President for Online Learning, Drexel University (PA); President, Drexel University Online
  • Daniel Christian, Adjunct Faculty Member, Calvin College
  • Marci Powell, CEO/President, Marci Powell & Associates; Chair Emerita and Past President, United States Distance Learning Association
  • Phil Ventimiglia, Chief Innovation Officer, Georgia State University

Also see:

Making the future work for everyone — from blog.google by Jacquelline Fuller

Excerpt:

Help ensure training is as effective and as wide-reaching as possible.
Millions are spent each year on work skills and technical training programs, but there isn’t much visibility into how these programs compare, or whether the skills being taught truly match what will be needed in the future. So some of our funding will go into research to better understand which trainings will be most effective in getting the most people the jobs of the future. Our grantee Social Finance is looking at which youth training programs most effectively use contributions from trainees, governments, and future employers to give people the best chance of success.

Helping prepare for the future of work

Excerpt (emphasis DSC):

The way we work is changing. As new technologies continue to unfold in the workplace, more than a third of jobs are likely to require skills that are uncommon in today’s workforce. Workers are increasingly working independently. Demographic changes and shifts in labor participation in developed countries will mean future generations will find new ways to sustain economic growth. These changes create opportunities to think about how work can continue to be a source of not just income, but purpose and meaning for individuals and communities. Technology can help seize these opportunities. We recently launched Google for Jobs, which is designed to help better connect people to jobs, and today we’re announcing Google.org’s $50 million commitment to help people prepare for the changing nature of work. We’ll support nonprofits who are taking innovative approaches to tackling this challenge in three ways: (1) training people with the skills they need, (2) connecting job-seekers with positions that match their skills and talents, and (3) supporting workers in low-wage employment. We’ll start by focusing on the US, Canada, Europe, and Australia, and hope to expand to other countries over time.

Campus Technology 2017: Virtual Reality Is More Than a New Medium — from edtechmagazine.com by Amy Burroughs
Experts weigh in on the future of VR in higher education.

Excerpts:

“It’s actually getting pretty exciting,” Georgieva said, noting that legacy companies and startups alike have projects in the works that will soon be on the market. Look for standalone, wireless VR headsets later this year from Facebook and Google.

“I think it’s going to be a universal device,” he said. “Eventually, we’ll end up with some kind of glasses where we can just dial in the level of immersion that we want.”

— Emory Craig, at the Campus Technology 2017 conference


“Doing VR for the sake of VR makes no sense whatsoever,” Craig said. “Ask when does it make sense to do this in VR? Does a sense of presence help this, or is it better suited to traditional media?”

Virtual Reality: The User Experience of Story — from blogs.adobe.com

Excerpt:

Solving the content problems in VR requires new skills that are only just starting to be developed and understood, skills that are quite different from traditional storytelling. VR is a nascent medium. One part story, one part experience. And while many of the concepts from film and theater can be used, storytelling through VR is not like making a movie or a play.

In VR, the user has to be guided through an experience of a story, which means many of the challenges in telling a VR story are closer to UX design than anything from film or theater.

Take the issue of frameless scenes. In a VR experience, there are no borders, and no guarantees where a user will look. Scenes must be designed to attract user attention, in order to guide them through the experience of a story.

Sound design, staging cues, lighting effects, and movement can all be used to draw a user’s attention.

However, it’s a fine balance between attraction and distraction.

“In VR, it’s easy to overwhelm the user. If you see a flashing light and in the background, you hear a sharp siren, and then something moves, you’ve given the user too many things to understand,” says Di Dang, User Experience Lead at POP, Seattle. “Be intentional and deliberate about how you grab audience attention.”

VR is a storytelling superpower. No other medium has quite the same potential to create empathy and drive human connection. Because viewers are, for all intents and purposes, living the experience, they walk away with that history coded into their memory banks—easily accessible for future responses.

Google’s latest VR experiment is teaching people how to make coffee — from techradar.com by Parker Wilhelm
All in a quest to see how effective learning in virtual reality is

Excerpt:

Teaching with a simulation is no new concept, but Google’s Daydream Labs wants to see exactly how useful virtual reality can be for teaching people practical skills.

In a recent experiment, Google ran a simulation of an interactive espresso machine in VR. From there, it had a group of people try their virtual hand at brewing a cup of java before being tasked to make the real thing.

Addendum on 7/26/17:

Google’s AI Guru Says That Great Artificial Intelligence Must Build on Neuroscience — from technologyreview.com by Jamie Condliffe
Inquisitiveness and imagination will be hard to create any other way.

Excerpt:

Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014. Since then, his company has wiped the floor with humans at the complex game of Go and begun taking steps toward crafting more general AIs.

But now he’s come out and said that he believes the only way for artificial intelligence to realize its true potential is with a dose of inspiration from human intellect.

Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. But different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.

Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.

First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. 

From DSC:
Glory to God! I find it very interesting to see how people and organizations — via very significant costs/investments — keep trying to mimic the most amazing thing — the human mind. Turns out, that’s not so easy:

But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem…

Therefore, some scripture comes to my own mind here:

Psalm 139:14 New International Version (NIV)

14 I praise you because I am fearfully and wonderfully made;
    your works are wonderful,
    I know that full well.

Job 12:13 (NIV)

13 “To God belong wisdom and power;
    counsel and understanding are his.

Psalm 104:24 (NIV)

24 How many are your works, Lord!
    In wisdom you made them all;
    the earth is full of your creatures.

Revelation 4:11 (NIV)

11 “You are worthy, our Lord and God,
    to receive glory and honor and power,
for you created all things,
    and by your will they were created
    and have their being.”

Yes, the LORD designed the human mind by His unfathomable and deep wisdom and understanding.

Glory to God!

Thanks be to God!

The Business of Artificial Intelligence — from hbr.org by Erik Brynjolfsson & Andrew McAfee

Excerpts (emphasis DSC):

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

The machine learns from examples, rather than being explicitly programmed for a particular outcome.
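That pull-quote is the heart of machine learning, and it can be shown in miniature. The sketch below is illustrative only (real ML uses far richer models): instead of hand-coding a rule, the program fits a decision threshold from labeled examples.

```python
# "Learning from examples" in miniature: fit a decision threshold from
# labeled data rather than hard-coding the rule.

def fit_threshold(examples):
    """examples: list of (value, label) pairs with labels 0/1.
    Return the threshold that classifies the training data best."""
    candidates = sorted(v for v, _ in examples)
    best_t, best_acc = None, -1.0
    for t in candidates:
        acc = sum((v >= t) == bool(label) for v, label in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Nobody told the program "a score of 5 or more means positive";
# it infers that from the examples alone.
data = [(1, 0), (2, 0), (3, 0), (5, 1), (6, 1), (8, 1)]
t = fit_threshold(data)
print(t)       # -> 5 (the learned threshold)
print(9 >= t)  # -> True (a new input, classified as positive)
```

The same principle, scaled up to millions of parameters and examples, is what lets systems recognize faces or play Go without anyone articulating the rules.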

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. …For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards. 

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. 

You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names.

Google is turning Street View imagery into pro-level landscape photographs using artificial intelligence — from businessinsider.com by Edoardo Maggio

Excerpt:

A new experiment from Google is turning imagery from the company’s Street View service into impressive digital photographs using nothing but artificial intelligence (AI).

Google is using machine learning algorithms to train a deep neural network to roam around places such as Canada’s and California’s national parks, look for potentially suitable landscape images, and then work on them with special post-processing techniques.

The idea is to “mimic the workflow of a professional photographer,” and to do so Google is relying on so-called generative adversarial networks (GAN), which essentially pit two neural networks against one another.
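The adversarial idea behind GANs can be sketched with two single-number “players” instead of neural networks: a generator that places fake samples, and a discriminator that maintains a boundary between real and fake, each updating against the other. This is a loose illustration of the dynamic, not how a real GAN is trained.

```python
import random

# Toy adversarial loop: the discriminator's boundary keeps moving to
# separate real samples from fakes; the generator keeps chasing the
# "real" side of that boundary. At equilibrium the fakes match the
# real distribution's center.

random.seed(0)
real_mean = 5.0   # center of the "real data" distribution
g = 0.0           # generator parameter: where it places fake samples
d = 0.0           # discriminator parameter: its real/fake boundary

for _ in range(500):
    real = real_mean + random.gauss(0, 0.1)   # a real sample
    fake = g + random.gauss(0, 0.1)           # a generated sample
    d += 0.1 * ((real + fake) / 2 - d)  # boundary moves between the two
    g += 0.1 * (d - g)                  # generator moves toward the boundary

print(round(g, 1))  # approaches 5.0: fakes become hard to tell from real
```

In Google’s experiment the two players are deep networks and the “samples” are images, but the push-and-pull structure is the same.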

See also:

Using Deep Learning to Create Professional-Level Photographs — from research.googleblog.com by Hui Fang, Software Engineer, Machine Perception

McKinsey’s State Of Machine Learning & AI, 2017 — from forbes.com by Louis Columbus

Excerpts:

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence: The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute also published an article summarizing the findings, titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.

Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system for analyzing employment trends and opportunities that will highlight those courses and “streams of content” that help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
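As one possible sketch of the AI-backed, employment-trends bullet above: rank courses by the total demand for the skills they teach. All skill counts and course names below are invented for illustration; a real system would mine live job-posting data.

```python
# Hypothetical course-recommendation sketch: score each course by the
# demand for the skills it covers, and recommend highest-scoring first.

demand = {"python": 120, "sql": 90, "public speaking": 40, "cobol": 5}

courses = {
    "Data Analysis Bootcamp": {"python", "sql"},
    "Presentation Skills":    {"public speaking"},
    "Legacy Systems 101":     {"cobol"},
}

def recommend(courses, demand):
    """Return course names ordered by total demand for their skills."""
    scored = [(sum(demand.get(s, 0) for s in skills), name)
              for name, skills in courses.items()]
    return [name for _, name in sorted(scored, reverse=True)]

ranked = recommend(courses, demand)
print(ranked[0])  # -> Data Analysis Bootcamp (covers the two hottest skills)
```

The learner profile and Blockchain pieces of the vision would then record which of these recommendations a learner actually completed.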

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
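A tiny, hypothetical sketch of the chatbot items above: route a typed (or speech-to-text) utterance to a handler by matching keywords to intents. Production systems would use real natural language processing; the intent names and keywords here are placeholders.

```python
# Keyword-based intent routing for the envisioned platform's chatbots.
# One intent per chatbot purpose listed above: learning the system,
# organizational/billing questions, and in-course questions.

INTENTS = {
    "help_system":     {"how", "use", "navigate"},
    "billing":         {"payment", "credential", "invoice"},
    "course_question": {"assignment", "quiz", "lecture"},
}

def route(utterance):
    """Return the intent whose keywords best overlap the utterance,
    or 'unknown' if nothing matches."""
    words = set(utterance.lower().split())
    best = max(INTENTS, key=lambda i: len(INTENTS[i] & words))
    return best if INTENTS[best] & words else "unknown"

print(route("How do I use the discussion board?"))  # -> help_system
print(route("I have a question about my payment"))  # -> billing
```

Voice-based commands (via NLP) would feed this same routing layer after speech-to-text conversion.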

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

Connecting more Americans with jobs — from blog.google by Nick Zakrasek

Excerpt:

We have a long history of using our technology to connect people with crucial information. At I/O, we announced Google for Jobs, a company-wide initiative focused on helping both job seekers and employers, through deep collaboration with the job matching industry. This effort includes the Cloud Jobs API, announced last year, which provides access to Google’s machine learning capabilities to power smarter job search and recommendations within career sites, jobs boards, and other job matching sites and apps. Today, we’re taking the next step in the Google for Jobs initiative by putting the convenience and power of Search into the hands of job seekers. With this new experience, we aim to connect Americans to job opportunities across the U.S., so no matter who you are or what kind of job you’re looking for, you can find job postings that match your needs.

How to Use Google for Jobs to Rock Your Career — from avidcareerist.com by Donna Svei

Excerpt:

How Does Google for Jobs Work?
Let me walk you through an example.

Go to your Google search bar.
Enter your preferred job title, followed by the word jobs, and your preferred location. Like this:

Google launches its AI-powered jobs search engine — from techcrunch.com by Frederic Lardinois

Excerpt:

Looking for a new job is getting easier. Google today launched a new jobs search feature right on its search result pages that lets you search for jobs across virtually all of the major online job boards, such as LinkedIn, Monster, WayUp, DirectEmployers, CareerBuilder, and Facebook. Google will also include job listings it finds on a company’s homepage.

The idea here is to give job seekers an easy way to see which jobs are available without having to go to multiple sites only to find duplicate postings and lots of irrelevant jobs.

Google for Jobs Could Save You Time on Your Next Job Search — from lifehacker.com by Patrick Allan

Excerpt:

Google launched its new Google for Jobs feature today, which uses its machine learning Cloud Jobs API to put job listings from all the major job service sites in one easy-to-search place.

2017 Internet Trends Report — from kpcb.com by Mary Meeker

Mary Meeker’s 2017 internet trends report: All the slides, plus analysis — from recode.net by Rani Molla
The most anticipated slide deck of the year is here.

Excerpt:

Here are some of our takeaways:

  • Global smartphone growth is slowing: Smartphone shipments grew 3 percent year over year last year, versus 10 percent the year before. This is in addition to continued slowing internet growth, which Meeker discussed last year.
  • Voice is beginning to replace typing in online queries. Twenty percent of mobile queries were made via voice in 2016, while accuracy is now about 95 percent.
  • In 10 years, Netflix went from 0 to more than 30 percent of home entertainment revenue in the U.S. This is happening while TV viewership continues to decline.
  • China remains a fascinating market, with huge growth in mobile services and payments and services like on-demand bike sharing. (More here: The highlights of Meeker’s China slides.)

Read Mary Meeker’s essential 2017 Internet Trends report — from techcrunch.com by Josh Constine

Excerpt:

This is the best way to get up to speed on everything going on in tech. Kleiner Perkins venture partner Mary Meeker’s annual Internet Trends report is essentially the state of the union for the technology industry. The widely anticipated slide deck compiles the most informative research on what’s getting funded, how Internet adoption is progressing, which interfaces are resonating, and what will be big next.

You can check out the 2017 report embedded below, and here’s last year’s report for reference.

The Slickest Things Google Debuted [on 5/17/17] at Its Big Event — from wired.com by Arielle Pardes

Excerpt (emphasis DSC):

At this year’s Google I/O, the company’s annual developer conference and showcase, CEO Sundar Pichai made one thing very clear: Google is moving toward an AI-first approach in its products, which means pretty soon, everything you do on Google will be powered by machine learning. During Wednesday’s keynote speech, we saw that approach seep into all of Google’s platforms, from Android to Gmail to Google Assistant, each of which is getting spruced up with new capabilities thanks to AI. Here’s our list of the coolest things Google announced today.

Google Lens Turns Your Camera Into a Search Box — from wired.com by David Pierce

Excerpt:

Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it’s still primarily a search company. And [on 5/17/17] at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say “it’s a restaurant,” which you know, or “it’s called Golden Corral,” which you also know. It can automatically find you the hours, or call up the menu, or see if there’s a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you’ll learn that it’s an Elatior Begonia, and that it really needs indirect, bright light to survive. It’s a full-fledged search engine, starting with your camera instead of a text box.
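The “image search in reverse” idea can be sketched as a nearest-neighbor lookup: extract features from a photo, match them against a database of known objects, and return the stored facts. Real systems like Lens use deep neural networks over millions of classes; every vector and entry below is invented for illustration.

```python
# Toy reverse image search: match an image's feature vector against a
# tiny database of known objects and return the associated information.

KNOWN_OBJECTS = {
    "elatior begonia": ([0.9, 0.1, 0.3], "Needs indirect, bright light."),
    "restaurant sign": ([0.2, 0.8, 0.5], "Look up hours, menu, tables."),
}

def identify(features):
    """Return the (name, info) of the known object whose feature vector
    is closest to the query, by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    name = min(KNOWN_OBJECTS, key=lambda k: dist(KNOWN_OBJECTS[k][0], features))
    return name, KNOWN_OBJECTS[name][1]

# Hypothetical features extracted from a photo of a flower:
name, info = identify([0.85, 0.15, 0.25])
print(name)  # -> elatior begonia
```

The interesting part of Lens is everything after the match: wiring the identified object into live data such as opening hours or care instructions.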

Google’s AI Chief On Teaching Computers To Learn–And The Challenges Ahead — from fastcompany.com by Harry McCracken
When it comes to AI technologies such as machine learning, Google’s aspirations are too big for it to accomplish them all itself.

Excerpt:

“Last year, we talked about becoming an AI-first company and people weren’t entirely sure what we meant,” he told me. With this year’s announcements, it’s not only understandable but tangible.

“We see our job as evangelizing this new shift in computing,” Giannandrea says.


Matching people with jobs
Pichai concluded the I/O keynote by previewing Google for Jobs, an upcoming career search engine that uses machine learning to understand job listings–a new approach that is valuable, Giannandrea says, even though looking for a job has been a largely digital activity for years. “They don’t do a very good job of classifying the jobs,” Giannandrea says. “It’s not just that I’m looking for part-time work within five miles of my house–I’m looking for an accounting job that involves bookkeeping.”

Google Assistant Comes to Your iPhone to Take on Siri — from wired.com by David Pierce

Google rattles the tech world with a new AI chip for all — from wired.com by Cade Metz

I/O 2017 Recap — from Google.com

The most important announcements from Google I/O 2017! — from androidcentral.com by Alex Dobie

Google IO 2017: All the announcements in one place! — from androidauthority.com by Kris Carlon

EON CREATOR AVR

The EON Creator AVR Enterprise and Education content builder empowers non-technical users to create compelling AR and VR applications in minutes, not weeks.

ENTERPRISE
With no programming required, EON Creator AVR Enterprise empowers workers to accelerate learning and improve performance, safety, and efficiency in the workplace.

EDUCATION
Teachers and students can create, experience, and share AVR learning applications with EON Creator AVR and seamlessly add them to their current classroom.

Also see:
 
© 2024 | Daniel Christian