The Jarvish X is the more basic of the two models. It offers integrated microphones and speakers with Siri, Google Assistant, and Alexa support, so wearers can get directions, hear weather updates, and control music by voice. There’s also a 2K, front-facing camera built into the helmet so you can record your ride. It’s set to cost $799 when it hits Kickstarter in January.
From DSC: Microsoft’s conference room of the future “listens” to the conversations of the team and provides a transcript of the meeting. It is also using “artificial intelligence tools to then act on what meeting participants say. If someone says ‘I’ll follow up with you next week,’ then they’ll get a notification in Microsoft Teams, Microsoft’s Slack competitor, to actually act on that promise.”
This made me wonder about our learning spaces in the future. Will an #AI-based device/cloud-based software app — in real-time — be able to “listen” to the discussion in a classroom and present helpful resources in the smart classroom of the future (i.e., websites, online-based databases, journal articles, and more)?
Will this be a feature of a next generation learning platform as well (i.e., addressing the online-based learning realm)? Will this be a piece of an intelligent tutor or an intelligent system?
As I write this, AI has already begun to make video meetings even better. You no longer have to spend time entering codes or clicking buttons to launch a meeting. Instead, with voice-based AI, video conference users can start, join or end a meeting by simply speaking a command (think about how you interact with Alexa).
Voice-to-text transcription, another artificial intelligence feature offered by Otter Voice Meeting Notes (from AISense, a Zoom partner), Voicefox and others, can take notes during video meetings, leaving you and your team free to concentrate on what’s being said or shown. AI-based voice-to-text transcription can identify each speaker in the meeting and save you time by letting you skim the transcript, search and analyze it for certain meeting segments or words, then jump to those mentions in the transcript. Over 65% of respondents to a Zoom survey said they think AI will save them at least one hour a week of busy work, with many claiming it will save them one to five hours a week.
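To make the “search and jump to mentions” idea concrete, here is a minimal sketch of searching a speaker-labeled transcript. The data structure and function are illustrative only — not Otter’s or Voicefox’s actual API.

```python
# Illustrative sketch: searching a speaker-labeled transcript for a term
# and returning the timestamps needed to jump to each mention.
transcript = [
    {"speaker": "Alice", "time": "00:01:12", "text": "Let's review the Q3 budget."},
    {"speaker": "Bob",   "time": "00:02:40", "text": "The budget numbers look solid."},
    {"speaker": "Alice", "time": "00:05:03", "text": "I'll follow up next week."},
]

def find_mentions(segments, term):
    """Return (speaker, timestamp) pairs for segments containing the term."""
    term = term.lower()
    return [(s["speaker"], s["time"]) for s in segments if term in s["text"].lower()]

print(find_mentions(transcript, "budget"))
# [('Alice', '00:01:12'), ('Bob', '00:02:40')]
```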
Skype chats are coming to Alexa devices — from engadget.com by Richard Lawler
Voice-controlled internet calls to or from any device with Amazon’s system in it.
Excerpt:
Aside from all of the Alexa-connected hardware, there’s one more big development coming for Amazon’s technology: integration with Skype. Microsoft and Amazon said that voice and video calls via the service will come to Alexa devices (including Microsoft’s Xbox One) with calls that you can start and control just by voice.
Echo HomePod? Amazon wants you to build your own — by Brian Heater
One of the bigger surprises at today’s big Amazon event was something the company didn’t announce. After a couple of years of speculation that the company was working on its own version of the Home…
With the new Alexa Presentation Language, or APL, developers will be able to build voice-based apps that also include things like images, graphics, slideshows and video, and easily customize them for different device types — including not only the Echo Show, but other Alexa-enabled devices like Fire TV, Fire Tablet, and the small screen of the Alexa alarm clock, the Echo Spot.
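For a feel of what an APL-based response involves, here is a rough sketch of a skill response that pairs speech with a simple on-screen document. The structure follows Amazon’s published APL examples, but it is simplified and illustrative rather than a complete, tested skill.

```python
# Simplified sketch of an Alexa skill response pairing spoken output with
# an APL document for screen-equipped devices (Echo Show, Fire TV, Spot).
# Structure based on Amazon's published APL examples; details simplified.
response = {
    "outputSpeech": {"type": "PlainText", "text": "Here is today's forecast."},
    "directives": [{
        "type": "Alexa.Presentation.APL.RenderDocument",
        "document": {
            "type": "APL",
            "version": "1.0",
            "mainTemplate": {
                "items": [{
                    "type": "Text",
                    "text": "Sunny, high of 72",
                    "fontSize": "40dp",  # density-independent, so it scales across screens
                }]
            },
        },
    }],
}
```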
From DSC: This is a great move by Amazon — as NLP and our voices become increasingly important in how we “drive” and utilize our computing devices.
The business plan from here is clear: Companies pay a premium to be activated when users pose questions related to their products and services. “How do you cook an egg?” could pull up a Food Network tutorial; “How far is Morocco?” could enable the Expedia app.
The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.
The open-source tool is free for download now on Oculus and Steam.
Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.
“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.
From DSC: While VR will have its place — especially for times when you need to completely immerse yourself in another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see where instructions on how to put something together in the future could use AR and/or MR to assist with that process. The system could highlight the next part that I’m looking for and then highlight the corresponding spot where it goes — and, if requested, could show me a clip on how it fits into what I’m trying to put together.
“…Workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With the HoloLens, Firstline Workers conduct their usual day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical for control, along with gesture, according to the unique needs of each situation.
Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way.
What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.
2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket, too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants, making our lives even easier.
DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.
Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.
The public preview bringing the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.
Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.
Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.
Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.
There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.
But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.
New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.
The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.
In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.
This article will set out to determine the answers to the following questions:
What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
How is the healthcare market implementing these AI applications?
In the test, the Amazon technology incorrectly matched 28 of the 535 members of Congress with people who had been arrested — about a 5 percent error rate among legislators.
The test disproportionately misidentified African-American and Latino members of Congress as the people in mug shots.
“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.
On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”
In this report, we set out to capture a snapshot of the exponential progress in AI with a focus on developments in the past 12 months. Consider this report as a compilation of the most interesting things we’ve seen that seeks to trigger informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Talent: Supply, demand and concentration of talent working in the field.
Industry: Large platforms, financings and areas of application for AI-driven innovation today and tomorrow.
Politics: Public opinion of AI, economic implications and the emerging geopolitics of AI.
It is this noiseless sound, though, that says a lot about how machines function.
Helsinki-based Noiseless Acoustics and Amsterdam-based OneWatt are relying on artificial intelligence (AI) to better understand the sound patterns of troubled machines. Through AI they are enabling faster and easier problem detection.
Making sound visible even when it can’t be heard. With the aid of non-invasive sensors, machine learning algorithms, and predictive maintenance solutions, failing components can be recognized at an early stage before they become a major issue.
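The approach these companies describe boils down to comparing each new recording’s frequency content against a known-healthy baseline and flagging drift. A minimal sketch of that idea follows, with synthetic signals and an illustrative scoring rule rather than either company’s actual pipeline.

```python
# Minimal sketch of acoustic anomaly detection for predictive maintenance.
# Synthetic signals and the scoring rule are illustrative only -- not
# Noiseless Acoustics' or OneWatt's actual pipeline.
import numpy as np

def spectral_features(signal):
    """Normalized magnitude spectrum of a mono recording."""
    return np.abs(np.fft.rfft(signal)) / len(signal)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 44100)
tone = np.sin(2 * np.pi * 120 * t)  # the machine's normal hum
baseline = spectral_features(tone + 0.05 * rng.standard_normal(t.size))

def anomaly_score(recording):
    """Distance of a recording's spectrum from the healthy baseline."""
    return np.linalg.norm(spectral_features(recording) - baseline)

healthy = tone + 0.05 * rng.standard_normal(t.size)
failing = healthy + 0.5 * np.sin(2 * np.pi * 3200 * t)  # e.g. a worn bearing's whine

print(anomaly_score(healthy))  # small: only the noise differs
print(anomaly_score(failing))  # larger: a new spectral peak stands out
```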
A number of higher education institutions in China have deployed biometric solutions for access and payments in recent months, and adding to the list is Peking University. The university has now installed facial recognition readers at perimeter access gates to control access to its Beijing campus.
As reported by the South China Morning Post, anyone attempting to enter through the southwestern gate of the university will no longer have to provide a student ID card. Starting this month, students will present their faces to a camera as part of a trial run of the system ahead of full-scale deployment.
From DSC: I’m not sure I like this one at all — or the direction it’s heading.
How can we get smarter about machine learning?
As I said earlier, we’ve reached an important crossroads. Will we use new technologies to improve life for everyone, or to fuel the agendas of powerful people and organizations?
I certainly hope it’s the former. Few of us will run for president or lead a social media empire, but we can all help to move the needle.
Consume information with a critical eye.
Most people won’t stop using Facebook, Google, or social media platforms, so proceed with a healthy dose of skepticism. Remember that the internet can never be objective. Ask questions and come to your own conclusions.
Get your headlines from professional journalists.
Seek credible outlets for news about local, national and world events. I rely on the New York Times and the Wall Street Journal. You can pick your own sources, but don’t trust that the “article” your Aunt Marge just posted on Facebook is legit.
In August, the U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.
Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.
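Strip away the marketing and the core verification step is simple: reduce each face to a numeric embedding (the thousands of data points described above) and measure the distance between embeddings. Here is a hedged sketch of just that comparison step, with random vectors standing in for a real embedding model and an invented threshold.

```python
# Sketch of the verification step behind faceprints: compare numeric face
# embeddings by cosine similarity. Random vectors stand in for a real
# embedding model's output; the threshold is invented for illustration.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
enrolled = rng.standard_normal(128)                     # stored faceprint
same_face = enrolled + 0.1 * rng.standard_normal(128)   # new photo, same person
other_face = rng.standard_normal(128)                   # someone else

THRESHOLD = 0.8  # real systems tune this against false match/non-match rates
print(cosine_similarity(enrolled, same_face) > THRESHOLD)   # True
print(cosine_similarity(enrolled, other_face) > THRESHOLD)  # False
```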
Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.
It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend — faceprints — and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.
Near-Futures Scenarios (2018 – 2028):
Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.
Pragmatic: As faceprint adoption grows, legal challenges mount.
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forth against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which ensures companies receive written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.
Catastrophic: Faceprints are used for widespread surveillance and authoritative control.
How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.
The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.
This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.
…
In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.
“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.
We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.
Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse on any language or subject.
A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.
What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.
The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
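One widely used computer-vision cue for blinking is the eye aspect ratio (EAR): the ratio of the eye’s landmark heights to its width, which drops sharply when the eye closes. The sketch below shows that check, assuming six (x, y) eye landmarks per frame from any face-landmark detector; the frame values are made up, and this is the general technique rather than the SUNY team’s exact code.

```python
# Sketch of blink detection via the eye aspect ratio (EAR) -- the kind of
# physiological cue missing from image-trained fake videos. Assumes six
# (x, y) eye landmarks per frame from any face-landmark detector.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of 6 landmarks ordered corner, top, top, corner, bottom, bottom."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed=0.2, min_frames=2):
    """A blink = EAR dipping below threshold for a few consecutive frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Real footage shows periodic dips in EAR; many generated videos show none.
print(count_blinks([0.31, 0.30, 0.12, 0.10, 0.29, 0.30, 0.31]))  # 1
```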
Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.
Below are four key categories of pragmatic AI and ways they are being applied today.
1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots
Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program. “The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.
…
Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.
HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.
At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup Seek is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone — no coding expertise required — to create their own AR experiences and publish them for the world to see. With mobile AR now more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.
To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. As an example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object and then publish it to Seek. They will then be able to share it with their friends through popular social networks or text. A brand could additionally upload a 3D model of their product and publish it to Seek, providing an experience for their customers to easily view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months to constantly improve the complexity and types of experiences possible to create within the platform.
Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.
The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools like Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.
With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.
That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”
To fully realize the potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF as well as media companies and the creative community to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.
There are a number of professionals who would find the ability to quickly and easily create floor plans to be extremely useful. Estate agents, interior designers and event organisers would all no doubt find such a capability to be extremely valuable. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.
The new VR Maker feature utilises Apple’s ARKit toolset to recognise spaces, such as walls and floors, and can provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.
I’ve interviewed nine investors who have provided their insights on where the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space — and what tips they have for startups.
Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet sitting on the heads of service workers, and RealWear, a company at the forefront on developing these headsets, thinks it’s on the edge of something big.
…
VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures to learn — it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents — like schematics — you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.
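A fixed command vocabulary with no hotword is conceptually just a lookup from recognized phrases to device actions. Here is a toy sketch of that dispatch pattern — the command set and handlers are invented for illustration, not RealWear’s firmware.

```python
# Toy sketch of hotword-free command dispatch: the recognizer listens
# continuously but acts only on known phrases. Commands and handlers
# are invented for illustration, not RealWear's actual firmware.
def show_files():
    print("Opening downloaded files...")

def navigate_home():
    print("Returning to home screen...")

COMMANDS = {
    "show my files": show_files,
    "navigate home": navigate_home,
    "zoom in":  lambda: print("Zooming in..."),
    "zoom out": lambda: print("Zooming out..."),
}

def on_speech(utterance):
    """Ignore everything except exact known commands."""
    action = COMMANDS.get(utterance.strip().lower())
    if action:
        action()

on_speech("Show my files")  # Opening downloaded files...
on_speech("nice weather")   # ignored -- not in the command set
```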
David Scowsill‘s experience in the aviation industry spans over 30 years. He has worked for British Airways, American Airlines, Easy Jet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.
These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.
London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.
Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.
A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.
Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its Google Expeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch, the functionality is available to all.
Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo Da Vinci’s inventions and ones that let you interact with the human skeletal system.
At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in location-based VR (LBVR) attractions. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1st), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about. With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so they are shifting their attention accordingly to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.
Below are some excerpted slides from her presentation…
Also see:
20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
Excerpt:
Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons on economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends, indeed we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.
“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.
Google’s virtual assistant can now make phone calls on your behalf to schedule appointments, make reservations in restaurants and get holiday hours.
The robotic assistant uses a very natural speech pattern that includes hesitations and affirmations such as “er” and “mmm-hmm” so that it is extremely difficult to distinguish from an actual human phone call.
The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
About a dozen Google employees reportedly left the company over its insistence on developing AI for the US military through a program called Project Maven. Meanwhile 4,000 others signed a petition demanding the company stop.
It looks like there’s some internal confusion over whether the company’s “Don’t Be Evil” motto covers making machine learning systems to aid warfare.
For months, a growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says. On Friday, employees communicating through an internal mailing list discussed refusing to interview job candidates in order to slow the project’s progress.
Other tech giants have recently secured high-profile contracts to build technology for defense, military, and intelligence agencies. In March, Amazon expanded its newly launched “Secret Region” cloud services supporting top-secret work for the Department of Defense. The same week that news broke of the Google resignations, Bloomberg reported that Microsoft locked down a deal with intelligence agencies. But there’s little sign of the same kind of rebellion among Amazon and Microsoft workers.
SEATTLE (AP) — The American Civil Liberties Union and other privacy advocates are asking Amazon to stop marketing a powerful facial recognition tool to police, saying law enforcement agencies could use the technology to “easily build a system to automate the identification and tracking of anyone.”
The tool, called Rekognition, is already being used by at least one agency – the Washington County Sheriff’s Office in Oregon – to check photographs of unidentified suspects against a database of mug shots from the county jail, which is a common use of such technology around the country.
From DSC: Google’s C-Suite — as well as the C-Suites at Microsoft, Amazon, and other companies — needs to be very careful these days, as they could end up losing the support/patronage of a lot of people — including more of their own employees. It’s not an easy task to know how best to build and use technologies in order to make the world a better place…to create a dream vs. a nightmare for our future. But just because we can build something, doesn’t mean we should.
What is conversational commerce? Why is it such a big opportunity? How does it work? What does the future look like? How can I get started? These are the questions I’m going to answer for you right now.
…
The guide covers:
An introduction to conversational commerce.
Why conversational commerce is such a big opportunity.
Complete breakdown of how conversational commerce works.
Extensive examples of conversational commerce using chatbots and voicebots.
How artificial intelligence impacts conversational commerce.
What the future of conversational commerce will look like.
Definition: Conversational commerce is an automated technology, powered by rules and sometimes artificial intelligence, that enables online shoppers and brands to interact with one another via chat and voice interfaces.
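To make the definition concrete, the simplest conversational commerce bots are purely rules-based: match the shopper’s message against known intents, answer, and hand off when nothing matches. A minimal sketch follows — the intents and replies are invented for illustration.

```python
# Minimal rules-based conversational commerce bot, per the definition
# above: keyword-matched intents with a human handoff as the fallback.
# Intents and replies are invented for illustration.
RULES = {
    "order status": "Your order shipped yesterday and arrives Friday.",
    "return": "You can start a return on our returns page within 30 days.",
    "hours": "We're open 9am-6pm, Monday through Saturday.",
}

def reply(message):
    msg = message.lower()
    for keyword, answer in RULES.items():
        if keyword in msg:
            return answer
    return "Let me connect you with a specialist."  # fallback / handoff

print(reply("What's my order status?"))  # matched intent
print(reply("Do you price match?"))      # falls through to handoff
```

Artificial intelligence enters when the keyword match is replaced with an intent classifier trained on real shopper utterances, which is what lets these systems handle phrasings they have never seen.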
Artificial intelligence (AI) stands out as a transformational technology of our digital age—and its practical application throughout the economy is growing apace. For this briefing, Notes from the AI frontier: Insights from hundreds of use cases (PDF–446KB), we mapped both traditional analytics and newer “deep learning” techniques and the problems they can solve to more than 400 specific use cases in companies and organizations. Drawing on McKinsey Global Institute research and the applied experience with AI of McKinsey Analytics, we assess both the practical applications and the economic potential of advanced AI techniques across industries and business functions. Our findings highlight the substantial potential of applying deep learning techniques to use cases across the economy, but we also see some continuing limitations and obstacles—along with future opportunities as the technologies continue their advance. Ultimately, the value of AI is not to be found in the models themselves, but in companies’ abilities to harness them.
It is important to highlight that, even as we see economic potential in the use of AI techniques, the use of data must always take into account concerns including data security, privacy, and potential issues of bias.
AI for Good — from re-work.co by Ali Shah, Head of Emerging Technology and Strategic Direction – BBC
Excerpt:
What AI for good is really trying to ask is how we might develop and apply AI so that it makes a positive difference to society. Since the material question is about the change in society we would like to see, then we must first define the change we are hoping for before we can judge how AI might help. There are many areas of society that we might choose to consider, but I will focus on two interrelated issues.
Microsoft just demonstrated a meeting room of the future at the company’s Build developer conference.
…
It all starts with a 360-degree camera and microphone array that can detect anyone in a meeting room, greet them, and even transcribe exactly what they say in a meeting regardless of language.
…
Microsoft takes the meeting room scenario even further, though. The company is using its artificial intelligence tools to then act on what meeting participants say.
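Mechanically, the “act on what participants say” step comes down to scanning the live transcript for commitment phrases and turning matches into tasks. Here is a rough sketch of that idea — the trigger patterns and notify() stub are invented, not Microsoft’s implementation.

```python
# Rough sketch of commitment detection over a meeting transcript, in the
# spirit of Microsoft's demo. The patterns and notify() stub are invented
# for illustration, not Microsoft's implementation.
import re

COMMITMENT = re.compile(r"\bI(?:'ll| will) (?:follow up|send|schedule|review)\b", re.I)

def notify(person, task):
    print(f"Teams reminder for {person}: {task}")

def scan(transcript):
    """transcript: (speaker, line) pairs from the transcription service."""
    for speaker, line in transcript:
        if COMMITMENT.search(line):
            notify(speaker, line)

scan([
    ("Dana", "I'll follow up with you next week."),
    ("Lee", "Great, thanks everyone."),
])
# Teams reminder for Dana: I'll follow up with you next week.
```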
From DSC: Whoa! Many things to think about here. Consider the possibilities for global/blended/online-based learning (including MOOCs) with technologies associated with translation, transcription, and identification.
Technology has conditioned workers to expect quick and easy experiences — from Google searches to help from voice assistants — so they can get the answers they need and get back to work. While the concept of “on-demand” learning is not new, it’s been historically tough to deliver, and though most learning and development departments have linear e-learning modules or traditional classroom experiences, today’s learners are seeking more performance-adjacent, “point-of-need” models that fit into their busy, fast-paced work environments.
Enter emerging technologies. Artificial intelligence, voice interfaces and augmented reality, when applied correctly, have the potential to radically change the nature of how we learn at work. What’s more, these technologies are emerging at a consumer-level, meaning HR’s lift in implementing them into L&D may not be substantial. Consider the technologies we already use regularly — voice assistants like Alexa, Siri and Google Assistant may be available in 55 percent of homes by 2022, providing instant, seamless access to information we need on the spot. While asking a home assistant for the weather, the best time to leave the house to beat traffic or what movies are playing at a local theater might not seem to have much application in the workplace, this nonlinear, point-of-need interaction is already playing out across learning platforms.
As computer algorithms become more advanced, artificial intelligence (AI) increasingly has grown prominent in the workplace. Top news organizations now use AI for a variety of newsroom tasks.
…
But current AI systems largely are still dependent on humans to function correctly, and the most pressing concern is understanding how to correctly operate these systems as they continue to thrive in a variety of media-related industries.
…
So, while [Machine Learning] systems soon will become ubiquitous in many professions, they won’t replace the professionals working in those fields for some time — rather, they will become an advanced tool that will aid in decision making. This is not to say that AI will never endanger human jobs. Automation always will find a way.
From DSC: While I don’t find this article to be exemplary, I post this one mainly to encourage innovative thinking about how we might use some of these technologies in our future learning ecosystems.
From DSC: Along these lines, will faculty use their voices to control their room setups (i.e., the projection, shades, room lighting, what’s shown on the LMS, etc.)?
Or will machine-to-machine communications, the Internet of Things, sensors, mobile/cloud-based apps, and the like take care of those items automatically when a faculty member walks into the room?
From DSC:
Check out the two items below regarding the use of voice with virtual assistants: one involves healthcare and the other involves education (Canvas).
The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.
One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.
Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.
“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.
Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.
“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.
From DSC:
I see the piece about Canvas and Alexa as a great example of where a piece of our future learning ecosystems is heading — in fact, it’s been a piece of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of this kind of functionality in the future.