Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. It will be in more places than your home and your pocket, too. Companies such as Kia and Hyundai are planning to include AI assistants in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants, making our lives even easier.

 

 

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

 

 

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview bringing the most popular AI assistant on PCs together with the smart speaker with the largest U.S. market share will be available to most people today but will be rolled out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

 

 

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

 

 

‘The Beginning of a Wave’: A.I. Tiptoes Into the Workplace — from nytimes.com by Steve Lohr

Excerpt:

There is no shortage of predictions about how artificial intelligence is going to reshape where, how and if people work in the future.

But the grand work-changing projects of A.I., like self-driving cars and humanoid robots, are not yet commercial products. A more humble version of the technology, instead, is making its presence felt in a less glamorous place: the back office.

New software is automating mundane office tasks in operations like accounting, billing, payments and customer service. The programs can scan documents, enter numbers into spreadsheets, check the accuracy of customer records and make payments with a few automated computer keystrokes.

The technology is still in its infancy, but it will get better, learning as it goes. So far, often in pilot projects focused on menial tasks, artificial intelligence is freeing workers from drudgery far more often than it is eliminating jobs.
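The article doesn’t name specific software, but the flavor of the back-office checks it describes — scanning records, verifying accuracy, totaling payments — can be sketched in a few lines. The column names and validation rules below are invented for illustration:

```python
import csv
import io

def check_customer_records(csv_text):
    """Toy version of the back-office checks described above: scan
    records, flag rows whose fields fail simple accuracy rules, and
    total the payments that pass. Real systems would map these fields
    from scanned documents rather than a hand-written CSV."""
    flagged, total = [], 0.0
    for row in csv.DictReader(io.StringIO(csv_text)):
        try:
            amount = float(row["amount"])  # accuracy check: amount must be numeric
        except ValueError:
            flagged.append(row["customer_id"])
            continue
        if not row["customer_id"].strip():  # accuracy check: id must be present
            flagged.append("<missing id>")
            continue
        total += amount
    return flagged, total

records = "customer_id,amount\nC1,120.50\nC2,abc\nC3,79.50\n"
flagged, total = check_customer_records(records)
```

Here `C2` gets flagged for a non-numeric amount while the clean rows are totaled — the kind of drudgery the article says is being handed to software.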

 

 

AI for Virtual Medical Assistants – 4 Current Applications — from techemergence.com by Kumba Sennaar

Excerpt:

In an effort to reduce the administrative burden of medical transcription and clinical documentation, researchers are developing AI-driven virtual assistants for the healthcare industry.

This article will set out to determine the answers to the following questions:

  • What types of AI applications are emerging to improve management of administrative tasks, such as logging medical information and appointment notes, in the medical environment?
  • How is the healthcare market implementing these AI applications?

 

Amazon’s Facial Recognition Wrongly Identifies 28 Lawmakers, A.C.L.U. Says — from nytimes.com by Natasha Singer

Excerpt:

In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators.

The test disproportionally misidentified African-American and Latino members of Congress as the people in mug shots.

“This test confirms that facial recognition is flawed, biased and dangerous,” said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California.

On Thursday afternoon, three of the misidentified legislators — Senator Edward J. Markey of Massachusetts, Representative Luis V. Gutiérrez of Illinois and Representative Mark DeSaulnier of California, all Democrats — followed up with a letter to Jeff Bezos, the chief executive of Amazon, saying there are “serious questions regarding whether Amazon should be selling its technology to law enforcement at this time.”

 

Back from January:

 

 

 

Responsibility & AI: ‘We all have a role when it comes to shaping the future’ — from re-work.co by Fiona McEvoy

Excerpt:

As we slowly begin to delegate tasks that have until now been the sole purview of human judgment, there is understandable trepidation amongst some factions. Will creators build artificially intelligent machines that act in accordance with our core human values? Do they know what these moral imperatives are and when they are relevant? Are makers thoroughly stress-testing deep learning systems to ensure ethical decision-making? Are they trying to understand how AI can challenge key principles, like dignity and respect?

All the time we are creating new dependencies, and placing increasing amounts of faith in the engineers, programmers and designers responsible for these systems and platforms.

For reasons that are somewhat understandable, at present much of this tech ethics talk happens behind closed doors, and typically only engages a handful of industry and academic voices. Currently, these elite figures are the only participants in a dialogue that will determine all of our futures. At least in part, I started YouTheData.com because I wanted to bring “ivory tower” discussions down to the level of the engaged consumer, and be part of efforts to democratize this particular consultation process. As a former campaigner, I place a lot of value in public awareness and scrutiny.

To be clear, the message I wish to convey is not a criticism of the worthy academic and advisory work being done in this field (indeed, I have some small hand in this myself). It’s about acknowledging that engineers, technologists – and now ethicists, philosophers and others – still ultimately need public assent and a level of consumer “buy in” that is only really possible when complex ideas are made more accessible.

 

 

Digital Surgery’s AI platform guides surgical teams through complex procedures — from venturebeat.com by Kyle Wiggers

Excerpt:

Digital Surgery, a health tech startup based in London, today launched what it’s calling the world’s first dynamic artificial intelligence (AI) system designed for the operating room. The reference tool helps support surgical teams through complex medical procedures — cofounder and former plastic surgeon Jean Nehme described it as a “Google Maps” for surgery.

“What we’ve done is applied artificial intelligence … to procedures … created with surgeons globally,” he told VentureBeat in a phone interview. “We’re leveraging data with machine learning to build a [predictive] system.”

 

 

Why business leaders need to embrace artificial intelligence — from thriveglobal.com by Howard Yu
How companies should work with AI—not against it.

 

 

 

 

Computing in the Camera — from blog.torch3d.com by Paul Reynolds
Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.

One of the points Allison [Wood, CEO, Camera IQ] made repeatedly on that call (and in this wonderful blog post from the same time period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it. Simple, obvious, straightforward, but not earth-shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound or prophetic that statement turned out to be.

 

“[T]he camera will bring the internet and the real world into a single time and space.”

— Allison Wood, CEO, Camera IQ

 

 

The Camera As Platform — from shift.newco.co by Allison Wood
When the operating system moves to the viewfinder, the world will literally change

“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”

– Jacob Mullins, Shasta Ventures

 

 

 

The State Of The ARt At AWE 18 — from forbes.com by Charlie Fink

Excerpt:

The bigger story, however, is how fast the enterprise segment is growing as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Use cases abounded.

After traveling the country and most recently to Europe, I’ve now experienced almost every major VR/AR/MR/XR related conference out there. AWE’s exhibit area was by far the largest display of VR and AR companies to date (with the exception of CES).

 

AR is being used to identify features and parts within cars

 

 

 

 

Student Learning and Virtual Reality: The Embodied Experience — from er.educause.edu by Jaime Hannans, Jill Leafstedt and Talya Drescher

Excerpts:

Specifically, we explored the potential for how virtual reality can help create a more empathetic nurse, which, we hypothesize, will lead to increased development of nursing students’ knowledge, skills, and attitudes. We aim to integrate these virtual experiences into early program coursework, with the intent of changing nursing behavior by providing a deeper understanding of the patient’s perspective during clinical interactions.

In addition to these compelling student reflections and the nearly immediate change in reporting practice, survey findings show that students unanimously felt that this type of patient-perspective VR experience should be integrated and become a staple of the nursing curriculum. Seeing, hearing, and feeling these moments results in significant and memorable learning experiences compared to traditional classroom learning alone. The potential that this type of immersive experience can have in the field of nursing and beyond is only limited by the imagination and creation of other virtual experiences to explore. We look forward to continued exploration of the impact of VR on student learning and to establishing ongoing partnerships with developers.

 

Also see:

 

 

 

Computers that never forget a face — from Future Today Institute

Excerpts:

In August, the U.S. Customs and Border Protection will roll out new technology that will scan the faces of drivers as they enter and leave the United States. For years, accomplishing that kind of surveillance through a car windshield has been difficult. But technology is quickly advancing. This system, activated by ambient light sensors, range finders and remote speedometers, uses smart cameras and AI-powered facial recognition technology to compare images in government files with people behind the wheel.

Biometric borders are just the beginning. Faceprints are quickly becoming our new fingerprints, and this technology is marching forward with haste. Faceprints are now so advanced that machine learning algorithms can recognize your unique musculatures and bone structures, capillary systems, and expressions using thousands of data points. All the features that make up a unique face are being scanned, captured and analyzed to accurately verify identities. New hairstyle? Plastic surgery? They don’t interfere with the technology’s accuracy.

Why you should care. Faceprints are already being used across China for secure payments. Soon, they will be used to customize and personalize your digital experiences. Our Future Today Institute modeling shows myriad near-future applications, including the ability to unlock your smart TV with your face. Retailers will use your face to personalize your in-store shopping experience. Auto manufacturers will start using faceprints to detect if drivers are under the influence of drugs or alcohol and prevent them from driving. It’s plausible that cars will soon detect if a driver is distracted and take the wheel using an auto-pilot feature. On a diet but live with others? Stash junk food in a drawer and program the lock to restrict your access. Faceprints will soon create opportunities for a wide range of sectors, including military, law enforcement, retail, manufacturing and security. But as with all technology, faceprints could lead to the loss of privacy and widespread surveillance.

It’s possible for both risk and opportunity to coexist. The point here is not alarmist hand-wringing, or pointless calls for cease-and-desist demands on the development and use of faceprint technology. Instead, it’s to acknowledge an important emerging trend––faceprints––and to think about the associated risks and opportunities for you and your organization well in advance. Approach biometric borders and faceprints with your (biometrically unique) eyes wide open.

Near-Futures Scenarios (2018 – 2028):

Optimistic: Faceprints make us safer, and they bring us back to physical offices and stores.

Pragmatic: As faceprint adoption grows, legal challenges mount. 
In April, a U.S. federal judge ruled that Facebook must confront a class-action lawsuit that alleges its faceprint technology violates Illinois state privacy laws. Last year, a U.S. federal judge allowed a class-action suit to go forward against Shutterfly, claiming the company violated the Illinois Biometric Information Privacy Act, which requires companies to obtain written releases before collecting biometric data, including faces. Companies and device manufacturers, who are early developers but late to analyzing legal outcomes, are challenged to balance consumer privacy with new security benefits.

Catastrophic: Faceprints are used for widespread surveillance and authoritarian control.

 

 

 

How AI is helping sports teams scout star players — from nbcnews.com by Edd Gent
Professional baseball, basketball and hockey are among the sports now using AI to supplement traditional coaching and scouting.

 

 

 

Preparing students for workplace of the future  — from educationdive.com by Shalina Chatlani

Excerpt:

The workplace of the future will be marked by unprecedentedly advanced technologies, as well as a focus on incorporating artificial intelligence to drive higher levels of production with fewer resources. Employers and education stakeholders, noting the reality of this trend, are turning a reflective eye toward current students and questioning whether they will be workforce ready in the years to come.

This has become a significant concern for higher education executives, who find their business models could be disrupted as they fail to meet workforce demands. A 2018 Gallup-Northeastern University survey shows that of 3,297 U.S. citizens interviewed, only 22% with a bachelor’s degree said their education left them “well” or “very well prepared” to use AI in their jobs.

In his book “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” Northeastern University President Joseph Aoun argued that for higher education to adapt to advanced technologies, it has to focus on lifelong learning, which he says prepares students for the future by fostering purposeful integration of technical literacies, such as coding and data literacy, with human literacies, such as creativity, ethics, cultural agility and entrepreneurship.

“When students combine these literacies with experiential components, they integrate their knowledge with real life settings, leading to deep learning,” Aoun told Forbes.

 

 

Amazon’s A.I. camera could help people with memory loss recognize old friends and family — from cnbc.com by Christina Farr

  • Amazon’s DeepLens is a smart camera that can recognize objects in front of it.
  • One software engineer, Sachin Solkhan, is trying to figure out how to use it to help people with memory loss.
  • Users would carry the camera to help them recognize people they know.

 

 

Microsoft acquired an AI startup that helps it take on Google Duplex — from qz.com by Dave Gershgorn

Excerpt:

We’re going to talk to our technology, and everyone else’s too. Google proved that earlier this month with a demonstration of artificial intelligence that can hop on the phone to book a restaurant reservation or appointment at the hair salon.

Now it’s just a matter of who can build that technology fastest. To reach that goal, Microsoft has acquired conversational AI startup Semantic Machines for an undisclosed amount. Founded in 2014, the startup’s goal was to build AI that can converse with humans through speech or text, with the ability to be trained to converse in any language or on any subject.

 

 

Researchers developed an AI to detect DeepFakes — from thenextweb.com by Tristan Greene

Excerpt:

A team of researchers from the State University of New York (SUNY) recently developed a method for detecting whether the people in a video are AI-generated. It looks like DeepFakes could meet its match.

What it means: Fear over whether computers will soon be able to generate videos that are indistinguishable from real footage may be much ado about nothing, at least with the currently available methods.

The SUNY team observed that the training method for creating AI that makes fake videos involves feeding it images – not video. This means that certain human physiological quirks – like breathing and blinking – don’t show up in computer-generated videos. So they decided to build an AI that uses computer vision to detect blinking in fake videos.
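The excerpt doesn’t give the researchers’ exact method, but a common way to spot blinks with computer vision is the eye aspect ratio (EAR) computed over facial landmarks. The threshold and frame counts below are illustrative assumptions, and a real pipeline would first extract the six eye landmarks per frame with a face detector:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio over six (x, y) landmarks ordered around the
    eye: the two vertical distances divided by twice the horizontal
    one. EAR drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least `min_frames` consecutive frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A clip whose EAR series never dips below the threshold would score zero blinks — the physiological quirk the SUNY team observed was missing from image-trained fakes.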

 

 

Bringing It Down To Earth: Four Ways Pragmatic AI Is Being Used Today — from forbes.com by Carlos Melendez

Excerpt:

Without even knowing it, we are interacting with pragmatic AI day in and day out. It is used in the automated chatbots that answer our calls and questions and the customer service rep that texts with us on a retail site, providing a better and faster customer experience.

Below are four key categories of pragmatic AI and ways they are being applied today.

1. Speech Recognition And Natural Language Processing (NLP)
2. Predictive Analytics
3. Image Recognition And Computer Vision
4. Self-Driving Cars And Robots

 

 

Billable Hour ‘Makes No Sense’ in an AI World — from biglawbusiness.com by Helen Gunnarsson

Excerpt:

Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program.

“The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.

 

 

Deep Learning Tool Tops Dermatologists in Melanoma Detection — from healthitanalytics.com
A deep learning tool achieved greater accuracy than dermatologists when detecting melanoma in dermoscopic images.

 

 

Apple’s plans to bring AI to your phone — from wired.com by Tom Simonite

Excerpt:

HomeCourt is built on tools announced by Federighi last summer, when he launched Apple’s bid to become a preferred playground for AI-curious developers. Known as Core ML, those tools help developers who’ve trained machine learning algorithms deploy them on Apple’s mobile devices and PCs.

At Apple’s Worldwide Developer Conference on Monday, Federighi revealed the next phase of his plan to enliven the app store with AI. It’s a tool called Create ML that’s something like a set of training wheels for building machine learning models in the first place. In a demo, training an image-recognition algorithm to distinguish different flavors of ice cream was as easy as dragging and dropping a folder containing a few dozen images and waiting a few seconds. In a session for developers, Apple engineers suggested Create ML could teach software to detect whether online comments are happy or angry, or predict the quality of wine from characteristics such as acidity and sugar content. Developers can use Create ML now but can’t ship apps using the technology until Apple’s latest operating systems arrive later this year.
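Create ML’s actual API isn’t shown in the article, so here is a language-agnostic stand-in for the happy-versus-angry comment example: a tiny naive Bayes text classifier. The class name and training sentences are invented; the point is the same train-on-labeled-examples, then-predict loop that Create ML wraps in a drag-and-drop interface.

```python
import math
from collections import Counter, defaultdict

class TinyTextClassifier:
    """Bare-bones multinomial naive Bayes over word counts — a sketch
    of the workflow, not Apple's implementation."""
    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-label word tallies
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_score = None, -math.inf
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior plus Laplace-smoothed log likelihoods
            score = math.log(self.label_counts[label] / total)
            counts = self.word_counts[label]
            denom = sum(counts.values()) + len(self.vocab)
            for word in text.lower().split():
                score += math.log((counts[word] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

clf = TinyTextClassifier().fit(
    ["great job love it", "so happy thanks", "this is awful", "angry and annoyed"],
    ["happy", "happy", "angry", "angry"])
```

With only four labeled comments, `clf.predict("love this great work")` already leans "happy" — the drag-a-folder simplicity Apple demonstrated is this loop with far better models underneath.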

 

 

 

Welcome to Law2020: Artificial Intelligence and the Legal Profession — from abovethelaw.com by David Lat and Brian Dalton
What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world?

Excerpt:

Artificial intelligence has been declared “[t]he most important general-purpose technology of our era.” It should come as no surprise to learn that AI is transforming the legal profession, just as it is changing so many other fields of endeavor.

What do AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world? Will AI automate the work of attorneys — or will it instead augment, helping lawyers to work more efficiently, effectively, and ethically?

 

 

 

 

How artificial intelligence is transforming the world — from brookings.edu by Darrell M. West and John R. Allen

Summary

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents

I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

 

 

Seven Artificial Intelligence Advances Expected This Year  — from forbes.com

Excerpt:

Artificial intelligence (AI) has had a variety of targeted uses in the past several years, including self-driving cars. Recently, California changed the law that required driverless cars to have a safety driver. Now that AI is getting better and able to work more independently, what’s next?

 

 

Google Cofounder Sergey Brin Warns of AI’s Dark Side — from wired.com by Tom Simonite

Excerpt (emphasis DSC):

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

 

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.

 

 

 

 

The 50 Best Augmented Reality Apps for iPhone, iPad & Android Devices — from next.reality.news by Tommy Palladino

Excerpt:

Complete Anatomy 2018 +Courses (iOS): Give your preschoolers a head start on their education! Okay, clearly this app is meant for more advanced learners. Compared to the average app, you’ll end up paying through the nose with in-app purchases, but it’s really a drop in the bucket compared to the student loans students will accumulate in college. Price: Free with in-app purchases ranging from $0.99 to $44.99.

SkyView (iOS & Android): If I can wax nostalgic for a bit, I recall one of the first mobile apps that wowed me being Google’s original Sky Map app. Now you can bring back that feeling with some augmented reality. With SkyView, you can point your phone to the sky and the app will tell you what constellations or other celestial bodies you are looking at. Price: $1.99, but there’s a free version for iOS and Android.

JigSpace (iOS): JigSpace is an app dedicated to showing users how things work (the human body, mechanical objects, etc.). And the app recently added how-to info for those who WonderHowTo do other things as well. JigSpace can now display its content in augmented reality as well, which is a brilliant application of immersive content to education. Price: Free.

NY Times (iOS & Android): The New York Times only recently adopted augmented reality as a means for covering the news, but already we’ve had the chance to see Olympic athletes and David Bowie’s freaky costumes up close. That’s a pretty good start! Price: Free with in-app purchases ranging from $9.99 to $129.99 for subscriptions.

BBC Civilisations (iOS & Android): Developed as a companion to the show of the same name, this app ends up holding its own as an AR app experience. Users can explore digital scans of ancient artifacts, learn more about their significance, and even interact with them. Sure, Indiana Jones would say this stuff belongs in a museum, but augmented reality lets you view them in your home as well. Price: Free.

SketchAR (iOS, Android, & Windows): A rare app that works on the dominant mobile platforms and HoloLens, SketchAR helps users learn how to draw. SketchAR scans your environment for your drawing surface and anchors the content there as you draw around it. As you can imagine, the app works best on HoloLens since it keeps users’ hands free to draw. Price: Free.

 

 

Sun Seeker (iOS & Android): This app displays the solar path, hour intervals, and more in augmented reality. While this becomes a unique way to teach students about the Earth’s orbit around the sun (and help refute silly flat-earthers), it can also be a useful tool for professionals. For instance, it can help photographers plan a photoshoot and see where sunlight will shine at certain times of the day. Price: $9.99.

Froggipedia (iOS): Dissecting a frog is basically a rite of passage for anyone who has graduated from primary school in the US within the past 50 years or so. Thanks to augmented reality, we can now save precious frog lives while still learning about their anatomy. The app enables users to dissect virtual frogs as if they are on the table in front of them, and without the stench of formaldehyde. Price: $3.99.

GeoGebra Augmented Reality (iOS): Who needs a graphing calculator when you can visualize equations in augmented reality? That’s what GeoGebra does. The app is invaluable for visualizing graphs. Price: Free.

 

 

Addendum:

 

 

 

 

From DSC:
Check out the two items below regarding the use of voice as it pertains to virtual assistants: one involves healthcare and the other involves education (Canvas).


1) Using Alexa to go get information from Canvas:

“Alexa Ask Canvas…”

Example questions as a student:

  • What grades am I getting in my courses?
  • What am I missing?

Example question as a teacher:

  • How many submissions do I need to grade?

See the section on asking Alexa questions…roughly between http://www.youtube.com/watch?v=e-30ixK63zE&t=38m18s through http://www.youtube.com/watch?v=e-30ixK63zE&t=46m42s
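As a rough sketch of what a skill like “Alexa, ask Canvas” does behind the scenes, the handler below routes Alexa’s request JSON to helpers. The intent names and the `fetch_grades` / `count_ungraded` helpers are hypothetical, not Instructure’s actual implementation; a real skill would call the Canvas REST API with the linked user’s credentials.

```python
def fetch_grades(user_id):
    # Stub standing in for a Canvas API call keyed by the Alexa user id.
    return "You have a B plus in Biology and an A minus in Chemistry."

def count_ungraded(user_id):
    # Stub standing in for a Canvas API call for a teacher's grading queue.
    return "You have 12 submissions left to grade."

def handle_canvas_request(event):
    """Route an Alexa request envelope (LaunchRequest / IntentRequest)
    to the right helper and wrap the answer as plain-text speech."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        speech = "Welcome to Canvas. Ask about your grades or missing work."
    elif request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        user_id = event["session"]["user"]["userId"]
        if intent == "GetGradesIntent":
            speech = fetch_grades(user_id)
        elif intent == "CountUngradedIntent":
            speech = count_ungraded(user_id)
        else:
            speech = "Sorry, I didn't catch that."
    else:
        speech = "Goodbye."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The student and teacher questions in the demo map to separate intents; Alexa does the speech recognition and hands the backend only the resolved intent name.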

 

 

 

 


 

2) Why voice assistants are gaining traction in healthcare — from samsungnext.com by Pragati Verma

Excerpt (emphasis DSC):

The majority of intelligent voice assistant platforms today are built around smart speakers, such as the Amazon Echo and Google Home. But that might change soon, as several specialized devices focused on the health market are slated to be released this year.

One example is ElliQ, an elder care assistant robot from Samsung NEXT portfolio company Intuition Robotics. Powered by AI cognitive technology, it encourages an active and engaged lifestyle. Aimed at older adults aging in place, it can recognize their activity level and suggest activities, while also making it easier to connect with loved ones.

Pillo is an example of another such device. It is a robot that combines machine learning, facial recognition, video conferencing, and automation to work as a personal health assistant. It can dispense vitamins and medication, answer health and wellness questions in a conversational manner, securely sync with a smartphone and wearables, and allow users to video conference with health care professionals.

“It is much more than a smart speaker. It is HIPAA compliant and it recognizes the user; acknowledges them and delivers care plans,” said Rogers, whose company created the voice interface for the platform.

Orbita is now working with toSense’s remote monitoring necklace to track vitals and cardiac fluids as a way to help physicians monitor patients remotely. Many more seem to be on their way.

“Be prepared for several more devices like these to hit the market soon,” Rogers predicted.


From DSC:

I see the piece about Canvas and Alexa as a great example of where our future learning ecosystems are heading; in fact, this has been part of my Learning from the Living [Class] Room vision for a while now. The use of voice recognition/NLP is only picking up steam; look for more of this kind of functionality in the future.

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Pros and Cons of Virtual Reality in the Classroom — from chronicle.com by Adam Evans

Excerpt:

Armed with a lifelong affinity for video games and a $6,000 faculty teaching grant, I have spent the past 15 months working on a pilot project to illustrate the value of using virtual reality in the classroom. My goal is to convince fellow faculty members and administrators at Transylvania University, where I teach business administration, that VR can offer today’s tech-savvy students exciting opportunities to solve problems in new ways.

When I set up in-office demos for peers and students, they said they could not believe how immersive the technology felt. Expecting just another digital video game, they stepped into a dress rehearsal of the original Broadway cast of Hamilton or found themselves competing in the Winter Olympics in Pyeongchang.

There are major differences between virtual and augmented reality. The latter, which is less expensive to produce and already more prolific, is created by adding a digital element to the real world, such as a hologram one can view through a smartphone. Popular examples of this are the Pokémon Go and Jurassic World Alive apps, which allow smartphone users to find virtual characters that appear in physical locations. Users are still aware of the real space around them.

In contrast, virtual reality places the user inside a digitized world for a fully immersive experience. It generally costs more to design and typically requires more-expensive equipment, such as a full headset.


10 very cool augmented reality apps (that aren’t design or shopping tools) — from androidpolice.com by Taylor Kerns

Excerpt:

Augmented reality is having a moment on Android. Thanks to ARCore, which now works on more than a dozen device models—Google says that’s more than 100 million individual devices—we’ve seen a ton of new applications that insert virtual objects into our real surroundings. A lot of them are shopping and interior design apps, which makes sense—AR’s ability to make items appear in your home is a great way to see what a couch looks like in your living room without actually lugging it in there. But AR can do so much more. Here are 10 augmented reality apps that are useful, fascinating, or just plain cool.


The Wild and Amazing World of Augmented Reality — from askatechteacher.com by Jacqui Murray

Excerpt:

10 Ways to Use AR in the Classroom
I collected the best ways to use AR in the classroom from colleagues and edtech websites (like Edutopia) to provide a good overview of the depth and breadth of education now being addressed with AR-infused projects:

  • Book Reviews: Students record themselves giving a brief review of a novel that they just finished, and then attach digital information to a book. Afterward, anyone can scan the cover of the book and instantly access the review.
  • Classroom tour: Make a class picture the trigger image for an augmented reality tour of the classroom
  • Faculty Photos: Display faculty photos where visitors can scan the image of an instructor and see it come to life with their background
  • Homework Mini-Lessons: Students scan homework to reveal information to help them solve a problem
  • Lab Safety: Put triggers around a science laboratory that students can scan to learn safety procedures
  • Parent Involvement: Record parents encouraging their child and attach a trigger image to the child’s desk
  • Requests: Trigger to a Google Form to request time with the teacher, librarian, or another professional
  • Sign Language Flashcards: Create flashcards that contain a video overlay showing how to sign a word or phrase
  • Word Walls: Students record themselves defining vocabulary words. Classmates scan them to get definitions and sentences using the word
  • Yearbooks: So many ways, just know AR will energize any yearbook

AR is the next great disruptive force in education. If your goal is to create lifelong learners inspired by knowledge, AR, in its infancy, holds the seeds for meeting that goal.
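Several of the ideas above rely on the same underlying mechanism: the app compares the camera frame against a library of stored trigger images and overlays whatever content is linked to the best match. A toy sketch of that lookup (pure Python, using a crude average-hash comparison on fake eight-pixel “images”; production apps use far more robust computer vision, and every name here is hypothetical):

```python
def average_hash(pixels):
    """Tiny perceptual hash: threshold each grayscale pixel against the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p >= mean else 0 for p in pixels)

def hamming(a, b):
    """Number of positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

class TriggerStore:
    """Maps trigger-image hashes to overlay content (a video URL, a review, ...)."""
    def __init__(self):
        self.entries = []  # list of (hash, content) pairs

    def add(self, pixels, content):
        self.entries.append((average_hash(pixels), content))

    def match(self, pixels, max_dist=2):
        """Return the content whose trigger image best matches, or None."""
        h = average_hash(pixels)
        best = min(self.entries, key=lambda e: hamming(e[0], h), default=None)
        if best is not None and hamming(best[0], h) <= max_dist:
            return best[1]
        return None

store = TriggerStore()
book_cover = [10, 200, 30, 190, 15, 220, 25, 180]   # fake "image" of a book cover
store.add(book_cover, "video: student review of the novel")

# A slightly noisy camera frame of the same cover still matches
frame = [12, 198, 28, 193, 14, 219, 27, 181]
print(store.match(frame))  # video: student review of the novel
```

The book-review idea in the list, for instance, is exactly this: the review video is the stored content, and the book cover is the trigger image that retrieves it.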

 

 

YouAR Out Of Stealth With AR Cloud Breakthrough — from forbes.com by Charlie Fink

Excerpt:

YouAR, of Portland, OR, is coming out of stealth with a product that addresses some of the most vexing problems in AR, including convergent cross-platform computer vision (real-time interaction between ARKit and ARCore devices), interactivity of multiple AR apps in the same location across devices, real-time scene mapping, geometric occlusion of digital objects, localization of devices beyond GPS (the AR Cloud), and the bundle drop of digital assets into remote locations. Together, this represents a heretofore unheard-of stack of AR and computer vision features, and could revolutionize the development of new apps.


12 Good Augmented Reality Apps to Use in Your Instruction — from educatorstechnology.com

Excerpt:

Augmented reality technologies are transforming the way we live, learn and interact with each other. They are creating limitless learning possibilities and are empowering learners with the required know-how to get immersed in meaningful learning experiences. We have already reviewed several educational AR tools and apps and have also shared this collection of excellent TED talks on the educational potential of AR technologies. Drawing on these resources together with the EdSurge list, we have prepared for you this updated collection of some of the best AR apps to use in your instruction. You may want to go through them and see which ones work for you.


eXtended Reality (XR): How AR, VR, and MR are Extending Learning Opportunities | Resource list from educause


From DSC:
What can higher ed learn from this? Eventually, people will seek alternatives if what’s being offered isn’t acceptable to them anymore.


 

The Disappearing Doctor: How Mega-Mergers Are Changing the Business of Medical Care — from nytimes.com by Reed Abelson and Julie Creswell
Big corporations — giant retailers and health insurance companies — are teaming up to become your doctor.

Excerpt:

Is the doctor in?

In this new medical age of urgent care centers and retail clinics, that’s not a simple question. Nor does it have a simple answer, as primary care doctors become increasingly scarce.

“You call the doctor’s office to book an appointment,” said Matt Feit, a 45-year-old screenwriter in Los Angeles who visited an urgent care center eight times last year. “They’re only open Monday through Friday from these hours to those hours, and, generally, they’re not the hours I’m free or I have to take time off from my job.

“I can go just about anytime to urgent care,” he continued, “and my co-pay is exactly the same as if I went to my primary doctor.”

That’s one reason big players like CVS Health, the drugstore chain, and most recently Walmart, the giant retailer, are eyeing deals with Aetna and Humana, respectively, to use their stores to deliver medical care.

People are flocking to retail clinics and urgent care centers in strip malls or shopping centers, where simple health needs can usually be tended to by health professionals like nurse practitioners or physician assistants much more cheaply than in a doctor’s office. Some 12,000 are already scattered across the country, according to Merchant Medicine, a consulting firm.


© 2018 | Daniel Christian