Three AI and machine learning predictions for 2019 — from forbes.com by Daniel Newman

Excerpt:

What could we potentially see next year? New and innovative uses for machine learning? Further evolution of human and machine interaction? The rise of AI assistants? Let’s dig deeper into AI and machine learning predictions for the coming months.

 

2019 will be a year of development for the AI assistant, showing us just how powerful and useful these tools are. AI assistants will also be in more places than your home and your pocket: companies such as Kia and Hyundai are planning to include them in their vehicles starting in 2019. Sign me up for a new car! I’m sure that Google, Apple, and Amazon will continue to make advancements to their AI assistants, making our lives even easier.

DeepMind AI matches health experts at spotting eye diseases — from engadget.com by Nick Summers

Excerpt:

DeepMind has successfully developed a system that can analyze retinal scans and spot symptoms of sight-threatening eye diseases. Today, the AI division — owned by Google’s parent company Alphabet — published “early results” of a research project with the UK’s Moorfields Eye Hospital. They show that the company’s algorithms can quickly examine optical coherence tomography (OCT) scans and make diagnoses with the same accuracy as human clinicians. In addition, the system can show its workings, allowing eye care professionals to scrutinize the final assessment.

Microsoft and Amazon launch Alexa-Cortana public preview for Echo speakers and Windows 10 PCs — from venturebeat.com by Khari Johnson

Excerpt:

Microsoft and Amazon will bring Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S. [on 8/15/18]. As part of a partnership between the Seattle-area tech giants, you can say “Hey Cortana, open Alexa” to Windows 10 PCs and “Alexa, open Cortana” to a range of Echo smart speakers.

The public preview, which brings together the most popular AI assistant on PCs and the smart speaker with the largest U.S. market share, will be available to most people today and will roll out to all users in the country over the course of the next week, a Microsoft spokesperson told VentureBeat in an email.

Each of the assistants brings unique features to the table. Cortana, for example, can schedule a meeting with Outlook, create location-based reminders, or draw on LinkedIn to tell you about people in your next meeting. And Alexa has more than 40,000 voice apps or skills made to tackle a broad range of use cases.

What Alexa can and cannot do on a PC — from venturebeat.com by Khari Johnson

Excerpt:

Whatever happened to the days of Alexa just being known as a black cylindrical speaker? Since the introduction of the first Echo in fall 2014, Amazon’s AI assistant has been embedded in a number of places, including car infotainment systems, Alexa smartphone apps, wireless headphones, Echo Show and Fire tablets, Fire TV Cube for TV control, the Echo Look with an AI-powered fashion assistant, and, in recent weeks, personal computers.

Select computers from HP, Acer, and others now make Alexa available to work seamlessly alongside Microsoft’s Cortana well ahead of the Alexa-Cortana partnership for Echo speakers and Windows 10 devices, a project that still has no launch date.

Can we design online learning platforms that feel more intimate than massive? — from edsurge.com by Amy Ahearn

Excerpt:

This presents a challenge and an opportunity: How can we design online learning environments that achieve scale and intimacy? How do we make digital platforms feel as inviting as well-designed physical classrooms?

The answer may be that we need to balance massiveness with miniaturization. If the first wave of MOOCs was about granting unprecedented numbers of students access to high-quality teaching and learning materials, Wave 2 needs to focus on creating a sense of intimacy within that massiveness.

We need to build platforms that look less like a cavernous stadium and more like a honeycomb. This means giving people small chambers of engagement where they can interact with smaller, more manageable, yet still diverse groups. We can’t meaningfully listen to the deafening roar of the internet. But we can learn from a collection of people with perspectives different from ours.

What will it take to get MOOC platforms to begin to offer learning spaces that feel more inviting and intimate? Perhaps there’s a new role that needs to emerge in the online learning ecosystem: a “learning architect” who sits between the engineers and the instructional designers.

Computing in the Camera — from blog.torch3d.com by Paul Reynolds
Mobile AR, with its ubiquitous camera, is set to transform what and how human experience designers create.

One of the points Allison [Wood, CEO, Camera IQ] made repeatedly on that call (and in this wonderful blog post from the same time period) was that the camera is going to be at the center of computing going forward, an indispensable element. Spatial computing could not exist without it. Simple, obvious, straightforward, but not earth-shaking. We all heard what she had to say, but I don’t think any of us really understood just how profound or prophetic that statement turned out to be.

 

“[T]he camera will bring the internet and the real world into a single time and space.”

— Allison Wood, CEO, Camera IQ

The Camera As Platform — from shift.newco.co by Allison Wood
When the operating system moves to the viewfinder, the world will literally change

“Every day two billion people carry around an optical data input device — the smartphone Camera — connected to supercomputers and informed by massive amounts of data that can have nearly limitless context, position, recognition and direction to accomplish tasks.”

– Jacob Mullins, Shasta Ventures

The State Of The ARt At AWE 18 — from forbes.com by Charlie Fink

Excerpt:

The bigger story, however, is how fast the enterprise segment is growing as applications as straightforward as schematics on a head-mounted monocular microdisplay are transforming manufacturing, assembly, and warehousing. Use cases abounded.

After traveling the country and, most recently, to Europe, I’ve now experienced almost every major VR/AR/MR/XR-related conference out there. AWE’s exhibit area was by far the largest display of VR and AR companies to date (with the exception of CES).

 

AR is being used to identify features and parts within cars

Student Learning and Virtual Reality: The Embodied Experience — from er.educause.edu by Jaime Hannans, Jill Leafstedt and Talya Drescher

Excerpts:

Specifically, we explored the potential for how virtual reality can help create a more empathetic nurse, which, we hypothesize, will lead to increased development of nursing students’ knowledge, skills, and attitudes. We aim to integrate these virtual experiences into early program coursework, with the intent of changing nursing behavior by providing a deeper understanding of the patient’s perspective during clinical interactions.

In addition to these compelling student reflections and the nearly immediate change in reporting practice, survey findings show that students unanimously felt that this type of patient-perspective VR experience should be integrated and become a staple of the nursing curriculum. Seeing, hearing, and feeling these moments results in significant and memorable learning experiences compared to traditional classroom learning alone. The potential that this type of immersive experience can have in the field of nursing and beyond is only limited by the imagination and creation of other virtual experiences to explore. We look forward to continued exploration of the impact of VR on student learning and to establishing ongoing partnerships with developers.

 

Also see:

‘You can see what you can’t imagine’: Local students, professors helping make virtual reality a reality — from omaha.com and Creighton University

Excerpt:

“You can see what you can’t imagine,” said Aaron Herridge, a graduate student in Creighton’s medical physics master’s program and a RaD Lab intern who is helping develop the lab’s virtual reality program. “It’s an otherworldly experience,” Herridge says. “But that’s the great plus of virtual reality. It can take you places that you couldn’t possibly go in real life. And in physics, we always say that if you can’t visualize it, you can’t do the math. It’s going to be a huge educational leap.”

 

“We’re always looking for ways to help students get the real feeling for astronomy,” Gabel said. “Visualizing space from another planet, like Mars, or from Earth’s moon, is a unique experience that goes beyond pencil and paper or a two-dimensional photograph in a textbook.”

BAE created a guided step-by-step training solution for HoloLens to teach workers how to assemble a green energy bus battery.

From DSC:
How long before items that need some assembling come with such experiences/training-related resources?

VR and AR: The Ethical Challenges Ahead — from er.educause.edu by Emory Craig and Maya Georgieva
Immersive technologies will raise new ethical challenges, from issues of access, privacy, consent, and harassment to future scenarios we are only now beginning to imagine.

Excerpt:

As immersive technologies become ever more realistic with graphics, haptic feedback, and social interactions that closely align with our natural experience, we foresee the ethical debates intensifying. What happens when the boundaries between the virtual and physical world are blurred? Will VR be a tool for escapism, violence, and propaganda? Or will it be used for social good, to foster empathy, and as a powerful new medium for learning?

Google Researchers Have Developed an Augmented Reality Microscope for Detecting Cancer — from next.reality.news by Tommy Palladino

Excerpt:

Augmented reality might not be able to cure cancer (yet), but when combined with a machine learning algorithm, it can help doctors diagnose the disease. Researchers at Google have developed an augmented reality microscope (ARM) that takes real-time data from a neural network trained to detect cancerous cells and displays it in the field of view of the pathologist viewing the images.

Sherwin-Williams Uses Augmented Reality to Take the Guesswork Out of Paint Color Selection — from next.reality.news by Tommy Palladino

From DSC:
So regardless of what was being displayed on a given screen at the time, once a learner was invited to share information from their own device, a graphical layer would appear on the learner’s mobile device as well as on the screens themselves. (The images actually being projected would be pulled back into a muted, 25%-opacity background layer so that the code would visually “pop.”) That layer would tell each learner what code to enter in order to wirelessly share their content to a particular screen. This could be extra helpful when you have multiple screens in a room.

For folks at Microsoft: I could have said Mixed Reality here as well.

#ActiveLearning #AR #MR #IoT #AV #EdTech #M2M #MobileApps
#Sensors #Crestron #Extron #Projection #Epson #SharingContent #Wireless

From DSC:
This application looks to be very well done and thought out! Wow!

Check out the video entitled “Interactive Ink: Enables digital handwriting.” You may also wonder whether this could be a great medium/method of “writing things down” for better information processing in our minds, while also producing digital work for easier distribution and sharing!

Wow!  Talk about solid user experience design and interface design! Nicely done.

Below is an excerpt of the information provided by Bella Pietsch of anthonyBarnum Public Relations:

Imagine a world where users interact with their digital devices seamlessly, and don’t suffer from lag and delayed response time. I work with MyScript, a company whose Interactive Ink tech creates that world of seamless handwritten interactivity by combining the flexibility of pen and paper with the power and productivity of digital processing.

According to a recent forecast, the global handwriting recognition market is valued at a trillion-plus dollars and is expected to grow at an almost 16 percent compound annual growth rate by 2025. To add additional context, the new affordable iPad with stylus support was just released, allowing users to work with the $99 Apple Pencil, which was previously only supported by the iPad Pro.

Check out the demo of Interactive Ink using an Apple Pencil, Microsoft Surface Pen, Samsung S Pen or Google Pixelbook Pen here.

Interactive Ink’s proficiencies are the future of writing and equating. Developed by MyScript Labs, Interactive Ink is a form of digital ink technology that allows ink editing via simple gestures and provides device reflow flexibility. Interactive Ink relies on real-time predictive handwriting recognition, driven by artificial intelligence and neural network architectures.

Design Thinking: A Quick Overview — from interaction-design.org by Rikke Dam and Teo Siang

Excerpt:

To begin, let’s have a quick overview of the fundamental principles behind Design Thinking:

  • Design Thinking starts with empathy, a deep human focus, in order to gain insights which may reveal new and unexplored ways of seeing, and courses of action to follow in bringing about preferred situations for business and society.
  • It involves reframing the perceived problem or challenge at hand, and gaining perspectives, which allow a more holistic look at the path towards these preferred situations.
  • It encourages collaborative, multi-disciplinary teamwork to leverage the skills, personalities and thinking styles of many in order to solve multifaceted problems.
  • It initially employs divergent styles of thinking to explore as many possibilities as possible, deferring judgment and creating an open ideation space to allow the maximum number of ideas and points of view to surface.
  • It later employs convergent styles of thinking to isolate potential solution streams, combining and refining insights and more mature ideas that pave a path forward.
  • It engages in early exploration of selected ideas, rapidly modelling potential solutions to encourage learning while doing, and to gain additional insight into the viability of solutions before too much time or money has been spent.
  • It tests the prototypes that survive the earlier stages in order to remove any potential issues.
  • It iterates through the various stages, revisiting empathetic frames of mind and then redefining the challenge as new knowledge and insight are gained along the way.
  • It starts off chaotic and cloudy, then steamrolls towards points of clarity until a desirable, feasible, and viable solution emerges.

From DSC:
This post includes information about popular design thinking frameworks. I think it’s a helpful posting for those who have heard about design thinking but want to know more about it.

What is Design Thinking?
Design thinking is an iterative process in which we seek to understand the user, challenge assumptions we might have, and redefine problems in an attempt to identify alternative strategies and solutions that might not be instantly apparent with our initial level of understanding. As such, design thinking is most useful in tackling problems that are ill-defined or unknown.

Design thinking is extremely useful in tackling ill-defined or unknown problems—it reframes the problem in human-centric ways, allows the creation of many ideas in brainstorming sessions, and lets us adopt a hands-on approach in prototyping and testing. Design thinking also involves on-going experimentation: sketching, prototyping, testing, and trying out concepts and ideas. It involves five phases: Empathize, Define, Ideate, Prototype, and Test. The phases allow us to gain a deep understanding of users, critically examine the assumptions about the problem and define a concrete problem statement, generate ideas for tackling the problem, and then create prototypes for the ideas in order to test their effectiveness.

Design thinking is not about graphic design but rather about solving problems through the use of design. It is a critical skill for all professionals, not only designers. Understanding how to approach problems and apply design thinking enables everyone to maximize their contributions in the work environment and create incredible, memorable products for users.

Embracing Digital Tools of the Millennial Trade. — from virtuallyinspired.org

Excerpt:

Thus, millennials are well-acquainted with – if not highly dependent on – the digital tools they use in their personal and professional lives. Tools that empower them to connect and collaborate in a way that is immediate and efficient, interactive and self-directed. Which is why they expect technology-enhanced education to replicate this user experience in the virtual classroom. And when their expectations fall short or go unmet altogether, millennials are more likely to go in search of other alternatives.

From DSC:
There are several solid tools mentioned in this article, and I always appreciate the high-level of innovation arising from Susan Aldridge, Marci Powell, and the folks at virtuallyinspired.org.

After reading the article, the key considerations that come to my mind involve usability and advocating for the students’ perspective. That is, we need to approach these tools from the student’s/learner’s standpoint, i.e., from a usability and user experience standpoint. For example, seamless single sign-on for each of these tools would be a requirement for implementing them. Otherwise, learners would have to constantly log into a variety of systems and services. Not only is that process time-consuming, but learners would also need to keep track of additional passwords, and who doesn’t have enough of those to keep track of these days? (I realize there are tools for that, but even those tools require additional time to investigate, set up, and maintain.)

What’s needed, then, are plug-ins for the various CMSs/LMSs that make adding these tools a true plug-and-play proposition.
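To make the single sign-on idea concrete, here is a minimal sketch, in Python, of how an LMS plug-in might hand a learner off to an external tool without a second login: the LMS signs a short-lived launch token that the tool can verify. All names here are hypothetical, and real integrations would typically build on a standard such as LTI rather than rolling their own scheme.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret provisioned between the LMS and the external tool.
SHARED_SECRET = b"example-secret-do-not-use-in-production"

def make_launch_token(user_id: str, course_id: str) -> str:
    """LMS side: sign a short-lived launch payload to hand to an external tool."""
    payload = json.dumps({"user": user_id, "course": course_id,
                          "issued": int(time.time())}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_launch_token(token: str, max_age: int = 300) -> dict:
    """Tool side: check the signature and freshness, then trust the identity."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded.encode())
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    data = json.loads(payload)
    if time.time() - data["issued"] > max_age:
        raise ValueError("token expired")
    return data
```

With something like this in place, the learner clicks a tool inside the course shell and arrives already authenticated, with no extra password to remember.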

Virtual reality technology enters a Chinese courtroom — from supchina.com by Jiayun Feng

Excerpt:

The introduction of VR technology is part of a “courtroom evidence visualization system” developed by the local court. The system also includes a newly developed computer program that allows lawyers to present evidence with higher quality and efficiency, which will replace a traditional PowerPoint slideshow.

It is reported that the system will soon be implemented in courtrooms across the city of Beijing.

Watch Waymo’s Virtual-Reality View of the World — from spectrum.ieee.org by Philip Ross

From DSC:
This is mind blowing. Now I see why Nvidia’s products/services are so valuable.

Along these same lines, also see this clip and/or this article entitled, This is why AR and Autonomous Driving are the Future of Cars:

The Legal Hazards of Virtual Reality and Augmented Reality Apps — from spectrum.ieee.org by Tam Harbert
Liability and intellectual property issues are just two areas developers need to know about

Excerpt:

As virtual- and augmented-reality technologies mature, legal questions are emerging that could trip up VR and AR developers. One of the first lawyers to explore these questions is Robyn Chatwood, of the international law firm Dentons. “VR and AR are areas where the law is just not keeping up with [technology] developments,” she says. IEEE Spectrum contributing editor Tam Harbert talked with Chatwood about the legal challenges.

This VR Tool Could Make Kids A Lot Less Scared Of Medical Procedures — from fastcompany.com by Daniel Terdiman
The new app creates a personalized, explorable 3D model of a kid’s own body that makes it much easier for them to understand what’s going on inside.

Excerpt:

A new virtual reality app that’s designed to help kids suffering from conditions like Crohn’s disease understand their maladies immerses those children in a cartoon-like virtual reality tour through their body.

Called HealthVoyager, the tool, a collaboration between Boston Children’s Hospital and the health-tech company Klick Health, is being launched today at an event featuring former First Lady Michelle Obama.

A lot of kids are confused by doctors’ intricate explanations of complex procedures like a colonoscopy, and they, and their families, can feel much more engaged, and satisfied, if they really understand what’s going on. But that’s been hard to do in a way that really works and doesn’t get bogged down with a lot of meaningless jargon.

Augmented Reality in Education — from invisible.toys

 

Star Chart -- AR and astronomy

The state of virtual reality — from furthermore.equinox.com by Rachael Schultz
How the latest advancements are optimizing performance, recovery, and injury prevention

Excerpt:

Virtual reality is increasingly used to enhance everything from museum exhibits to fitness classes. Elite athletes are using VR goggles to refine their skills, sports rehabilitation clinics are incorporating it into recovery regimes, and others are using it to improve focus and memory.

Here, some of the most exciting things happening with virtual reality, as well as what’s to come.

Augmented Reality takes 3-D printing to next level — from rtoz.org

Excerpt:

Cornell researchers are taking 3-D printing and 3-D modeling to a new level by using augmented reality (AR) to allow designers to design in physical space while a robotic arm rapidly prints the work. To use the Robotic Modeling Assistant (RoMA), a designer wears an AR headset with hand controllers. As soon as a design feature is completed, the robotic arm prints the new feature.

From DSC:
How might the types of technologies being developed and used by Kazendi’s Holomeeting be used for building/enhancing learning spaces?

AR and Blockchain: A Match Made in The AR Cloud — from medium.com by Ori Inbar

Excerpt:

In my introduction to the AR Cloud I argued that in order to reach mass adoption, AR experiences need to persist in the real world across space, time, and devices.

To achieve that, we will need a persistent realtime spatial map of the world that enables sharing and collaboration of AR Experiences among many users.

And according to AR industry insiders, it’s poised to become:

“the most important software infrastructure in computing”

aka: The AR Cloud.
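To make the idea of a persistent, shared spatial map slightly more concrete, here is a toy sketch in Python: anchors that persist across users and devices, with a simple last-writer-wins merge. The class and field names are hypothetical; a real AR Cloud would add authentication, relocalization, and far more sophisticated conflict resolution.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A hypothetical persistent AR anchor: a pose in a shared world frame plus content."""
    anchor_id: str
    position: tuple        # (x, y, z) in shared world coordinates
    content: str           # payload the AR experience attaches at this spot
    updated_at: float = field(default_factory=time.time)

class SharedSpatialMap:
    """Toy model of an AR-cloud map: many devices read and write the same anchors."""

    def __init__(self):
        self._anchors: dict = {}

    def upsert(self, anchor: Anchor) -> None:
        # Last-writer-wins merge keyed on timestamp, so a stale update from
        # one device does not clobber newer state written by another.
        current = self._anchors.get(anchor.anchor_id)
        if current is None or anchor.updated_at >= current.updated_at:
            self._anchors[anchor.anchor_id] = anchor

    def nearby(self, position: tuple, radius: float) -> list:
        # Return the anchors within `radius` of the querying device,
        # i.e., the AR content that should persist into this user's view.
        def dist2(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b))
        return [a for a in self._anchors.values()
                if dist2(a.position, position) <= radius ** 2]
```

The point of the sketch is the sharing model: any device can write an anchor, and any other device standing in the same place later can query it back, which is exactly the persistence across space, time, and devices the article argues for.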

Scientists Are Turning Alexa into an Automated Lab Helper — from technologyreview.com by Jamie Condliffe
Amazon’s voice-activated assistant follows a rich tradition of researchers using consumer tech in unintended ways to further their work.

Excerpt:

Alexa, what’s the next step in my titration?

Probably not the first question you ask your smart assistant in the morning, but potentially the kind of query that scientists may soon be leveling at Amazon’s AI helper. Chemical & Engineering News reports that software developer James Rhodes—whose wife, DeLacy Rhodes, is a microbiologist—has created a skill for Alexa called Helix that lends a helping hand around the laboratory.

It makes sense. While most people might ask Alexa to check the news headlines, play music, or set a timer because our hands are a mess from cooking, scientists could look up melting points, pose simple calculations, or ask for an experimental procedure to be read aloud while their hands are gloved and in use.

For now, Helix is still a proof-of-concept. But you can sign up to try an early working version, and Rhodes has plans to extend its abilities…
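To give a feel for how a skill like Helix might route spoken requests, here is a minimal, hypothetical sketch in Python. It is not Helix’s actual code (real Alexa skills are built with Amazon’s ASK SDK and run as cloud endpoints), and the chemistry data below is illustrative only.

```python
# Illustrative lookup data for the sketch; a real skill would query a
# curated chemistry database rather than a hard-coded dictionary.
MELTING_POINTS_C = {
    "sodium chloride": 801,
    "glucose": 146,
}

PROCEDURES = {
    "titration": ["Rinse the burette", "Fill with titrant",
                  "Add indicator to the analyte", "Titrate to the endpoint"],
}

def handle_intent(intent: str, slots: dict) -> str:
    """Map a recognized intent plus its slot values to a spoken response."""
    if intent == "GetMeltingPoint":
        compound = slots.get("compound", "").lower()
        if compound in MELTING_POINTS_C:
            return (f"The melting point of {compound} is "
                    f"{MELTING_POINTS_C[compound]} degrees Celsius.")
        return f"Sorry, I don't have a melting point for {compound}."
    if intent == "NextStep":
        proc = PROCEDURES.get(slots.get("procedure", ""))
        step = int(slots.get("step", 0))
        if proc and step < len(proc):
            return f"Step {step + 1}: {proc[step]}."
        return "That procedure is complete."
    return "Sorry, I didn't understand that."
```

The appeal for the gloved-hands scenario is that everything above is hands-free: the voice platform handles speech recognition and slot filling, and the skill only has to map intents to answers.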

 

Also see:

Helix

© 2018 | Daniel Christian