The 10 Most Exciting Digital Health Stories of 2017 — from medicalfuturist.com by Dr. Bertalan Mesko A gene-edited human embryo. Self-driving trucks. Practical quantum computers. 2017 has been an exciting year for science, technology – and digital health! It’s that time of the year again when it’s worth looking back at the past months and listing the inventions, methods and milestone events in healthcare to get a clearer picture of what will shape medicine in the years to come.
Excerpt:
Medical chatbots and health assistants on the rise Chatbots, meaning A.I.-supported messaging apps or voice-controlled bots, are forecast to replace simple messaging apps soon. In healthcare, they could help resolve easily diagnosable health concerns or support patient management, e.g. general organizational issues. In recent months, several signs have pointed in the direction of more widespread use.
A few years after John Boyer began teaching world geography at Virginia Tech, a survey revealed that 58 percent of college-aged Americans could not locate Japan on a map. Sixty-nine percent could not find the United Kingdom.
Boyer raced ahead undaunted. He loved the scope and implications of his subject. “The great thing about geography is . . . everything happens somewhere,” he explains. “Geography is the somewhere.”
Boyer is now a senior instructor and researcher at Virginia Tech. He took over World Regions, an entry-level geography class, while he was working on a master’s degree nearly 20 years ago. The class then had 50 students. Now the course is offered each semester and a whopping 3,000 students take it in any given school year.
What has made it so popular? Innovative pedagogy, for starters. Boyer uses a “flipped syllabus” in which students’ final grades are based on the points they’ve earned—not lost—throughout the semester. His legendary assignments range from reviewing films to tweeting on behalf of world leaders (more on that below). Mostly, Boyer himself has made the class a rite of passage for undergraduates, who typically find him funny, passionate, and consummately engaging. Boyer even created a comic alter ego called the Plaid Avenger, who has narrated textbooks and podcasts but is now largely retired—though Boyer still sports his famous plaid jackets and drives a plaid Scion.
Given the disparity in knowledge levels as well as the disparity in what they like to do in terms of work, whether that be watching international film or writing papers, I wanted to increase the flexibility of what the students could do to achieve a grade in this class.
Tell us about the Twitter World Leaders.
You can choose to be a real world leader. Of course, they’re fake accounts and we make sure everyone knows you’re the fake Donald Trump or the fake Angela Merkel of Germany. Once you take on that role, you will tweet as the world leader for the entire semester, and you have to tweet two to three times a day. And it’s not silly stuff. What is the chancellor of Germany working on right now? What other world leaders is Angela Merkel meeting with? What’s going on in Germany or the EU?
We’ve all seen the commercials: “Alexa, is it going to rain today?” “Hey, Google, turn up the volume.” Consumers across the globe are finding increased utility in voice command technology in their homes. But dimming lights and reciting weather forecasts aren’t the only ways these devices are being put to work.
Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.
The devices are being used to help students with and without disabilities gain a new sense for digital fluency, find library materials more quickly and even promote events on college campuses to foster greater social connection.
Like many technologies, the emerging presence of voice command devices in classrooms and at universities is also raising concerns about student privacy and unnatural dependence on digital tools. Yet, many educators interviewed for this report said the rise of voice command technology in education is inevitable — and welcome.
…
“One example,” he said, “is how voice dictation helped a student with dysgraphia. Putting the pencil and paper in front of him, even typing on a keyboard, created difficulties for him. So, when he’s able to speak to the device and see his words on the screen, the connection becomes that much more real to him.”
The use of voice dictation has also been beneficial for students without disabilities, Miller added. Through voice recognition technology, students at Taos Academy Charter School are able to experience communication through a completely new medium.
From DSC: After reviewing the article below, I wondered: if we need to interact with content to learn it, how might mixed reality allow for new ways of interacting with such content? This is especially intriguing when we interact with that content with others as well (i.e., social learning).
Perhaps Mixed Reality (MR) will bring forth a major expansion of how we look at “blended learning” and “hybrid learning.”
Changing How We Perceive The World One Industry At A Time Part of the reason mixed reality has garnered this momentum within such a short span of time is that it promises to revolutionize how we perceive the world without necessarily altering our natural perspective. While VR/AR invites you into their somewhat complex worlds, mixed reality analyzes the surrounding real-world environment before projecting an enhanced and interactive overlay. It essentially “mixes” our reality with digitally generated graphical information.
…
All this, however, pales in comparison to the impact of mixed reality on the storytelling process. While present technologies deliver content in a one-directional manner, from storyteller to audience, mixed reality allows for delivery of content, then interaction between content, creator and other users. This mechanism cultivates fertile ground for increased contact between all participating entities, fostering the creation of shared experiences. Mixed reality also reinvents the storytelling process. By merging the storyline with reality, viewers are presented with an immersive experience that can be virtually indistinguishable from real life.
Mixed reality is without a doubt going to play a major role in shaping our realities in the near future, not just because of its numerous use cases but also because it is the flag bearer of all virtualized technologies. It combines VR, AR and other relevant technologies to deliver a potent cocktail of digital excellence.
Higher education should now be on notice: Anyone with an Internet connection can now file a complaint or civil lawsuit, not just students with disabilities. And though Section 508 was previously unclear as to the expectations for accessibility, the updated requirements add specific web standards to adhere to — specifically, the Web Content Accessibility Guidelines (WCAG) 2.0 level AA developed by the World Wide Web Consortium (W3C).
…
Although WCAG has been around since the late 1990s (WCAG 2.0 itself was published in 2008), it was developed through the W3C as a voluntary, self-regulating tool to create uniformity in web standards around the globe. It was understood to be a set of best practices but was not enforced by any regulating agency. The Section 508 refresh due in January 2018 changes this, as WCAG 2.0 Level AA has been adopted as the standard of expected accessibility. Thus, all organizations subject to Section 508, including colleges and universities, that create and publish digital content — web pages, documents, images, videos, audio — must ensure that they know and understand these standards.
…
Reacting to the Section 508 Refresh In a few months, the revised Section 508 standards become enforceable law. As stated, this should not be considered a threat or burden but rather an opportunity for institutions to check their present level of commitment and adherence to accessibility. In order to prepare for the update in standards, a number of proactive steps can easily be taken:
Contract a third-party expert partner to review institutional accessibility policies and practices and craft a long-term plan to ensure compliance.
Review all public-facing websites and electronic documents to ensure compliance with WCAG 2.0 Level AA standards.
Develop and publish a policy to state the level of commitment and adherence to Section 508 and WCAG 2.0 Level AA.
Create an accessibility training plan for all individuals responsible for creating and publishing electronic content.
Ensure all ICT contracts, RFPs, and purchases include provisions for accessibility.
Inform students of their rights related to accessibility, as well as where to address concerns internally. Then support the students with timely resolutions.
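To make one of these review steps concrete, here is a minimal Python sketch of a single, very common WCAG 2.0 Level AA audit check: flagging images that lack a text alternative (success criterion 1.1.1). The sample markup and file names are illustrative assumptions, and a real compliance review covers far more than this one criterion; this only shows how mechanical some of the checking can be.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that lack any alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # WCAG 2.0 SC 1.1.1 requires a text alternative;
            # decorative images may legitimately use alt="",
            # so we only flag images with no alt attribute at all.
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "(no src)"))

# Hypothetical page fragment: one compliant image, one not.
sample = '<p><img src="chart.png"><img src="logo.png" alt="University logo"></p>'
auditor = AltTextAuditor()
auditor.feed(sample)
print(auditor.missing_alt)  # → ['chart.png']
```

A crawler built around a parser like this can give an institution a quick first inventory of problem pages, though automated tools cannot judge whether an alt text that is present is actually meaningful; that still takes human review.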
As always, remember that the pursuit of accessibility demonstrates a spirit of inclusiveness that benefits everyone. Embracing the challenge to meet the needs of all students is a noble pursuit, but it’s not just an adoption of policy. It’s a creation of awareness, an awareness that fosters a healthy shift in culture. When this is the approach, the motivation to support all students drives every conversation, and the fear of legal repercussions becomes secondary. This should be the goal of every institution of learning.
Also see:
How to Make Accessibility Part of the Landscape — from insidehighered.com by Mark Lieberman A small institution in Vermont caters to students with disabilities by letting them choose the technology that suits their needs.
Excerpt:
Accessibility remains one of the key issues for digital learning professionals looking to catch up to the needs of the modern student. At last month’s Online Learning Consortium Accelerate conference, seemingly everyone in attendance hoped to come away with new insights into this thorny concern.
Landmark College in Vermont might offer some guidance. The private institution with approximately 450 students exclusively serves students with diagnosed learning disabilities, attention disorders or autism. Like all institutions, it’s still grappling with how best to serve students in the digital age, whether in the classroom or at a distance. Here’s a glimpse at the institution’s philosophy, courtesy of Manju Banerjee, Landmark’s vice president for educational research and innovation since 2011.
At North Carolina State University, Assistant Professor of Chemistry Denis Fourches uses technology to research the effectiveness of new drugs. He uses computer programs to model interactions between chemical compounds and biological targets to predict the effectiveness of the compound, narrowing the field of drug candidates for testing. Lately, he has been using a new program that allows the user to create 3D models of molecules for 3D printing, plus augmented and virtual reality applications.
RealityConvert converts molecular objects like proteins and drugs into high-quality 3D models. The models are generated in standard file formats that are compatible with most augmented and virtual reality programs, as well as 3D printers. The program is specifically designed for creating models of chemicals and small proteins.
Mozilla has launched its first ever augmented reality app for iOS. The company, best known for its Firefox browser, wants to create an avenue for developers to build augmented reality experiences using open web technologies, WebXR, and Apple’s ARKit framework.
This latest effort from Mozilla is called WebXR Viewer. It contains several sample AR programs, demonstrating its technology in the real world. One is a teapot, suspended in the air. Another contains holographic silhouettes, which you can place in your immediate vicinity. Should you be so inclined, you can also use it to view your own WebXR creations.
Airbnb announced today (Dec. 11) that it’s experimenting with augmented- and virtual-reality technologies to enhance customers’ travel experiences.
The company showed off some simple prototype ideas in a blog post, detailing how VR could be used to explore apartments that customers may want to rent, from the comfort of their own homes. Hosts could scan apartments or houses to create 360-degree images that potential customers could view on smartphones or VR headsets.
It also envisioned an augmented-reality system where hosts could leave notes and instructions to their guests as they move through their apartment, especially if their house’s setup is unusual. AR signposts in the Airbnb app could help guide guests through anything confusing more efficiently than the instructions hosts often leave for their guests.
Now Object Theory has just released a new collaborative computing application for the HoloLens called Prism, which takes many of the functionalities they’ve been developing for those clients over the past couple of years, and offers them to users in a free Windows Store application.
Spending on augmented and virtual reality will nearly double in 2018, according to a new forecast from International Data Corp. (IDC), growing from $9.1 billion in 2017 to $17.8 billion next year. The market research company predicts that aggressive growth will continue throughout its forecast period, achieving an average 98.8 percent compound annual growth rate (CAGR) from 2017 to 2021.
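IDC’s figures can be sanity-checked with a quick back-of-the-envelope calculation (a sketch only; IDC’s forecast model is certainly more detailed than a single compound-growth formula):

```python
def cagr(start, end, years):
    """Compound annual growth rate over `years` periods."""
    return (end / start) ** (1 / years) - 1

# IDC figures quoted above: $9.1B in 2017 to $17.8B in 2018.
one_year = cagr(9.1, 17.8, 1)
print(f"{one_year:.1%}")  # → 95.6%, i.e. "nearly double"

# Compounding IDC's forecast 98.8% CAGR forward from the 2017 base
# implies a market size for the end of the forecast period:
implied_2021 = 9.1 * (1 + 0.988) ** 4
print(f"${implied_2021:.0f}B")  # → $142B, roughly 15x the 2017 market
```

In other words, the 2017-to-2018 jump alone is a 95.6 percent gain, consistent with the headline "nearly double," and sustaining the forecast rate through 2021 implies a market more than fifteen times its 2017 size.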
Scope AR has launched Remote AR, an augmented reality video support solution for Microsoft’s HoloLens AR headsets.
The San Francisco company is launching its enterprise-class AR solution to enable cross-platform live support video calls.
Remote AR for Microsoft HoloLens brings AR support for field technicians, enabling them to perform tasks with better speed and accuracy. It does so by allowing an expert to get on a video call with a technician and then mark the spot on the screen where the technician has to do something, like turn a screwdriver. The technician is able to see where the expert is pointing by looking at the AR overlay on the video scene.
Ultimately, VR in education will revolutionize not only how people learn but how they interact with real-world applications of what they have been taught. Imagine medical students performing an operation or geography students really seeing where and what Kathmandu is. The world just opens up to a rich abundance of possibilities.
Like many higher education institutions, Michigan State University offers a wide array of online programs. But unlike most other online universities, some programs involve robots.
Here’s how it works: online and in-person students gather in the same classroom. Self-balancing robots mounted with computers roll around the room, displaying the face of one remote student. Each remote student streams in and controls one robot, which allows them to literally and figuratively take a seat at the table.
Professor Christine Greenhow, who teaches graduate level courses in MSU’s College of Education, first encountered these robots at an alumni event.
“I thought, ‘Oh I could use this technology in my classroom. I could use this to put visual and movement cues back into the environment,’” Greenhow said.
From DSC: In my work to bring remote learners into face-to-face classrooms at Calvin College, I also worked with some of the tools shown/mentioned in that article — such as the Telepresence Robot from Double Robotics and the unit from Swivl. I also introduced Blackboard Collaborate and Skype as other methods of bringing in remote students (hadn’t yet tried Zoom, but that’s another possibility).
As one looks at the image above, one can’t help but wonder what such a picture will look like 5-10 years from now. Will it picture folks wearing VR-based headsets at their respective locations? Or perhaps some setups will feature the following types of tools within smaller “learning hubs” (which could also include one’s local Starbucks, Apple Store, etc.)?
Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public. The diversity of opinions and debates gathered from news articles this year illustrates just how broadly AI is being investigated, studied, and applied. However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field.
Without the relevant data for reasoning about the state of AI technology, we are essentially “flying blind” in our conversations and decision-making related to AI.
Created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), the AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. This is the inaugural annual report of the AI Index, and in this report we look at activity and progress in Artificial Intelligence through a range of perspectives. We aggregate data that exists freely on the web, contribute original data, and extract new metrics from combinations of data series.
All of the data used to generate this report will be openly available on the AI Index website at aiindex.org. Providing data, however, is just the beginning. To become truly useful, the AI Index needs support from a larger community. Ultimately, this report is a call for participation. You have the ability to provide data, analyze collected data, and make a wish list of what data you think needs to be tracked. Whether you have answers or questions to provide, we hope this report inspires you to reach out to the AI Index and become part of the effort to ground the conversation about AI.
Similarly, messaging and social media are the killer apps of smartphones. Our need to connect with other people follows us, no matter where technology takes us. New technology succeeds when it makes what we are already doing better, cheaper, and faster. It naturally follows that Telepresence should likewise be one of the killer apps for both AR and VR. A video of Microsoft Research’s 2016 Holoportation experiment suggests Microsoft must have been working on this internally for some time, maybe even before the launch of the HoloLens itself.
Telepresence, meaning to be electronically present elsewhere, is not a new idea. As a result, the term describes a broad range of approaches to virtual presence. It breaks down into six main types:
The same techniques that generate images of smoke, clouds and fantastic beasts in movies can render neurons and brain structures in fine-grained detail.
Two projects presented yesterday at the 2017 Society for Neuroscience annual meeting in Washington, D.C., gave attendees a sampling of what these powerful technologies can do.
“These are the same rendering techniques that are used to make graphics for ‘Harry Potter’ movies,” says Tyler Ard, a neuroscientist in Arthur Toga’s lab at the University of Southern California in Los Angeles. Ard presented the results of applying these techniques to magnetic resonance imaging (MRI) scans.
The methods can turn massive amounts of data into images, making them ideally suited to generate brain scans. Ard and his colleagues develop code that enables them to easily enter data into the software. They plan to make the code freely available to other researchers.
After several cycles of development, it became clear that getting our process into VR as early as possible was essential. This was difficult to do within the landscape of VR tooling. So, at the beginning of 2017, we began developing features for early-stage VR prototyping in a tool named “Expo.”
… Start Prototyping in VR Now We developed Expo because the tools for collaborative prototyping did not exist at the start of this year. Since then, the landscape has dramatically improved and there are many tools providing prototyping workflows with no requirement to do your own development:
It’s no secret that one of the biggest issues holding back virtual and augmented reality is the lack of content.
Even as bigger studios and companies are sinking more and more money into VR and AR development, it’s still difficult for smaller, independent, developers to get started. A big part of the problem is that AR and VR apps require developers to create a ton of 3D objects, often an overwhelming and time-consuming process.
Google is hoping to fix that, though, with its new service called Poly, an online library of 3D objects developers can use in their own apps.
The model is a bit like Flickr, but for VR and AR developers rather than photographers. Anyone can upload their own 3D creations to the service and make them available to others via a Creative Commons license, and any developer can search and download objects for their own apps and games.
The definition of business-to-business (B2B) is rapidly changing. While B2B used to refer to commercial transactions between businesses, the power of augmented reality (AR) is transforming B2B by changing how businesses interact with one another. B2B is becoming more than business-to-business marketing; even more than B2B sales. AR is expanding businesses’ ability to connect with each other across multiple channels, and it is increasing the reasons they do so.
In this article, we will examine how augmented reality can help businesses promote their products and services to other enterprises. More exciting for some, we will also explore new ways businesses can use AR to connect with corporate customers, vendors, and partners.
…
Augmented Reality B2B Product Support Companies not only buy products from other companies; they also need customer service for many of the products they purchase. From poorly-written user manuals to poorly-written online self-help portals, product support hasn’t changed very much. But augmented reality is changing all that.
By layering how-to information over a product image as viewed on a mobile device or AR viewer, augmented reality can display interactive text and images that instruct the user how to setup, configure, troubleshoot and repair a wide variety of products.
AR-based customer support has barely broken ground in the consumer market, but opportunities are also great for developers who can target B2B industries with AR solutions.
Using AR apps, a carpet company can show an office manager what various carpet designs would look like installed in their office. A manufacturing engineer can see how a new production line would fit on the production floor. A physician can see how powerful AR medical applications can help her better diagnose her patients. And an architect can show a client how a new room addition will look from the outside when completed.
Creative voice tech — from jwtintelligence.com by Ella Britton New programs from Google and the BBC use voice to steer storytelling with digital assistants.
Excerpt:
BBC Radio’s newest program, The Inspection Chamber, uses smart home devices to allow listeners to interact with and control the plot. Amid a rise in choose-your-own-adventure style programming, The Inspection Chamber opens up creative new possibilities for brands hoping to make use of voice assistants.
The Inspection Chamber tells the story of an alien stranded on earth, who is being interrogated by scientists and an AI robot called Dave. In this interactive drama, Amazon Echo and Google Home users play the part of the alien, answering questions and interacting with other characters to determine the story’s course. The experience takes around 20 minutes, with questions like “Cruel or Kind?” and “Do you like puzzles?” that help the scientists categorize a user.
…
“Voice is going to be a key way we interact with media, search for content, and find what we want,” said BBC director general Tony Hall. As described in the Innovation Group’s Speak Easy report, the opportunities for brands to connect with consumers via smart speakers are substantial, particularly when it comes to education and entertainment. Brands can also harness these opportunities by creating entertaining and engaging content that consumers can interact with, creating what feels like a two-way dialogue.
From DSC: More and more, our voices will drive the way we interact with computing devices/applications. This item was an especially interesting item to me, as it involves the use of our voice — at home — to steer the storytelling that’s taking place. Talk about a new form of interactivity!
As we increase the usage of chatbots in our personal lives, we will expect to use them in the workplace to assist us with things like finding new jobs, answering frequently asked HR related questions or even receiving coaching and mentoring. Chatbots digitize HR processes and enable employees to access HR solutions from anywhere. Using artificial intelligence in HR will create a more seamless employee experience, one that is nimbler and more user driven.
From DSC: I am honored to be currently serving on the 2018 Advisory Council for the Next Generation Learning Spaces Conference with a great group of people. Missing — at least from my perspective — from the image below is Kristen Tadrous, Senior Program Director with the Corporate Learning Network. Kristen has done a great job these last few years planning and running this conference.
NOTE:
The above graphic reflects a recent change for me. I am still an Adjunct Faculty Member at Calvin College, but I am no longer a Senior Instructional Designer there. My brand is centered around being an Instructional Technologist.
This national conference will be held in Los Angeles, CA on February 26-28, 2018. It is designed to help institutions of higher education develop highly-innovative cultures — something that’s needed in many institutions of traditional higher education right now.
I have attended the first 3 conferences and I moderated a panel at the most recent conference out in San Diego back in February/March of this year. I just want to say that this is a great conference and I encourage you to bring a group of people to it from your organization! I say a group of people because a group of 5 of us (from a variety of departments) went one year, and the result of attending the NGLS Conference was a brand new Sandbox Classroom — an active-learning-based, highly collaborative learning space where faculty members can experiment with new pedagogies as well as with new technologies. The conference helped us discuss things as a diverse group, think out loud, come up with some innovative ideas, and then build the momentum to move forward with some of those key ideas.
Per Kristen Tadrous, here’s why you want to check out USC:
A true leader in innovation: USC made it into the Top 20 of Reuters’ 100 Most Innovative Universities in 2017!
Detailed guided tour of leading spaces led by the Information Technology Services Learning Environments team
Benchmark your own learning environments by getting a ‘behind the scenes’ look at their state-of-the-art spaces
There are only 30 spots available for the site tour
Building Spaces to Inspire a Culture of Innovation — a core theme at the 4th Next Generation Learning Spaces summit, taking place this February 26-28 in Los Angeles. An invaluable opportunity to meet and hear from like-minded peers in higher education, and continue your path toward lifelong learning. #ngls2018 http://bit.ly/2yNkMLL