The 10 Most Exciting Digital Health Stories of 2017 — from medicalfuturist.com by Dr. Bertalan Mesko
Gene-edited human embryos. Self-driving trucks. Practical quantum computers. 2017 has been an exciting year for science, technology, and digital health! It's that time of the year again when it's worth looking back at the past months and listing the inventions, methods, and milestone events in healthcare to get a clearer picture of what will shape medicine in the years to come.
Excerpt:
Medical chatbots and health assistants on the rise: Chatbots, A.I.-supported messaging apps, and voice-controlled bots are forecast to replace simple messaging apps soon. In healthcare, they could help resolve easily diagnosable health concerns or support patient management, e.g. general organizational issues. In recent months, several signs have pointed toward more widespread use.
The educational technology sector grew substantially in 2017 and all signs point to even greater growth in 2018. Over the past year, the sector was buoyed by several key factors, including a growing recognition that as big data restructures work at an unprecedented pace, there is an urgent need to rethink how education is delivered. In fact, there is now growing evidence that colleges and universities, especially if they continue to operate as they have in the past, will simply not be able to produce the workers needed to fill tomorrow’s jobs. Ed tech, with its capacity to make education more affordable, flexible, and relevant, is increasingly being embraced as the answer to the Fourth Industrial Revolution’s growing talent pipeline challenges.
…
K-12 virtual schools will become a preferred choice
Voice-activation will transform the Learning Management System (LMS) sector
Data will drive learning
Higher ed will increase online course and program offerings
No one can predict how the future will shake out, but we can make some educated guesses.
Global design and strategy firm frog has shared with Business Insider its forecasts for the technologies that will define the upcoming year. Last year, the firm correctly predicted that buildings would harness the power of nature and that businesses would continue using artificially intelligent bots to run efficiently.
Get ready to step into the future.
Artificial intelligence will inspire how products are designed
Other companies will join Google in the ‘Algorithm Hall of Fame’
Virtual and augmented reality will become communal experiences
Democracy will cozy up to the blockchain
Augmented reality will invite questions about intellectual property
Consumer tech will feel even friendlier
Tech will become inclusive for all
Anonymous data will make life smarter but still private
Ultra-tiny robots will replace medicine for certain patients
The way we get around will fundamentally transform
Businesses will use data and machine learning to cater to customers
Social media will take on more corporate responsibility
This three-part lab can be experienced all at once or separately. At the beginning of each part, Beatriz’s brain acts as an omniscient narrator, helping learners understand how changes to the brain affect daily life and interactions.
Pre and post assessments, along with a facilitation guide, allow learners and instructors to see progression towards outcomes that are addressed through the story and content in the three parts, including:
1) increased knowledge of Alzheimer’s disease and the brain
2) enhanced confidence to care for people with Alzheimer’s disease
3) improvement in care practice
Why a lab about Alzheimer’s Disease?
The Beatriz Lab is very important to us at Embodied Labs. It is the experience that inspired the start of our company. We believe VR is more than a way to evoke feelings of empathy; rather, it is a powerful behavior change tool. By taking the perspective of Beatriz, healthcare professionals and trainees are empowered to better care for people with Alzheimer's disease, leading to more effective care practices and better quality of life. Through embodying Beatriz, you will gain insight into life with Alzheimer's and be able to better connect with and care for your loved ones, patients, clients, or others in their communities who live with the disease every day. In our embodied VR experience, we hope to portray both the difficult and joyful moments — the disease surely is a mix of both.
As part of the experience, you will take a 360-degree trip into Beatriz's brain and visit a neuron "forest" that is being affected by amyloid beta plaques and tau proteins.
From DSC: I love the work that Carrie Shaw and @embodiedLabs are doing! Thanks Carrie & Company!
As VR continues to grow and improve, the experiences will feel more real. But for now, here are the best business conference applications in virtual reality.
In a sign of how Apple is supporting VR in parts of its ecosystem, Final Cut Pro X (along with Motion and Compressor) now has a complete toolset that lets you import, edit, and deliver 360° video in both monoscopic and stereoscopic formats.
Final Cut Pro X 10.4 comes with a handful of slick new features that we tested, such as advanced color grading and support for High Dynamic Range (HDR) workflows. All useful features for creators, not just VR editors, especially since Final Cut Pro is used so heavily in industries like video editing and production. But up until today, VR post-production options have been minimal, with no support from major VR headsets. We’ve had options with Adobe Premiere plus plugins, but not everyone wants to be pigeon-holed into a single software option. And Final Cut Pro X runs butter smooth on the new iMac, so there’s that.
Now, with the ability to create immersive 360° films right in Final Cut Pro, an entirely new group of creators can dive into the world of 360 VR video. It's simple and intuitive, something we expect from an Apple product. The 360 VR toolset just works.
HWAM’s first exhibition is a unique collection of Star Wars production pieces, including the very first drawings made for the film franchise and never-before-seen production art from the original trilogy by Lucasfilm alum Joe Johnston, Ralph McQuarrie, Phil Tippett, Drew Struzan, Colin Cantwell, and more.
Will virtual reality help you learn a language more quickly? Or will it simply replace your memory?
VR is the ultimate medium for delivering what is known as “experiential learning.” This education theory is based on the idea that we learn and remember things much better when doing something ourselves than by merely watching someone else do it or being told about it.
The immersive nature of VR means users remember content they interact with in virtual scenarios much more vividly than with any other medium. (According to experiments carried out by Professor Ann Schlosser at the University of Washington, VR even has the capacity to prompt the development of false memories.)
From DSC: After reviewing the article below, I wondered...if we need to interact with content to learn it…how might mixed reality allow for new ways of interacting with such content? This is especially intriguing when we interact with that content with others as well (i.e., social learning).
Perhaps Mixed Reality (MR) will bring forth a major expansion of how we look at “blended learning” and “hybrid learning.”
Changing How We Perceive the World, One Industry at a Time
Part of the reason mixed reality has garnered this momentum within such a short span of time is that it promises to revolutionize how we perceive the world without necessarily altering our natural perspective. While VR and AR invite you into their somewhat complex worlds, mixed reality analyzes the surrounding real-world environment before projecting an enhanced and interactive overlay. It essentially "mixes" our reality with digitally generated graphical information.
…
All this, however, pales in comparison to the impact of mixed reality on the storytelling process. While present technologies deliver content in a one-directional manner, from storyteller to audience, mixed reality allows for the delivery of content, then interaction between content, creator, and other users. This mechanism cultivates fertile ground for increased contact between all participating entities, thereby fostering the creation of shared experiences. Mixed reality also reinvents the storytelling process. By merging the storyline with reality, viewers are presented with a holistic experience that's nearly indistinguishable from real life.
Mixed reality is without a doubt going to play a major role in shaping our realities in the near future, not just because of its numerous use cases but also because it is the flag bearer of all virtualized technologies. It combines VR, AR and other relevant technologies to deliver a potent cocktail of digital excellence.
At North Carolina State University, Assistant Professor of Chemistry Denis Fourches uses technology to research the effectiveness of new drugs. He uses computer programs to model interactions between chemical compounds and biological targets to predict the effectiveness of the compound, narrowing the field of drug candidates for testing. Lately, he has been using a new program that allows the user to create 3D models of molecules for 3D printing, plus augmented and virtual reality applications.
RealityConvert converts molecular objects like proteins and drugs into high-quality 3D models. The models are generated in standard file formats that are compatible with most augmented and virtual reality programs, as well as 3D printers. The program is specifically designed for creating models of chemicals and small proteins.
Mozilla has launched its first ever augmented reality app for iOS. The company, best known for its Firefox browser, wants to create an avenue for developers to build augmented reality experiences using open web technologies, WebXR, and Apple’s ARKit framework.
This latest effort from Mozilla is called WebXR Viewer. It contains several sample AR programs, demonstrating its technology in the real world. One is a teapot, suspended in the air. Another contains holographic silhouettes, which you can place in your immediate vicinity. Should you be so inclined, you can also use it to view your own WebXR creations.
Airbnb announced today (Dec. 11) that it's experimenting with augmented- and virtual-reality technologies to enhance customers' travel experiences.
The company showed off some simple prototype ideas in a blog post, detailing how VR could be used to explore apartments that customers may want to rent, from the comfort of their own homes. Hosts could scan apartments or houses to create 360-degree images that potential customers could view on smartphones or VR headsets.
It also envisioned an augmented-reality system where hosts could leave notes and instructions to their guests as they move through their apartment, especially if their house’s setup is unusual. AR signposts in the Airbnb app could help guide guests through anything confusing more efficiently than the instructions hosts often leave for their guests.
Now Object Theory has just released a new collaborative computing application for the HoloLens called Prism, which takes many of the functionalities they’ve been developing for those clients over the past couple of years, and offers them to users in a free Windows Store application.
Spending on augmented and virtual reality will nearly double in 2018, according to a new forecast from International Data Corp. (IDC), growing from $9.1 billion in 2017 to $17.8 billion next year. The market research company predicts that aggressive growth will continue throughout its forecast period, achieving an average 98.8 percent compound annual growth rate (CAGR) from 2017 to 2021.
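As a quick sanity check of IDC's numbers (this arithmetic is mine, not from the article), compounding the 2017 base at the stated CAGR reproduces both the "nearly double" 2018 figure and an implied end-of-period total:

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value forward by `years` at annual growth rate `cagr`."""
    return base * (1 + cagr) ** years

base_2017 = 9.1   # billions USD, IDC's 2017 estimate
cagr = 0.988      # the forecast 98.8% compound annual growth rate

# One year out: roughly doubles, in line with IDC's $17.8B figure for 2018
print(round(project(base_2017, cagr, 1), 1))  # 18.1

# Four years out, the end of the 2017-2021 forecast period
print(round(project(base_2017, cagr, 4), 1))  # 142.1
```

The small gap between the computed $18.1B and IDC's stated $17.8B simply reflects that the CAGR is fitted across the whole 2017-2021 window, not year by year.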
Scope AR has launched Remote AR, an augmented reality video support solution for Microsoft’s HoloLens AR headsets.
The San Francisco company is launching its enterprise-class AR solution to enable cross-platform live support video calls.
Remote AR for Microsoft HoloLens brings AR support for field technicians, enabling them to perform tasks with better speed and accuracy. It does so by allowing an expert to get on a video call with a technician and then mark the spot on the screen where the technician has to do something, like turn a screwdriver. The technician is able to see where the expert is pointing by looking at the AR overlay on the video scene.
Ultimately, VR in education will revolutionize not only how people learn but how they interact with real-world applications of what they have been taught. Imagine medical students performing an operation or geography students really seeing where and what Kathmandu is. The world just opens up to a rich abundance of possibilities.
5 technologies disrupting the app development industry — from cio.com by Kevin Rands Developers who want to be at the top of their game will need to roll with the times and constantly innovate, whether they’re playing around with new form factors or whether they’re learning to code in a new language.
Excerpts:
But with so much disruption on the horizon, what does this mean for app developers? Let’s find out.
Like many higher education institutions, Michigan State University offers a wide array of online programs. But unlike most other online universities, some programs involve robots.
Here’s how it works: online and in-person students gather in the same classroom. Self-balancing robots mounted with computers roll around the room, displaying the face of one remote student. Each remote student streams in and controls one robot, which allows them to literally and figuratively take a seat at the table.
Professor Christine Greenhow, who teaches graduate level courses in MSU’s College of Education, first encountered these robots at an alumni event.
“I thought, ‘Oh I could use this technology in my classroom. I could use this to put visual and movement cues back into the environment,’” Greenhow said.
From DSC: In my work to bring remote learners into face-to-face classrooms at Calvin College, I also worked with some of the tools shown/mentioned in that article — such as the Telepresence Robot from Double Robotics and the unit from Swivl. I also introduced Blackboard Collaborate and Skype as other methods of bringing in remote students (hadn’t yet tried Zoom, but that’s another possibility).
As one looks at the image above, one can't help but wonder what such a picture will look like 5-10 years from now. Will it show folks wearing VR-based headsets at their respective locations? Or perhaps some setups will feature the following types of tools within smaller "learning hubs" (which could also include one's local Starbucks, Apple Store, etc.)?
WayRay makes augmented reality hardware and software for cars and drivers.
The company won a start-up competition at the Los Angeles Auto Show.
WayRay has also received an investment from Alibaba.
WayRay’s augmented reality driving system makes a car’s windshield look like a video game. The Swiss-based company that makes augmented reality for cars won the grand prize in a start-up competition at the Los Angeles Auto Show on Tuesday. WayRay makes a small device called Navion, which projects a virtual dashboard onto a driver’s windshield. The software can display information on speed, time of day, or even arrows and other graphics that can help the driver navigate, avoid hazards, and warn of dangers ahead, such as pedestrians. WayRay says that by displaying information directly on the windshield, the system allows drivers to stay better focused on the road. The display might appear similar to what a player would see on a screen in many video games. But the system also notifies the driver of potential points of interest along a route such as restaurants or other businesses.
Virtual reality is arguably a good medium for art: it not only enables creativity that just isn’t possible if you stick to physical objects, it allows you to share pieces that would be difficult to appreciate staring at an ordinary computer screen. And HTC knows it. The company is launching Vive Arts, a “multi-million dollar” program that helps museums and other institutions fund, develop and share art in VR. And yes, this means apps you can use at home… including one that’s right around the corner.
There are no room-scale sensors or controllers, because The Ochre Atelier, as the experience is called, is designed to be accessible to everyone regardless of computing expertise. And at roughly 6-7 minutes long, it’s also bite-size enough that hopefully every visitor to the exhibition can take a turn. Its length and complexity don’t make it any less immersive though. The experience itself is, superficially, a tour of Modigliani’s last studio space in Paris: a small, thin rectangular room a few floors above street level.
In all, it took five months to digitally re-create the space. A wealth of research went into The Ochre Atelier, from 3D mapping the actual room — the building is now a bed-and-breakfast — to looking at pictures and combing through first-person accounts of Modigliani’s friends and colleagues at the time. The developers at Preloaded took all this and built a historically accurate re-creation of what the studio would’ve looked like. You teleport around this space a few times, seeing it from different angles and getting more insight into the artist at each stop. Look at a few obvious “more info” icons from each perspective and you’ll hear narrated the words of those closest to Modigliani at the time, alongside some analyses from experts at the Tate.
Next-Gen Virtual Reality Will Let You Create From Scratch—Right Inside VR — from autodesk.com by Marcello Sgambelluri The architecture, engineering and construction (AEC) industry is about to undergo a radical shift in its workflow. In the near future, designers and engineers will be able to create buildings and cities, in real time, in virtual reality (VR).
Excerpt:
What's Coming: Creation
Still, these examples only scratch the surface of VR's potential in AEC. The next big opportunity for designers and engineers will move beyond visualization to actually creating structures and products from scratch in VR. Imagine VR for Revit: What if you could put on an eye-tracking headset and, with the movement of your hands and wrists, grab a footing, scale a model, lay it out, push it, spin it, and change its shape?
How to be an ed tech futurist— from campustechnology.com by Bryan Alexander While no one can predict the future, these forecasting methods will help you anticipate trends and spur more collaborative thinking.
Excerpts:
Some of the forecasting methods Bryan mentions are:
Trend analysis
Environmental scanning
Scenarios
Science fiction
From DSC: I greatly appreciate the work that Bryan does — the topics that he chooses to write about, his analyses, comments, and questions are often thought-provoking. I couldn’t agree more with Bryan’s assertion that forecasting needs to become more realized/practiced within higher education. This is especially true given the exponential rate of change that many societies throughout the globe are now experiencing.
We need to be pulse-checking a variety of landscapes out there, to identify and put significant trends, forces, and emerging technologies on our radars. The strategy of identifying potential scenarios – and then developing responses to those potential scenarios — is very wise.
Similarly, messaging and social media are the killer apps of smartphones. Our need to connect with other people follows us, no matter where technology takes us. New technology succeeds when it makes what we are already doing better, cheaper, and faster. It naturally follows that Telepresence should likewise be one of the killer apps for both AR and VR. A video of Microsoft Research’s 2016 Holoportation experiment suggests Microsoft must have been working on this internally for some time, maybe even before the launch of the HoloLens itself.
Telepresence, meaning to be electronically present elsewhere, is not a new idea. As a result, the term describes a broad range of approaches to virtual presence. It breaks down into six main types:
The same techniques that generate images of smoke, clouds and fantastic beasts in movies can render neurons and brain structures in fine-grained detail.
Two projects presented yesterday at the 2017 Society for Neuroscience annual meeting in Washington, D.C., gave attendees a sampling of what these powerful technologies can do.
“These are the same rendering techniques that are used to make graphics for ‘Harry Potter’ movies,” says Tyler Ard, a neuroscientist in Arthur Toga’s lab at the University of Southern California in Los Angeles. Ard presented the results of applying these techniques to magnetic resonance imaging (MRI) scans.
The methods can turn massive amounts of data into images, making them ideally suited to generate brain scans. Ard and his colleagues develop code that enables them to easily enter data into the software. They plan to make the code freely available to other researchers.
After several cycles of development, it became clear that getting our process into VR as early as possible was essential. This was difficult to do within the landscape of VR tooling. So, at the beginning of 2017, we began developing features for early-stage VR prototyping in a tool named “Expo.”
… Start Prototyping in VR Now We developed Expo because the tools for collaborative prototyping did not exist at the start of this year. Since then, the landscape has dramatically improved and there are many tools providing prototyping workflows with no requirement to do your own development:
It’s no secret that one of the biggest issues holding back virtual and augmented reality is the lack of content.
Even as bigger studios and companies are sinking more and more money into VR and AR development, it’s still difficult for smaller, independent, developers to get started. A big part of the problem is that AR and VR apps require developers to create a ton of 3D objects, often an overwhelming and time-consuming process.
Google is hoping to fix that, though, with its new service called Poly, an online library of 3D objects developers can use in their own apps.
The model is a bit like Flickr, but for VR and AR developers rather than photographers. Anyone can upload their own 3D creations to the service and make them available to others via a Creative Commons license, and any developer can search and download objects for their own apps and games.
From DSC: I am honored to be currently serving on the 2018 Advisory Council for the Next Generation Learning Spaces Conference with a great group of people. Missing — at least from my perspective — from the image below is Kristen Tadrous, Senior Program Director with the Corporate Learning Network. Kristen has done a great job these last few years planning and running this conference.
NOTE: The above graphic reflects a recent change for me. I am still an Adjunct Faculty Member at Calvin College, but I am no longer a Senior Instructional Designer there. My brand is centered around being an Instructional Technologist.
This national conference will be held in Los Angeles, CA on February 26-28, 2018. It is designed to help institutions of higher education develop highly-innovative cultures — something that’s needed in many institutions of traditional higher education right now.
I have attended the first 3 conferences and I moderated a panel at the most recent conference out in San Diego back in February/March of this year. I just want to say that this is a great conference and I encourage you to bring a group of people to it from your organization! I say a group of people because a group of 5 of us (from a variety of departments) went one year and the result of attending the NGLS Conference was a brand new Sandbox Classroom — an active-learning-based, highly collaborative learning space where faculty members can experiment with new pedagogies as well as with new technologies. The conference helped us discuss things as a diverse group, think out loud, come up with some innovative ideas, and then build the momentum to move forward with some of those key ideas.
Per Kristen Tadrous, here’s why you want to check out USC:
A true leader in innovation: USC made it into the Top 20 of Reuters' 100 Most Innovative Universities in 2017!
Detailed guided tour of leading spaces led by the Information Technology Services Learning Environments team
Benchmark your own learning environments by getting a ‘behind the scenes’ look at their state-of-the-art spaces
There are only 30 spots available for the site tour
Building Spaces to Inspire a Culture of Innovation — a core theme at the 4th Next Generation Learning Spaces summit, taking place this February 26-28 in Los Angeles. An invaluable opportunity to meet and hear from like-minded peers in higher education, and continue your path toward lifelong learning. #ngls2018 http://bit.ly/2yNkMLL
2018 marks the beginning of the end of smartphones in the world’s largest economies. What’s coming next are conversational interfaces with zero-UIs. This will radically change the media landscape, and now is the best time to start thinking through future scenarios.
In 2018, a critical mass of emerging technologies will converge finding advanced uses beyond initial testing and applied research. That’s a signal worth paying attention to. News organizations should devote attention to emerging trends in voice interfaces, the decentralization of content, mixed reality, new types of search, and hardware (such as CubeSats and smart cameras).
Journalists need to understand what artificial intelligence is, what it is not, and what it means for the future of news. AI research has advanced enough that it is now a core component of our work at FTI. You will see the AI ecosystem represented in many of the trends in this report, and it is vitally important that all decision-makers within news organizations familiarize themselves with the current and emerging AI landscapes. We have included an AI Primer For Journalists in our Trend Report this year to aid in that effort.
Decentralization emerged as a key theme for 2018. Among the companies and organizations FTI covers, we discovered a new emphasis on restricted peer-to-peer networks to detect harassment, share resources and connect with sources. There is also a push by some democratic governments around the world to divide internet access and to restrict certain content, effectively creating dozens of “splinternets.”
Consolidation is also a key theme for 2018. News brands, broadcast spectrum, and artificial intelligence startups will continue to be merged with and acquired by relatively few corporations. Pending legislation and policy in the U.S., E.U. and in parts of Asia could further concentrate the power among a small cadre of information and technology organizations in the year ahead.
To understand the future of news, you must pay attention to the future of many industries and research areas in the coming year. When journalists think about the future, they should broaden the usual scope to consider developments from myriad other fields also participating in the knowledge economy. Technology begets technology. We are witnessing an explosion in slow motion.
Those in the news ecosystem should factor the trends in this report into their strategic thinking for the coming year, and adjust their planning, operations and business models accordingly.
This year’s report has 159 trends.
This is mostly due to the fact that 2016 was the year that many areas of science and technology finally started to converge. As a result, we're seeing a sort of slow-motion explosion, and we will undoubtedly look back on the last part of this decade as a pivotal moment in our history on this planet.
…
Our 2017 Trend Report reveals strategic opportunities and challenges for your organization in the coming year. The Future Today Institute’s annual Trend Report prepares leaders and organizations for the year ahead, so that you are better positioned to see emerging technology and adjust your strategy accordingly. Use our report to identify near-future business disruption and competitive threats while simultaneously finding new collaborators and partners. Most importantly, use our report as a jumping off point for deeper strategic planning.
Augmented and virtual reality offer ways to immerse learners in experiences that can aid training in processes and procedures, provide realistic simulations to deepen empathy and build communication skills, or provide in-the-workflow support for skilled technicians performing complex procedures.
Badges and other digital credentials provide new ways to assess and validate employees’ skills and mark their eLearning achievements, even if their learning takes place informally or outside of the corporate framework.
Chatbots are proving an excellent tool for spaced learning, review of course materials, guiding new hires through onboarding, and supporting new managers with coaching and tips.
Content curation enables L&D professionals to provide information and educational materials from trusted sources that can deepen learners’ knowledge and help them build skills.
eBooks, a relative newcomer to the eLearning arena, offer rich features for portable on-demand content that learners can explore, review, and revisit as needed.
Interactive videos provide branching scenarios, quiz learners on newly introduced concepts and terms, offer prompts for small-group discussions, and do much more to engage learners.
Podcasts can turn drive time into productive time, allowing learners to enjoy a story built around eLearning content.
Smartphone apps, available wherever learners take their phones or tablets, can be designed to offer product support, info for sales personnel, up-to-date information for repair technicians, and games and drills for teaching and reviewing content; the possibilities are limited only by designers’ imagination.
Social platforms like Slack, Yammer, or Instagram facilitate collaboration, sharing of ideas, networking, and social learning. Adopting social learning platforms encourages learners to develop their skills and contribute to their communities of practice, whether inside their companies or more broadly.
xAPI turns any experience into a learning experience. Adding xAPI capability to any suitable tool or platform means you can record learner activity and progress in a learning record store (LRS) and track it.
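To make the xAPI mechanism concrete, here is a minimal sketch of the statement format an LRS records. The actor/verb/object structure is the core of the xAPI specification (and "completed" is one of ADL's standard verbs), but the learner, course name, and course URL below are purely illustrative:

```python
import json

def make_statement(actor_email: str, verb_id: str, verb_name: str,
                   activity_id: str, activity_name: str) -> dict:
    """Build a minimal xAPI statement of the form '<actor> <verb> <object>'."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Illustrative values; a real LRS endpoint and credentials would be
# needed to actually POST this statement for tracking.
stmt = make_statement(
    "learner@example.com",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.com/courses/onboarding-101", "Onboarding 101",
)
print(json.dumps(stmt, indent=2))
```

Because every statement follows this same shape, an LRS can aggregate activity from courses, apps, simulations, or informal learning alike.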
How does all of this relate to eLearning? Again, Webb anticipated the question. Her response gave hope to some—and terrified others. She presented three possible future scenarios:
Everyone in the learning arena learns to recognize weak signals; they work with technologists to refine artificial intelligence to instill values. Future machines learn not only to identify correct and incorrect answers; they also learn right and wrong. Webb said that she gives this optimistic scenario a 25 percent chance of occurring.
Everyone present is inspired by her talk but they, and the rest of the learning world, do nothing. Artificial intelligence continues to develop as it has in the past, learning to identify correct answers but lacking values. Webb's prediction is that this pragmatic scenario has a 50 percent chance of occurring.
Learning and artificial intelligence continue to develop on separate tracks. Future artificial intelligence and machine learning projects incorporate real biases that affect what and how people learn and how knowledge is transferred. Webb said that she gives this catastrophic scenario a 25 percent chance of occurring.
In an attempt to end on a strong positive note, Webb said that “the future hasn’t happened yet—we think” and encouraged attendees to take action. “To build the future of learning that you want, listen to weak signals now.”
Emerging social VR platforms are experimenting with new ways of democratizing access and ownership of content and information.
VR has often been considered something of a solitary experience, but that’s changing fast. Social VR platforms are on the rise, and as the acquisition of AltspaceVR by Microsoft shows, major players in that space are taking notice.
…
This shows how momentum is building around social VR, and although it’s unlikely that such platforms will replace social media in popularity overnight, the question is certainly being asked about who will emerge as the “Facebook of VR.”
…
“We believe virtual reality will flourish once users have a more prominent role in controlling their creations. Currently, the companies that create the virtual worlds own all of the content built by the users. They are the ones who profit, reap the benefits from the network effects, and have the power to undo, change or censor what happens within the world itself. The true potential of VR might be realized, and certainly surpass what already exists, if this power were put into the hands of the users instead,” believes Ariel Meilich, founder of blockchain-based virtual platform Decentraland.
A blockchain is a digitized, decentralized public ledger of cryptocurrency transactions. Essentially, each ‘block’ is like an individual bank statement. Completed ‘blocks’ (the most recent transactions) are added in chronological order, allowing market participants to keep track of the transactions without the need for central record keeping. Just as Bitcoin eliminates the need for a third party to process or store payments and isn’t regulated by a central authority, users in any blockchain structure are responsible for validating transactions whenever one party pays another for goods or services.
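The “chronological blocks without central record keeping” idea in that definition can be illustrated with a toy hash chain. This is a sketch, not a real cryptocurrency ledger: each block commits to the previous block’s hash, so any participant can re-validate the whole chain and detect tampering.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents; any tampering changes this value."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the previous block's hash --
    this link is what makes the ledger tamper-evident without a
    central record keeper."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({
        "index": len(chain),
        "prev_hash": prev,
        "transactions": transactions,
    })

def is_valid(chain: list) -> bool:
    """Any participant can re-check every link independently."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(ledger))                         # True
ledger[0]["transactions"][0]["amount"] = 500    # tamper with history
print(is_valid(ledger))                         # False
```

Real blockchains add consensus rules and proof-of-work on top of this linking, which is what lets strangers agree on one history without a bank in the middle.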
From DSC: As this article reminded me, it’s the combination of two or more emerging technologies that will likely bring major innovation our way. Here’s another example of that same idea/concept.
Online glasses retailer Warby Parker built its reputation by selling fashionable yet affordable eyeglasses, so it is perhaps a surprise that it’s one of the first developers to take advantage of the facial-recognition technology in the least affordable iPhone yet.
While other developers are making adjustments to their apps to account for the infamous camera notch, Warby Parker decided to update its Glasses app to directly leverage the Face ID facial recognition system. Now the updated version of the app can measure the user’s face to estimate which frames will fit best.
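Once a depth camera reports 3-D facial landmarks, picking a frame size reduces to simple geometry. The sketch below is an illustration only: the landmark coordinates, size thresholds, and bucket names are made up, and Warby Parker’s actual fitting logic is not public.

```python
import math

def face_width_mm(left_temple, right_temple):
    """Distance between two 3-D landmarks (millimetres), as a
    depth sensor like the iPhone's TrueDepth camera could report
    them. Coordinates here are invented for illustration."""
    return math.dist(left_temple, right_temple)

def suggest_frame(width_mm, thresholds=((130, "narrow"), (140, "medium"))):
    """Map face width onto hypothetical frame-size buckets."""
    for limit, size in thresholds:
        if width_mm <= limit:
            return size
    return "wide"

width = face_width_mm((-68.0, 4.0, 12.0), (69.0, 3.0, 11.0))
print(round(width), suggest_frame(width))  # → 137 medium
```

A production app would use many landmarks (pupillary distance, bridge width, temple length), but each measurement is the same kind of distance calculation over the face mesh.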
Apple Inc., seeking a breakthrough product to succeed the iPhone, aims to have technology ready for an augmented-reality headset in 2019 and could ship a product as early as 2020.
Unlike the current generation of virtual reality headsets that use a smartphone as the engine and screen, Apple’s device will have its own display and run on a new chip and operating system, according to people familiar with the situation. The development timeline is very aggressive and could still change, said the people, who requested anonymity to speak freely about a private matter.
“The power is that we can take the user anywhere in the entire universe throughout all of time for historical experiences like this.” (source)
Ask a robot to do the same and you’ll either get a blank stare or a crumpled object in the cold, cold grasp of a machine. That’s because robots are good at repetitive tasks that require a lot of strength, but they’re still bad at learning how to manipulate novel objects. Which is why today a company called Embodied Intelligence has emerged from stealth mode to fuse the strengths of robots and people into a new system that could make it far easier for regular folk to teach robots new tasks. Think of it like a VR videogame—only you get to control a hulking robot.
From DSC: To remain up-to-date, Engineering Departments within higher ed have their work cut out for them — big time! Those Senior Engineering Teams have many new, innovative pathways and projects to pursue these days.
Daqri has begun shipping its augmented reality smart glasses for the workplace.
Los Angeles-based Daqri is betting that AR — a technology that overlays digital animations on top of the real world — will take off first in the enterprise, where customers are willing to pay a higher price in order to solve complex problems. The idea is to help people solve real-world problems, like fixing a jet engine or piecing together an assembly. Daqri argues that the gains in productivity and efficiency make up for the initial cost.
At $4,995, the system is not cheap, but it is optimized to present complex workloads and process a lot of data right on the glasses themselves. It is available for direct purchase from Daqri’s web site and through channel partners. Daqri is targeting customers across manufacturing, field services, maintenance and repair, inspections, construction, and others.
The NBA really wants you to watch games in VR — from cnet.com by Terry Collins The basketball league has now struck two partnerships to broadcast games in virtual reality. Are fans willing to watch them?
Excerpt:
What’s keeping you from watching NBA games in VR?
Is it the bulky headsets? Is it the slow camera switches that don’t follow the players quickly enough? Is it too expensive?
The NBA is betting that one reason is it just doesn’t have enough partnerships yet. So, the league is teaming up with Turner Sports and Intel TrueVR to air weekly games on TNT in VR starting with the All-Star weekend festivities from Los Angeles in February.
[Image caption: NBA fans will soon be able to see more of MVP Russell Westbrook in virtual reality.]
This partnership represents a doubling down of the NBA’s VR efforts, despite indications that VR viewership hasn’t yet caught on with fans. Last year, the NBA began airing games with NextVR as part of a multiyear deal.
At the Microsoft Future Decoded conference in London, executives from the tech giant offered a vision for integrating Microsoft 365, Microsoft HoloLens, Windows Mixed Reality, and 3D capabilities into modern workplaces to aid digital transformation.
Firstline workers and information workers will likely be the first to benefit from mixed reality in the workplace, using the technology for collaboration, training, and more.
Microsoft has made a number of moves into the mixed reality space recently, including expanding its HoloLens headset into new European markets.