From DSC: After reviewing the article below, I wondered: if we need to interact with content in order to learn it, how might mixed reality allow for new ways of interacting with such content? This is especially intriguing when we interact with that content alongside others as well (i.e., social learning).
Perhaps Mixed Reality (MR) will bring forth a major expansion of how we look at “blended learning” and “hybrid learning.”
Changing How We Perceive The World One Industry At A Time

Part of the reason mixed reality has garnered this momentum within such a short span of time is that it promises to revolutionize how we perceive the world without necessarily altering our natural perspective. While VR and AR invite you into their somewhat complex worlds, mixed reality analyzes the surrounding real-world environment before projecting an enhanced and interactive overlay. It essentially “mixes” our reality with digitally generated graphical information.
…
All this, however, pales in comparison to the impact of mixed reality on the storytelling process. While present technologies deliver content in a one-directional manner, from storyteller to audience, mixed reality allows for delivery of content, then interaction between content, creator, and other users. This mechanism cultivates fertile ground for increased contact between all participating entities, thereby fostering the creation of shared experiences. Mixed reality also reinvents the storytelling process itself. By merging the storyline with reality, viewers are presented with a holistic experience that can feel indistinguishable from real life.
Mixed reality is without a doubt going to play a major role in shaping our realities in the near future, not just because of its numerous use cases but also because it is the flag bearer of all virtualized technologies. It combines VR, AR and other relevant technologies to deliver a potent cocktail of digital excellence.
Higher education should now be on notice: Anyone with an Internet connection can now file a complaint or civil lawsuit, not just students with disabilities. And though Section 508 was previously unclear as to the expectations for accessibility, the updated requirements add specific web standards to adhere to — specifically, the Web Content Accessibility Guidelines (WCAG) 2.0 level AA developed by the World Wide Web Consortium (W3C).
…
Although WCAG 2.0 has been in development since the early 2000s (it became a W3C Recommendation in 2008), it was developed by the web community as a self-regulating tool to create uniformity in web standards around the globe. It was understood to be a set of best practices but was not enforced by any regulating agency. The Section 508 refresh due in January 2018 changes this, as WCAG 2.0 Level AA has been adopted as the standard of expected accessibility. Thus, all organizations subject to Section 508, including colleges and universities, that create and publish digital content — web pages, documents, images, videos, audio — must ensure that they know and understand these standards.
…
Reacting to the Section 508 Refresh

In a few months, the revised Section 508 standards become enforceable law. As stated, this should not be considered a threat or burden but rather an opportunity for institutions to check their present level of commitment and adherence to accessibility. To prepare for the updated standards, institutions can take a number of proactive steps:
Contract a third-party expert partner to review institutional accessibility policies and practices and craft a long-term plan to ensure compliance.
Review all public-facing websites and electronic documents to ensure compliance with WCAG 2.0 Level AA standards (a short automated-check sketch follows this list).
Develop and publish a policy to state the level of commitment and adherence to Section 508 and WCAG 2.0 Level AA.
Create an accessibility training plan for all individuals responsible for creating and publishing electronic content.
Ensure all ICT contracts, ROIs, and purchases include provisions for accessibility.
Inform students of their rights related to accessibility, as well as where to address concerns internally. Then support the students with timely resolutions.
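As one concrete illustration of the website-review step above, even a short script can surface one of the most common WCAG 2.0 failures: images missing the text alternatives required by success criterion 1.1.1. The sketch below is a minimal example, assuming Python with the requests and beautifulsoup4 packages installed; the URL is a placeholder. Automated checks like this complement, but never replace, a full manual accessibility audit.

```python
# Minimal sketch: flag <img> tags without alt text (WCAG 2.0 SC 1.1.1).
# Assumes the requests and beautifulsoup4 packages are installed; a real
# accessibility audit covers far more criteria and requires manual review.
import requests
from bs4 import BeautifulSoup

def find_images_missing_alt(url):
    """Return the src of every <img> on the page lacking alt text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Note: an intentionally empty alt="" is valid for purely decorative
    # images, so treat these as items to review, not automatic failures.
    return [img.get("src", "(no src)")
            for img in soup.find_all("img")
            if not img.get("alt")]

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    for src in find_images_missing_alt("https://www.example.edu"):
        print("Image missing alt text:", src)
```

A real review would extend this to other machine-checkable criteria (form labels, heading order, color contrast) and pair it with testing by actual assistive-technology users.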
As always, remember that the pursuit of accessibility demonstrates a spirit of inclusiveness that benefits everyone. Embracing the challenge to meet the needs of all students is a noble pursuit, but it’s not just an adoption of policy. It’s a creation of awareness, an awareness that fosters a healthy shift in culture. When this is the approach, the motivation to support all students drives every conversation, and the fear of legal repercussions becomes secondary. This should be the goal of every institution of learning.
Also see:
How to Make Accessibility Part of the Landscape — from insidehighered.com by Mark Lieberman
A small institution in Vermont caters to students with disabilities by letting them choose the technology that suits their needs.
Excerpt:
Accessibility remains one of the key issues for digital learning professionals looking to catch up to the needs of the modern student. At last month’s Online Learning Consortium Accelerate conference, seemingly everyone in attendance hoped to come away with new insights into this thorny concern.
Landmark College in Vermont might offer some guidance. The private institution with approximately 450 students exclusively serves students with diagnosed learning disabilities, attention disorders or autism. Like all institutions, it’s still grappling with how best to serve students in the digital age, whether in the classroom or at a distance. Here’s a glimpse at the institution’s philosophy, courtesy of Manju Banerjee, Landmark’s vice president for educational research and innovation since 2011.
At North Carolina State University, Assistant Professor of Chemistry Denis Fourches uses technology to research the effectiveness of new drugs. He uses computer programs to model interactions between chemical compounds and biological targets to predict the effectiveness of the compound, narrowing the field of drug candidates for testing. Lately, he has been using a new program that allows the user to create 3D models of molecules for 3D printing, plus augmented and virtual reality applications.
RealityConvert converts molecular objects like proteins and drugs into high-quality 3D models. The models are generated in standard file formats that are compatible with most augmented and virtual reality programs, as well as 3D printers. The program is specifically designed for creating models of chemicals and small proteins.
Mozilla has launched its first ever augmented reality app for iOS. The company, best known for its Firefox browser, wants to create an avenue for developers to build augmented reality experiences using open web technologies, WebXR, and Apple’s ARKit framework.
This latest effort from Mozilla is called WebXR Viewer. It contains several sample AR programs, demonstrating its technology in the real world. One is a teapot, suspended in the air. Another contains holographic silhouettes, which you can place in your immediate vicinity. Should you be so inclined, you can also use it to view your own WebXR creations.
Airbnb announced today (Dec. 11) that it’s experimenting with augmented- and virtual-reality technologies to enhance customers’ travel experiences.
The company showed off some simple prototype ideas in a blog post, detailing how VR could be used to explore apartments that customers may want to rent, from the comfort of their own homes. Hosts could scan apartments or houses to create 360-degree images that potential customers could view on smartphones or VR headsets.
It also envisioned an augmented-reality system where hosts could leave notes and instructions to their guests as they move through their apartment, especially if their house’s setup is unusual. AR signposts in the Airbnb app could help guide guests through anything confusing more efficiently than the instructions hosts often leave for their guests.
Object Theory has just released a new collaborative computing application for the HoloLens called Prism, which takes many of the functionalities the company has been developing for its clients over the past couple of years and offers them to users in a free Windows Store application.
Spending on augmented and virtual reality will nearly double in 2018, according to a new forecast from International Data Corp. (IDC), growing from $9.1 billion in 2017 to $17.8 billion next year. The market research company predicts that aggressive growth will continue throughout its forecast period, achieving an average 98.8 percent compound annual growth rate (CAGR) from 2017 to 2021.
Scope AR has launched Remote AR, an augmented reality video support solution for Microsoft’s HoloLens AR headsets.
The San Francisco company is launching its enterprise-class AR solution to enable cross-platform live support video calls.
Remote AR for Microsoft HoloLens brings AR support for field technicians, enabling them to perform tasks with better speed and accuracy. It does so by allowing an expert to get on a video call with a technician and then mark the spot on the screen where the technician has to do something, like turn a screwdriver. The technician is able to see where the expert is pointing by looking at the AR overlay on the video scene.
Ultimately, VR in education will revolutionize not only how people learn but how they interact with real-world applications of what they have been taught. Imagine medical students performing an operation, or geography students really seeing where Kathmandu is and what it looks like. The world just opens up to a rich abundance of possibilities.
How soon before teachers/professors/trainers can quickly reconfigure their rooms’ settings via their voices? For example, faculty members will likely soon be able to quickly establish lighting, volume levels, blinds, or other types of room setups with their voices. This could be in addition to the use of beacons and smartphones that automatically recognize who just walked into the room and how that person wants the room to be configured on startup.
This functionality is probably already here…I just don’t know about it yet.
5 technologies disrupting the app development industry — from cio.com by Kevin Rands Developers who want to be at the top of their game will need to roll with the times and constantly innovate, whether they’re playing around with new form factors or whether they’re learning to code in a new language.
Excerpts:
But with so much disruption on the horizon, what does this mean for app developers? Let’s find out.
Like many higher education institutions, Michigan State University offers a wide array of online programs. But unlike most other online universities, some programs involve robots.
Here’s how it works: online and in-person students gather in the same classroom. Self-balancing robots mounted with computers roll around the room, displaying the face of one remote student. Each remote student streams in and controls one robot, which allows them to literally and figuratively take a seat at the table.
Professor Christine Greenhow, who teaches graduate level courses in MSU’s College of Education, first encountered these robots at an alumni event.
“I thought, ‘Oh I could use this technology in my classroom. I could use this to put visual and movement cues back into the environment,’” Greenhow said.
From DSC: In my work to bring remote learners into face-to-face classrooms at Calvin College, I also worked with some of the tools shown/mentioned in that article — such as the Telepresence Robot from Double Robotics and the unit from Swivl. I also introduced Blackboard Collaborate and Skype as other methods of bringing in remote students (hadn’t yet tried Zoom, but that’s another possibility).
As one looks at the image above, one can’t help but wonder what such a picture will look like 5-10 years from now. Will it show folks wearing VR-based headsets at their respective locations? Or perhaps some setups will feature the following types of tools within smaller “learning hubs” (which could also include one’s local Starbucks, Apple Store, etc.)?
Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public. The diversity of opinions and debates gathered from news articles this year illustrates just how broadly AI is being investigated, studied, and applied. However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field.
Without the relevant data for reasoning about the state of AI technology, we are essentially “flying blind” in our conversations and decision-making related to AI.
Created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), the AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. This is the inaugural annual report of the AI Index, and in this report we look at activity and progress in Artificial Intelligence through a range of perspectives. We aggregate data that exists freely on the web, contribute original data, and extract new metrics from combinations of data series.
All of the data used to generate this report will be openly available on the AI Index website at aiindex.org. Providing data, however, is just the beginning. To become truly useful, the AI Index needs support from a larger community. Ultimately, this report is a call for participation. You have the ability to provide data, analyze collected data, and make a wish list of what data you think needs to be tracked. Whether you have answers or questions to provide, we hope this report inspires you to reach out to the AI Index and become part of the effort to ground the conversation about AI.
Code that learns may prove to be the most important invention in human history. But in 2016, there was almost no discussion of the implications of artificial intelligence (AI) in K-12 education—either the immense implications for the employment landscape or the exciting potential to improve learning.
We spent two years studying the implications of AI and concluded that machine intelligence turbocharged by big data and enabling technologies like robotics is the most significant change force facing humanity. Given enormous benefits and challenges we’re just beginning to understand, we believe it is an important time to Ask About AI (#AskAboutAI).
…
After interviewing experts, hosting a dozen community conversations, and posting more than 50 articles we’re summarizing what we’ve learned in a new paper Ask About AI: The Future of Learning and Work.
The paper explores what’s happening in the automation economy, the civic and social implications, and how to prepare ourselves and our children for exponential change.
With this launch we’re also launching a new microsite on Future of Work.
To initiate lifelong learning, secondary schools should encourage students to reflect on how they learn and to build habits of success. An increasing number of organizations are interested in being lifelong learning partners for students — college alumni associations, professional schools, and private marketplaces among them.
Self-directed learning is most powerfully driven by a sense of purpose. In our study of Millennial employment, Generation Do It Yourself, we learned that it is critical for young people to develop a sense of purpose before attending college to avoid the new worst-case scenario—racking up college debt and dropping out. A sense of purpose can be developed around a talent or issue, or their intersection; both can be cultivated by a robust guidance system.
We’ve been teaching digital literacy for two decades, but what’s new is that we all need to appreciate that algorithms curate every screen we see. As smart machines augment our capabilities, they will increasingly influence our perceptions, opportunities and decisions. That means that to self- and social awareness, we’ll soon need to add AI awareness.
Taken together, these skills and dispositions create a sense of agency—the ability to take ownership of learning, grow through effort and work with other people in order to do the learning you need to do.
WayRay makes augmented reality hardware and software for cars and drivers.
The company won a start-up competition at the Los Angeles Auto Show.
WayRay has also received an investment from Alibaba.
WayRay’s augmented reality driving system makes a car’s windshield look like a video game. The Swiss-based company that makes augmented reality for cars won the grand prize in a start-up competition at the Los Angeles Auto Show on Tuesday. WayRay makes a small device called Navion, which projects a virtual dashboard onto a driver’s windshield. The software can display information on speed, time of day, or even arrows and other graphics that can help the driver navigate, avoid hazards, and warn of dangers ahead, such as pedestrians. WayRay says that by displaying information directly on the windshield, the system allows drivers to stay better focused on the road. The display might appear similar to what a player would see on a screen in many video games. But the system also notifies the driver of potential points of interest along a route such as restaurants or other businesses.
Virtual reality is arguably a good medium for art: it not only enables creativity that just isn’t possible if you stick to physical objects, it allows you to share pieces that would be difficult to appreciate staring at an ordinary computer screen. And HTC knows it. The company is launching Vive Arts, a “multi-million dollar” program that helps museums and other institutions fund, develop and share art in VR. And yes, this means apps you can use at home… including one that’s right around the corner.
There are no room-scale sensors or controllers, because The Ochre Atelier, as the experience is called, is designed to be accessible to everyone regardless of computing expertise. And at roughly 6-7 minutes long, it’s also bite-size enough that hopefully every visitor to the exhibition can take a turn. Its length and complexity don’t make it any less immersive though. The experience itself is, superficially, a tour of Modigliani’s last studio space in Paris: a small, thin rectangular room a few floors above street level.
In all, it took five months to digitally re-create the space. A wealth of research went into The Ochre Atelier, from 3D mapping the actual room — the building is now a bed-and-breakfast — to looking at pictures and combing through first-person accounts of Modigliani’s friends and colleagues at the time. The developers at Preloaded took all this and built a historically accurate re-creation of what the studio would’ve looked like. You teleport around this space a few times, seeing it from different angles and getting more insight into the artist at each stop. Look at a few obvious “more info” icons from each perspective and you’ll hear narrated the words of those closest to Modigliani at the time, alongside some analyses from experts at the Tate.
Next-Gen Virtual Reality Will Let You Create From Scratch—Right Inside VR — from autodesk.com by Marcello Sgambelluri The architecture, engineering and construction (AEC) industry is about to undergo a radical shift in its workflow. In the near future, designers and engineers will be able to create buildings and cities, in real time, in virtual reality (VR).
Excerpt:
What’s Coming: Creation

Still, these examples only scratch the surface of VR’s potential in AEC. The next big opportunity for designers and engineers will move beyond visualization to actually creating structures and products from scratch in VR. Imagine VR for Revit: What if you could put on an eye-tracking headset and, with the movement of your hands and wrists, grab a footing, scale a model, lay it out, push it, spin it, and change its shape?
What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:
According to the Organisation for Economic Co-operation and Development (OECD), only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.
Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”
While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.
The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.
Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.
The rise of more sophisticated cognitive technologies is, of course, critical to that third era, aiding advances in several categories:
Rules-based systems capture and use experts’ knowledge to provide answers to tricky but routine problems. As this decades-old form of AI grows more sophisticated, users may forget they aren’t conversing with a real person.
Speech recognition transcribes human speech automatically and accurately. The technology is improving as machines collect more examples of conversation. This has obvious value for dictation, phone assistance, and much more.
Machine translation, as the name indicates, translates text or speech from one language to another. Significant advances have been made in this field in only the past year. Machine translation has obvious implications for international relations, defense, and intelligence as well as, in our multilingual society, numerous domestic applications.
Computer vision is the ability to identify objects, scenes, and activities in naturally occurring images. It’s how Facebook sorts millions of users’ photos, but it can also scan medical images for indications of disease and identify criminals from surveillance footage. Soon it will allow law enforcement to quickly scan license plate numbers of vehicles stopped at red lights, identifying suspects’ cars in real time.
Machine learning takes place without explicit programming. By trial and error, computers learn how to learn, mining information to discover patterns in data that can help predict future events. The larger the datasets, the easier it is to accurately gauge normal or abnormal behavior. When your email program flags a message as spam, or your credit card company warns you of a potentially fraudulent use of your card, machine learning may be involved (a minimal spam-filter sketch appears after this list). Deep learning is a branch of machine learning involving artificial neural networks inspired by the brain’s structure and function.
Robotics is the creation and use of machines to perform automated physical functions. The integration of cognitive technologies such as computer vision with sensors and other sophisticated hardware has given rise to a new generation of robots that can work alongside people and perform many tasks in unpredictable environments. Examples include drones, robots used for disaster response, and robot assistants in home health care.
Natural language processing refers to the complex and difficult task of organizing and understanding language in a human way. This goes far beyond interpreting search queries, or translating between Mandarin and English text. Combined with machine learning, a system can scan websites for discussions of specific topics even if the user didn’t input precise search terms. Computers can identify all the people and places mentioned in a document or extract terms and conditions from contracts (a minimal entity-extraction sketch also appears after this list). As with all AI-enabled technology, these become smarter as they consume more accurate data — and as developers integrate complementary technologies such as machine translation and natural language processing.
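To make the spam-flagging example above concrete, here is a minimal sketch of a learned filter, assuming Python with scikit-learn installed. The four training messages are invented purely for illustration; real filters learn from millions of labeled examples.

```python
# Minimal sketch of a learned spam filter: no hand-coded rules, just
# patterns mined from labeled examples. Assumes scikit-learn is installed;
# the tiny dataset here is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize, click now",       # spam
    "Limited offer, claim your reward",  # spam
    "Meeting moved to 3pm tomorrow",     # ham (not spam)
    "Here are the lecture notes",        # ham
]
labels = ["spam", "spam", "ham", "ham"]

# The model learns word frequencies per class rather than explicit rules.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free reward now"]))  # likely ['spam']
```

With more (and more varied) training data, the same pipeline generalizes to messages it has never seen, which is precisely the "learning without explicit programming" the passage describes.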
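Likewise, identifying the people and places mentioned in a document is the named-entity recognition task described above. Here is a minimal sketch, assuming Python with the spaCy library and its small English model installed; spaCy is chosen for illustration and is not a tool named in the source.

```python
# Minimal sketch of named-entity recognition: pull people and places out
# of free text. Assumes spaCy and its small English model are installed
# (pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ada Lovelace met Charles Babbage in London to discuss the engine.")

for ent in doc.ents:
    if ent.label_ in ("PERSON", "GPE"):  # GPE = countries, cities, states
        print(ent.text, "->", ent.label_)

# Expected output (model-dependent):
# Ada Lovelace -> PERSON
# Charles Babbage -> PERSON
# London -> GPE
```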
…
We’ve developed a framework that can help government agencies assess their own opportunities for deploying these technologies. It involves examining business processes, services, and programs to find where cognitive technologies may be viable, valuable, or even vital. Figure 8 summarizes this “Three Vs” framework. Government agencies can use it to screen the best opportunities for automation or cognitive technologies.
The same techniques that generate images of smoke, clouds and fantastic beasts in movies can render neurons and brain structures in fine-grained detail.
Two projects presented yesterday at the 2017 Society for Neuroscience annual meeting in Washington, D.C., gave attendees a sampling of what these powerful technologies can do.
“These are the same rendering techniques that are used to make graphics for ‘Harry Potter’ movies,” says Tyler Ard, a neuroscientist in Arthur Toga’s lab at the University of Southern California in Los Angeles. Ard presented the results of applying these techniques to magnetic resonance imaging (MRI) scans.
The methods can turn massive amounts of data into images, making them ideally suited to generate brain scans. Ard and his colleagues develop code that enables them to easily enter data into the software. They plan to make the code freely available to other researchers.
After several cycles of development, it became clear that getting our process into VR as early as possible was essential. This was difficult to do within the landscape of VR tooling. So, at the beginning of 2017, we began developing features for early-stage VR prototyping in a tool named “Expo.”
…

Start Prototyping in VR Now

We developed Expo because the tools for collaborative prototyping did not exist at the start of this year. Since then, the landscape has dramatically improved, and there are many tools providing prototyping workflows with no requirement to do your own development:
It’s no secret that one of the biggest issues holding back virtual and augmented reality is the lack of content.
Even as bigger studios and companies are sinking more and more money into VR and AR development, it’s still difficult for smaller, independent, developers to get started. A big part of the problem is that AR and VR apps require developers to create a ton of 3D objects, often an overwhelming and time-consuming process.
Google is hoping to fix that, though, with its new service called Poly, an online library of 3D objects developers can use in their own apps.
The model is a bit like Flickr, but for VR and AR developers rather than photographers. Anyone can upload their own 3D creations to the service and make them available to others via a Creative Commons license, and any developer can search and download objects for their own apps and games.
Creative voice tech — from jwtintelligence.com by Ella Britton New programs from Google and the BBC use voice to steer storytelling with digital assistants.
Excerpt:
BBC Radio’s newest program, The Inspection Chamber, uses smart home devices to allow listeners to interact with and control the plot. Amid a rise in choose-your-own-adventure style programming, The Inspection Chamber opens up creative new possibilities for brands hoping to make use of voice assistants.
The Inspection Chamber tells the story of an alien stranded on earth, who is being interrogated by scientists and an AI robot called Dave. In this interactive drama, Amazon Echo and Google Home users play the part of the alien, answering questions and interacting with other characters to determine the story’s course. The experience takes around 20 minutes, with questions like “Cruel or Kind?” and “Do you like puzzles?” that help the scientists categorize a user.
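Mechanically, this kind of interactive audio drama boils down to a story graph that the listener’s answers walk through. The sketch below illustrates that branching mechanic in Python with an invented two-question story; it is not the BBC’s actual implementation, which runs on the Alexa and Google Home voice platforms.

```python
# Minimal sketch of a branching voice story: each node speaks a prompt and
# the listener's answer selects the next node. The story content here is
# invented; a production skill would use the Alexa or Google Assistant SDKs.
STORY = {
    "start": {
        "prompt": "Cruel or kind?",
        "next": {"cruel": "interrogation", "kind": "friendly_chat"},
    },
    "interrogation": {
        "prompt": "Do you like puzzles?",
        "next": {"yes": "end_puzzle", "no": "end_plain"},
    },
    "friendly_chat": {
        "prompt": "Do you like puzzles?",
        "next": {"yes": "end_puzzle", "no": "end_plain"},
    },
    "end_puzzle": {"prompt": "The scientists file you under 'enigma'. The end."},
    "end_plain": {"prompt": "Dave the AI sighs and closes your case. The end."},
}

def run(story, node="start"):
    while True:
        state = story[node]
        print(state["prompt"])                 # a voice skill would speak this
        if "next" not in state:                # terminal node: story over
            return
        answer = input("> ").strip().lower()   # stands in for speech input
        node = state["next"].get(answer, node) # re-ask on unrecognized answers

if __name__ == "__main__":
    run(STORY)
```

Swapping `print` and `input` for a platform’s text-to-speech and speech-recognition hooks turns this console toy into the choose-your-own-adventure pattern the article describes.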
…
“Voice is going to be a key way we interact with media, search for content, and find what we want,” said BBC director general Tony Hall. As described in the Innovation Group’s Speak Easy report, the opportunities for brands to connect with consumers via smart speakers are substantial, particularly when it comes to education and entertainment. Brands can also harness these opportunities by creating entertaining and engaging content that consumers can interact with, creating what feels like a two-way dialogue.
From DSC: More and more, our voices will drive the way we interact with computing devices/applications. This item was an especially interesting item to me, as it involves the use of our voice — at home — to steer the storytelling that’s taking place. Talk about a new form of interactivity!
We need academia to step up to fill in the gaps in our collective understanding about the new role of technology in shaping our lives. We need robust research on hiring algorithms that seem to filter out people with mental health disorders, sentencing algorithms that fail twice as often for black defendants as for white defendants, statistically flawed public teacher assessments or oppressive scheduling algorithms. And we need research to ensure that the same mistakes aren’t made again and again. It’s absolutely within the abilities of academic research to study such examples and to push against the most obvious statistical, ethical or constitutional failures and dedicate serious intellectual energy to finding solutions. And whereas professional technologists working at private companies are not in a position to critique their own work, academics theoretically enjoy much more freedom of inquiry.
There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology — and specifically, the algorithms that are responsible for so many decisions — in our lives.
There’s one solution for the short term. We urgently need an academic institute focused on algorithmic accountability. First, it should provide a comprehensive ethical training for future engineers and data scientists at the undergraduate and graduate levels, with case studies taken from real-world algorithms that are choosing the winners from the losers. Lecturers from humanities, social sciences and philosophy departments should weigh in.
Somewhat related:
More than 50 experts just told DHS that using AI for “extreme vetting” is dangerously misguided — from qz.com by Dave Gershgorn
Excerpt:
A group of experts from Google, Microsoft, MIT, NYU, Stanford, Spotify, and AI Now are urging (pdf) the Department of Homeland Security to reconsider using automated software powered by machine learning to vet immigrants and visitors trying to enter the United States.
As we increase the usage of chatbots in our personal lives, we will expect to use them in the workplace to assist us with things like finding new jobs, answering frequently asked HR-related questions, or even receiving coaching and mentoring. Chatbots digitize HR processes and enable employees to access HR solutions from anywhere. Using artificial intelligence in HR will create a more seamless employee experience, one that is nimbler and more user-driven.