The Section 508 Refresh and What It Means for Higher Education — from er.educause.edu by Martin LaGrow

Excerpts (emphasis DSC):

Higher education should now be on notice: Anyone with an Internet connection can file a complaint or civil lawsuit, not just students with disabilities. And though Section 508 was previously unclear as to the expectations for accessibility, the updated requirements add specific web standards to adhere to — specifically, the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA developed by the World Wide Web Consortium (W3C).

Although WCAG has existed since the late 1990s (WCAG 2.0 itself was published in 2008), it originated as a self-regulating tool meant to create uniform web standards around the globe. It was understood to be a set of best practices but was not enforced by any regulating agency. The Section 508 refresh, due in January 2018, changes this: WCAG 2.0 Level AA has been adopted as the standard of expected accessibility. Thus, all organizations subject to Section 508, including colleges and universities, that create and publish digital content — web pages, documents, images, videos, audio — must ensure that they know and understand these standards.

Reacting to the Section 508 Refresh
In a few months, the revised Section 508 standards become enforceable law. As stated, this should be considered not a threat or burden but an opportunity for institutions to check their present level of commitment and adherence to accessibility. To prepare for the updated standards, institutions can take a number of proactive steps:

  • Contract a third-party expert partner to review institutional accessibility policies and practices and craft a long-term plan to ensure compliance.
  • Review all public-facing websites and electronic documents to ensure compliance with WCAG 2.0 Level AA standards.
  • Develop and publish a policy to state the level of commitment and adherence to Section 508 and WCAG 2.0 Level AA.
  • Create an accessibility training plan for all individuals responsible for creating and publishing electronic content.
  • Ensure all ICT contracts, ROIs, and purchases include provisions for accessibility.
  • Inform students of their rights related to accessibility, as well as where to address concerns internally. Then support the students with timely resolutions.
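Parts of the review step above can be automated. As a minimal sketch of where a compliance scan might start, the Python snippet below checks one narrow WCAG 2.0 requirement — that every `<img>` element carries a text alternative (Success Criterion 1.1.1) — using only the standard library. A real audit would rely on a dedicated tool such as axe-core or WAVE; the function name here is invented for illustration.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # WCAG 2.0 SC 1.1.1: every image needs a text alternative.
            # (Purely decorative images may legitimately use alt="".)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "<no src>"))

def find_images_missing_alt(html: str) -> list:
    """Return the src of every <img> in the page with no alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt
```

Run against each public-facing page, the flagged `src` values become a worklist for content owners — one small, checkable slice of a much larger Level AA review.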

As always, remember that the pursuit of accessibility demonstrates a spirit of inclusiveness that benefits everyone. Embracing the challenge to meet the needs of all students is a noble pursuit, but it’s not just an adoption of policy. It’s a creation of awareness, an awareness that fosters a healthy shift in culture. When this is the approach, the motivation to support all students drives every conversation, and the fear of legal repercussions becomes secondary. This should be the goal of every institution of learning.

Program Easily Converts Molecules to 3D Models for 3D Printing, Virtual and Augmented Reality — from 3dprint.com

Excerpt:

At North Carolina State University, Assistant Professor of Chemistry Denis Fourches uses technology to research the effectiveness of new drugs. He uses computer programs to model interactions between chemical compounds and biological targets to predict the effectiveness of the compound, narrowing the field of drug candidates for testing. Lately, he has been using a new program that allows the user to create 3D models of molecules for 3D printing, plus augmented and virtual reality applications.

RealityConvert converts molecular objects like proteins and drugs into high-quality 3D models. The models are generated in standard file formats that are compatible with most augmented and virtual reality programs, as well as 3D printers. The program is specifically designed for creating models of chemicals and small proteins.
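RealityConvert's internals aren't described in the excerpt, but the core step — turning atomic coordinates into a standard 3D file format — can be sketched. The toy Python function below writes a list of atoms as a colored point cloud in ASCII PLY, a format many 3D-printing and AR/VR pipelines accept. The element colors are a rough CPK-style guess, not RealityConvert's actual scheme, and real converters emit full meshes (spheres and bonds) rather than bare points.

```python
# Toy converter: atomic coordinates -> ASCII PLY point cloud.
# Illustrative only; tools like RealityConvert produce full meshes.

ELEMENT_COLORS = {  # rough CPK-style RGB values (assumed, not canonical)
    "H": (255, 255, 255), "C": (64, 64, 64),
    "N": (48, 80, 248), "O": (255, 13, 13),
}

def atoms_to_ply(atoms):
    """atoms: list of (element, x, y, z) tuples -> PLY file text."""
    header = [
        "ply", "format ascii 1.0",
        f"element vertex {len(atoms)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "end_header",
    ]
    body = []
    for element, x, y, z in atoms:
        r, g, b = ELEMENT_COLORS.get(element, (128, 128, 128))
        body.append(f"{x} {y} {z} {r} {g} {b}")
    return "\n".join(header + body) + "\n"
```

Feeding in coordinates from a standard XYZ molecule file would give a file that slicers and most 3D viewers can open directly — the same format-translation idea the article describes, at miniature scale.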

Mozilla just launched an augmented reality app — from thenextweb.com by Matthew Hughes

Excerpt:

Mozilla has launched its first ever augmented reality app for iOS. The company, best known for its Firefox browser, wants to create an avenue for developers to build augmented reality experiences using open web technologies, WebXR, and Apple’s ARKit framework.

This latest effort from Mozilla is called WebXR Viewer. It contains several sample AR programs, demonstrating its technology in the real world. One is a teapot, suspended in the air. Another contains holographic silhouettes, which you can place in your immediate vicinity. Should you be so inclined, you can also use it to view your own WebXR creations.

Airbnb is replacing the guest book with augmented reality — from qz.com by Mike Murphy

Excerpt:

Airbnb announced today (Dec. 11) that it’s experimenting with augmented- and virtual-reality technologies to enhance customers’ travel experiences.

The company showed off some simple prototype ideas in a blog post, detailing how VR could be used to explore apartments that customers may want to rent, from the comfort of their own homes. Hosts could scan apartments or houses to create 360-degree images that potential customers could view on smartphones or VR headsets.

It also envisioned an augmented-reality system where hosts could leave notes and instructions to their guests as they move through their apartment, especially if their house’s setup is unusual. AR signposts in the Airbnb app could help guide guests through anything confusing more efficiently than the instructions hosts often leave for their guests.

This HoloLens App Wants to Kickstart Collaborative Mixed Reality — from vrscout.com by Alice Bonasio

Excerpt:

Object Theory has just released a new collaborative computing application for the HoloLens called Prism, which takes many of the functionalities the company has been developing for clients over the past couple of years and offers them to users in a free Windows Store application.

Virtual and Augmented Reality to Nearly Double Each Year Through 2021 — from campustechnology.com by Joshua Bolkan

Excerpt:

Spending on augmented and virtual reality will nearly double in 2018, according to a new forecast from International Data Corp. (IDC), growing from $9.1 billion in 2017 to $17.8 billion next year. The market research company predicts that aggressive growth will continue throughout its forecast period, achieving an average 98.8 percent compound annual growth rate (CAGR) from 2017 to 2021.
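The forecast's arithmetic is easy to sanity-check. A quick Python calculation using only the figures quoted above:

```python
# Figures from the IDC forecast quoted above (in $ billions).
spend_2017 = 9.1
spend_2018 = 17.8

# Year-over-year growth implied by the 2018 forecast:
yoy_growth = (spend_2018 / spend_2017 - 1) * 100  # about 95.6%

# Compounding IDC's stated 98.8% CAGR over the four years 2017 -> 2021:
cagr = 0.988
spend_2021 = spend_2017 * (1 + cagr) ** 4  # roughly $142B
```

The 2017-to-2018 jump works out to about 96 percent — consistent with "nearly double" — and compounding the stated 98.8 percent rate from the 2017 base for four years implies spending on the order of $142 billion by 2021.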

A look at the new BMW i3s in augmented reality with Apple’s ARKit — from electrek.co by Fred Lambert

Scope AR brings remote video tech support calls to HoloLens — by Dean Takahashi

Excerpt:

Scope AR has launched Remote AR, an augmented reality video support solution for Microsoft’s HoloLens AR headsets.

The San Francisco company is launching its enterprise-class AR solution to enable cross-platform live support video calls.

Remote AR for Microsoft HoloLens brings AR support for field technicians, enabling them to perform tasks with better speed and accuracy. It does so by allowing an expert to get on a video call with a technician and then mark the spot on the screen where the technician has to do something, like turn a screwdriver. The technician is able to see where the expert is pointing by looking at the AR overlay on the video scene.

Virtual Reality: The Next Generation Of Education, Learning and Training — from forbes.com by Kris Kolo

Excerpt:

Ultimately, VR in education will revolutionize not only how people learn but how they interact with real-world applications of what they have been taught. Imagine medical students performing an operation or geography students really seeing where and what Kathmandu is. The world just opens up to a rich abundance of possibilities.

From DSC:

After looking at the items below, I wondered…

How soon before teachers/professors/trainers can quickly reconfigure their rooms’ settings via their voices? For example, faculty members will likely soon be able to quickly establish lighting, volume levels, blinds, or other types of room setups with their voices. This could be in addition to the use of beacons and smartphones that automatically recognize who just walked into the room and how that person wants the room to be configured on startup.

This functionality is probably already here…I just don’t know about it yet.
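Once speech-to-text is handled, the voice-driven room setup imagined above is mostly a mapping problem: recognize a preset name in the utterance, apply its settings. A hypothetical Python sketch — every preset, setting name, and value here is invented for illustration, not taken from any real room-control API:

```python
# Hypothetical voice-driven room presets. Assumes an upstream
# speech-to-text layer has already produced the utterance string.

ROOM_PRESETS = {  # invented example presets
    "lecture":    {"lights": 80,  "blinds": "closed", "volume": 60},
    "discussion": {"lights": 100, "blinds": "open",   "volume": 40},
    "video":      {"lights": 20,  "blinds": "closed", "volume": 75},
}

def apply_voice_command(utterance: str, room_state: dict) -> dict:
    """Match a known preset name in the utterance and apply its settings."""
    for name, settings in ROOM_PRESETS.items():
        if name in utterance.lower():
            room_state.update(settings)
            break
    return room_state
```

The beacon/smartphone idea fits the same shape: on room entry, the system could call the same function with a per-person default preset instead of a spoken command.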

Somfy Adds Voice Control for Motorized Window Coverings with Amazon Alexa — from ravepubs.com by Sara Abrons

5 technologies disrupting the app development industry — from cio.com by Kevin Rands
Developers who want to be at the top of their game will need to roll with the times and constantly innovate, whether they’re playing around with new form factors or whether they’re learning to code in a new language.

Excerpts:

But with so much disruption on the horizon, what does this mean for app developers? Let’s find out.

  1. AI and machine learning
  2. The Internet of Things
  3. Blockchain
  4. Self-driving tech
  5. AR and VR

Robots in the Classroom: How a Program at Michigan State Is Taking Blended Learning to New Places — from news.elearninginside.com by Henry Kronk; with thanks to my friend and colleague, Mr. Dave Goodrich over at MSU, for his tweet on this.

Excerpt:

Like many higher education institutions, Michigan State University offers a wide array of online programs. But unlike most other online universities, some programs involve robots.

Here’s how it works: online and in-person students gather in the same classroom. Self-balancing robots mounted with computers roll around the room, displaying the face of one remote student. Each remote student streams in and controls one robot, which allows them to literally and figuratively take a seat at the table.

Professor Christine Greenhow, who teaches graduate level courses in MSU’s College of Education, first encountered these robots at an alumni event.

“I thought, ‘Oh I could use this technology in my classroom. I could use this to put visual and movement cues back into the environment,’” Greenhow said.

From DSC:
In my work to bring remote learners into face-to-face classrooms at Calvin College, I also worked with some of the tools shown/mentioned in that article — such as the Telepresence Robot from Double Robotics and the unit from Swivl. I also introduced Blackboard Collaborate and Skype as other methods of bringing in remote students (hadn’t yet tried Zoom, but that’s another possibility).

As one looks at the image above, one can’t help but wonder what such a picture will look like 5-10 years from now. Will it picture folks wearing VR-based headsets at their respective locations? Or perhaps some setups will feature the following types of tools within smaller “learning hubs” (which could also include one’s local Starbucks, Apple Store, etc.)?

AI Index 2017 Annual Report — from aiindex.org
Excerpt:

Artificial Intelligence has leapt to the forefront of global discourse, garnering increased attention from practitioners, industry leaders, policymakers, and the general public. The diversity of opinions and debates gathered from news articles this year illustrates just how broadly AI is being investigated, studied, and applied. However, the field of AI is still evolving rapidly and even experts have a hard time understanding and tracking progress across the field.

Without the relevant data for reasoning about the state of AI technology, we are essentially “flying blind” in our conversations and decision-making related to AI.

Created and launched as a project of the One Hundred Year Study on AI at Stanford University (AI100), the AI Index is an open, not-for-profit project to track activity and progress in AI. It aims to facilitate an informed conversation about AI that is grounded in data. This is the inaugural annual report of the AI Index, and in this report we look at activity and progress in Artificial Intelligence through a range of perspectives. We aggregate data that exists freely on the web, contribute original data, and extract new metrics from combinations of data series.

All of the data used to generate this report will be openly available on the AI Index website at aiindex.org. Providing data, however, is just the beginning. To become truly useful, the AI Index needs support from a larger community. Ultimately, this report is a call for participation. You have the ability to provide data, analyze collected data, and make a wish list of what data you think needs to be tracked. Whether you have answers or questions to provide, we hope this report inspires you to reach out to the AI Index and become part of the effort to ground the conversation about AI.

Ask About AI: The Future of Learning and Work — from gettingsmart.com by Tom Vander Ark

Excerpts:

Code that learns may prove to be the most important invention in human history. But in 2016, there was almost no discussion of the implications of artificial intelligence (AI) in K-12 education—either the immense implications for the employment landscape or the exciting potential to improve learning.

We spent two years studying the implications of AI and concluded that machine intelligence turbocharged by big data and enabling technologies like robotics is the most significant change force facing humanity. Given enormous benefits and challenges we’re just beginning to understand, we believe it is an important time to Ask About AI (#AskAboutAI).

After interviewing experts, hosting a dozen community conversations, and posting more than 50 articles, we’re summarizing what we’ve learned in a new paper, Ask About AI: The Future of Learning and Work.

The paper explores what’s happening in the automation economy, the civic and social implications, and how to prepare ourselves and our children for exponential change.

With this launch we’re also launching a new microsite on Future of Work.

To initiate lifelong learning, secondary schools should encourage students to reflect on how they learn and to build habits of success. An increasing number of organizations are interested in being lifelong learning partners for students — college alumni associations, professional schools, and private marketplaces among them.

Self-directed learning is most powerfully driven by a sense of purpose. In our study of Millennial employment, Generation Do It Yourself, we learned that it is critical for young people to develop a sense of purpose before attending college to avoid the new worst-case scenario—racking up college debt and dropping out. A sense of purpose can be developed around a talent or issue, or their intersection; both can be cultivated by a robust guidance system.

We’ve been teaching digital literacy for two decades, but what’s new is that we all need to appreciate that algorithms curate every screen we see. As smart machines augment our capabilities, they will increasingly influence our perceptions, opportunities and decisions. That means that to self- and social awareness, we’ll soon need to add AI awareness.

Taken together, these skills and dispositions create a sense of agency—the ability to take ownership of learning, grow through effort and work with other people in order to do the learning you need to do.

Augmented reality will transform city life — from venturebeat.com by Michael Park

Excerpts:

I’ve interviewed three AR entrepreneurs who explain three key ways that AR is set to transform urban living.

  • The real world will be indexed
  • Commuting will be smarter and safer
  • Language will be less of a barrier

Virtual Reality Devices – Where They Are Now and Where They’re Going — from iqsdirectory.com

Excerpts:

The questions now are:

  • What are the actual VR devices available?
  • Are they reasonably priced?
  • What do they do?
  • What are they going to do?

We try to answer those questions [here in this article].

In this early stage, the big question becomes, “What’s next?”

  • Integration of non-VR devices with VR users
  • Move away from needing a top-notch PC (or any PC)
  • Controllers will be your hands

Alibaba-backed augmented reality start-up makes driving look like a video game — from cnbc.com by Robert Ferris

  • WayRay makes augmented reality hardware and software for cars and drivers.
  • The company won a start-up competition at the Los Angeles Auto Show.
  • WayRay has also received an investment from Alibaba.

WayRay’s augmented reality driving system makes a car’s windshield look like a video game. The Swiss-based company that makes augmented reality for cars won the grand prize in a start-up competition at the Los Angeles Auto Show on Tuesday. WayRay makes a small device called Navion, which projects a virtual dashboard onto a driver’s windshield. The software can display information on speed, time of day, or even arrows and other graphics that can help the driver navigate, avoid hazards, and warn of dangers ahead, such as pedestrians. WayRay says that by displaying information directly on the windshield, the system allows drivers to stay better focused on the road. The display might appear similar to what a player would see on a screen in many video games. But the system also notifies the driver of potential points of interest along a route such as restaurants or other businesses.

HTC’s VR arts program brings exhibits to your home — from engadget.com by Jon Fingas
Vive Arts helps creators produce and share work in VR.

Excerpt:

Virtual reality is arguably a good medium for art: it not only enables creativity that just isn’t possible if you stick to physical objects, it allows you to share pieces that would be difficult to appreciate staring at an ordinary computer screen. And HTC knows it. The company is launching Vive Arts, a “multi-million dollar” program that helps museums and other institutions fund, develop and share art in VR. And yes, this means apps you can use at home… including one that’s right around the corner.

VR at the Tate Modern’s Modigliani exhibition is no gimmick — from engadget.com by Jamie Rigg
‘The Ochre Atelier’ experience is an authentic addition.

Excerpt:

There are no room-scale sensors or controllers, because The Ochre Atelier, as the experience is called, is designed to be accessible to everyone regardless of computing expertise. And at roughly 6-7 minutes long, it’s also bite-size enough that hopefully every visitor to the exhibition can take a turn. Its length and complexity don’t make it any less immersive though. The experience itself is, superficially, a tour of Modigliani’s last studio space in Paris: a small, thin rectangular room a few floors above street level.

In all, it took five months to digitally re-create the space. A wealth of research went into The Ochre Atelier, from 3D mapping the actual room — the building is now a bed-and-breakfast — to looking at pictures and combing through first-person accounts of Modigliani’s friends and colleagues at the time. The developers at Preloaded took all this and built a historically accurate re-creation of what the studio would’ve looked like. You teleport around this space a few times, seeing it from different angles and getting more insight into the artist at each stop. Look at a few obvious “more info” icons from each perspective and you’ll hear the words of those closest to Modigliani at the time, narrated alongside some analyses from experts at the Tate.

Real human holograms for augmented, virtual and mixed reality — from 8i.com; with thanks to Lisa Dawley for her Tweet on this
Create, distribute and experience volumetric video of real people that look and feel as if they’re in the same room.

Next-Gen Virtual Reality Will Let You Create From Scratch—Right Inside VR — from autodesk.com by Marcello Sgambelluri
The architecture, engineering and construction (AEC) industry is about to undergo a radical shift in its workflow. In the near future, designers and engineers will be able to create buildings and cities, in real time, in virtual reality (VR).

Excerpt:

What’s Coming: Creation
Still, these examples only scratch the surface of VR’s potential in AEC. The next big opportunity for designers and engineers will move beyond visualization to actually creating structures and products from scratch in VR. Imagine VR for Revit: What if you could put on an eye-tracking headset and, with the movement of your hands and wrists, grab a footing, scale a model, lay it out, push it, spin it, and change its shape?

AI: Embracing the promises and realities — from the Allegis Group

Excerpts:

What will that future be? When it comes to jobs, the tea leaves are indecipherable as analysts grapple with emerging technologies, new fields of work, and skills that have yet to be conceived. The only certainty is that jobs will change. Consider the conflicting predictions put forth by the analyst community:

  • According to the Organisation for Economic Co-operation and Development, only 5-10% of labor would be displaced by intelligent automation, and new job creation will offset losses. (Inserted comment from DSC: Hmmm. ONLY 5-10%!? What?! That’s huge! And don’t count on the majority of those people becoming experts in robotics, algorithms, big data, AI, etc.)
  • The World Economic Forum said in 2016 that 60% of children entering school today will work in jobs that do not yet exist.
  • 47% of all American job functions could be automated within 20 years, according to a 2013 report from the Oxford Martin School.
  • In 2016, a KPMG study estimated that 100 million global knowledge workers could be affected by robotic process automation by 2025.

Despite the conflicting views, most analysts agree on one thing: big change is coming. Venture capitalist David Vandergrift has some words of advice: “Anyone not planning to retire in the next 20 years should be paying pretty close attention to what’s going on in the realm of AI. The supplanting (of jobs) will not happen overnight: the trend over the next couple of decades is going to be towards more and more automation.”

While analysts may not agree on the timing of AI’s development in the economy, many companies are already seeing its impact on key areas of talent and business strategy. AI is replacing jobs, changing traditional roles, applying pressure on knowledge workers, creating new fields of work, and raising the demand for certain skills.

The emphasis on learning is a key change from previous decades and rounds of automation. Advanced AI is, or will soon be, capable of displacing a very wide range of labor, far beyond the repetitive, low-skill functions traditionally thought to be at risk from automation. In many cases, the pressure on knowledge workers has already begun.

Regardless of industry, however, AI is a real challenge to today’s way of thinking about work, value, and talent scarcity. AI will expand and eventually force many human knowledge workers to reinvent their roles to address issues that machines cannot process. At the same time, AI will create a new demand for skills to guide its growth and development. These emerging areas of expertise will likely be technical or knowledge-intensive fields. In the near term, the competition for workers in these areas may change how companies focus their talent strategies.

How artificial intelligence could transform government — from Deloitte University Press
Cognitive technologies have the potential to revolutionize the public sector—and save billions of dollars

Excerpt:

The rise of more sophisticated cognitive technologies is, of course, critical to that third era, aiding advances in several categories:

  • Rules-based systems capture and use experts’ knowledge to provide answers to tricky but routine problems. As this decades-old form of AI grows more sophisticated, users may forget they aren’t conversing with a real person.
  • Speech recognition transcribes human speech automatically and accurately. The technology is improving as machines collect more examples of conversation. This has obvious value for dictation, phone assistance, and much more.
  • Machine translation, as the name indicates, translates text or speech from one language to another. Significant advances have been made in this field in only the past year. Machine translation has obvious implications for international relations, defense, and intelligence as well as, in our multilingual society, numerous domestic applications.
  • Computer vision is the ability to identify objects, scenes, and activities in naturally occurring images. It’s how Facebook sorts millions of users’ photos, but it can also scan medical images for indications of disease and identify criminals from surveillance footage. Soon it will allow law enforcement to quickly scan license plate numbers of vehicles stopped at red lights, identifying suspects’ cars in real time.
  • Machine learning takes place without explicit programming. By trial and error, computers learn how to learn, mining information to discover patterns in data that can help predict future events. The larger the datasets, the easier it is to accurately gauge normal or abnormal behavior. When your email program flags a message as spam, or your credit card company warns you of a potentially fraudulent use of your card, machine learning may be involved. Deep learning is a branch of machine learning involving artificial neural networks inspired by the brain’s structure and function.
  • Robotics is the creation and use of machines to perform automated physical functions. The integration of cognitive technologies such as computer vision with sensors and other sophisticated hardware has given rise to a new generation of robots that can work alongside people and perform many tasks in unpredictable environments. Examples include drones, robots used for disaster response, and robot assistants in home health care.
  • Natural language processing refers to the complex and difficult task of organizing and understanding language in a human way. This goes far beyond interpreting search queries, or translating between Mandarin and English text. Combined with machine learning, a system can scan websites for discussions of specific topics even if the user didn’t input precise search terms. Computers can identify all the people and places mentioned in a document or extract terms and conditions from contracts. As with all AI-enabled technology, these become smarter as they consume more accurate data—and as developers integrate complementary technologies such as machine translation and natural language processing.
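The spam-filter example in the machine-learning bullet above can be made concrete. Below is a minimal naive Bayes classifier in pure Python, trained on a toy four-message corpus; production filters use vastly more data and richer features, but the learn-from-labeled-examples pattern is the same.

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs, label in {"spam", "ham"}.
    Returns per-label word counts and per-label document counts."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    doc_counts = Counter()
    for text, label in labeled_docs:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, doc_counts

def classify(model, text):
    """Naive Bayes with add-one (Laplace) smoothing; returns the likelier label."""
    word_counts, doc_counts = model
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        total = sum(word_counts[label].values())
        # log prior + sum of log word likelihoods
        score = math.log(doc_counts[label] / sum(doc_counts.values()))
        for word in text.lower().split():
            count = word_counts[label][word] + 1  # smoothing avoids log(0)
            score += math.log(count / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

Nothing here is explicitly programmed to recognize spam: the model simply counts which words co-occur with which label and lets the data decide — the "without explicit programming" point the bullet makes.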

We’ve developed a framework that can help government agencies assess their own opportunities for deploying these technologies. It involves examining business processes, services, and programs to find where cognitive technologies may be viable, valuable, or even vital. Figure 8 summarizes this “Three Vs” framework. Government agencies can use it to screen the best opportunities for automation or cognitive technologies.


© 2017 | Daniel Christian