Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.


From DSC:
First of all, an article:

The four definitive use cases for AR and VR in retail — from forbes.com by Nikki Baird

AR in retail

Excerpt (emphasis DSC):

AR is the go-to engagement method of choice when it comes to product and category exploration. A label on a product on a shelf can only do so much to convey product and brand information, vs. AR, which can easily tap into a wealth of digital information online and bring it to life as an overlay on a product or on the label itself.

 

From DSC:
Applying this concept to the academic world…what might this mean for a student in a chemistry class who has a mobile device and/or a pair of smart goggles on and is working with an Erlenmeyer flask? A burette? A Bunsen burner?

Along these lines…what if all of those confused students — like *I* was, struggling through chem lab — could see how an experiment was *supposed to be done!?*

That is, if there’s only 30 minutes of lab time left, the professor or TA could “flip a switch” to turn on the AR cloud within the laboratory space to allow those struggling students to see how to do their experiment.
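
To make that concrete, here is a minimal sketch (in Python) of what such a “switch” might look like: a per-room flag that gates whether students’ AR devices can pull the demonstration steps anchored in that space. The room IDs, demo content, and function names are all hypothetical, not any vendor’s actual API.

```python
# Hypothetical sketch of the "flip a switch" idea: a per-room flag that,
# once enabled by the professor or TA, lets students' AR devices fetch
# the demonstration steps anchored in that lab space. All names and
# content here are invented for illustration.

AR_CLOUD_FLAGS: dict[str, bool] = {}  # room_id -> is the AR demo visible?

DEMOS = {
    "chem-lab-101": [
        "Clamp the burette vertically above the Erlenmeyer flask",
        "Fill the flask with 50 mL of the base solution",
        "Light the Bunsen burner and adjust it to a steady blue flame",
    ],
}

def flip_switch(room_id: str, on: bool = True) -> None:
    """Instructor control: turn the room's AR demo overlay on or off."""
    AR_CLOUD_FLAGS[room_id] = on

def visible_steps(room_id: str) -> list[str]:
    """What a student's device in this room would display right now."""
    if AR_CLOUD_FLAGS.get(room_id, False):
        return DEMOS.get(room_id, [])
    return []  # switch is off: no overlay shown

flip_switch("chem-lab-101")           # the TA flips the switch
print(visible_steps("chem-lab-101"))  # struggling students now see the steps
```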

I can’t tell you how many times I was just trying to get through the lab — not knowing what I was doing, and getting zero help from any professor or TA. I hardly learned a thing that stuck with me…except the names of a few devices and the abbreviations of a few chemicals. For the most part, it was a waste of money. How many students experience this as well and feel like I did?

Will the terms “blended learning” and/or “hybrid learning” take on whole new dimensions with the onset of AR, MR, and VR-related learning experiences?

#IntelligentTutoring #IntelligentSystems #LearningExperiences
#AR #VR #MR #XR #ARCloud #AssistiveTechnologies
#Chemistry #BlendedLearning #HybridLearning #DigitalLearning

 

Also see:

 

“It is conceivable that we’re going to be moving into a world without screens, a world where [glasses are] your screen. You don’t need any more form factor than [that].”

(AT&T CEO)


Skills gap? Augmented reality can beam in expertise across the enterprise — by Greg Nichols
Hives of subject matter experts could man augmented reality switchboards, transferring knowledge to the field.

Excerpt:

Some 10 million manufacturing jobs will likely be needed in the coming decade, yet many of those will likely go unfilled, according to Deloitte and the Manufacturing Institute. Somewhat ironically, one of the biggest factors holding back a strong American manufacturing segment in 2019 may not be cheap foreign labor but unqualified U.S. labor.

Augmented reality, which is still trying to find its stride in the enterprise, could help by serving as a conduit for on-the-job knowledge transfer.

“We are excited to offer industrial enterprises a new way to use AR to leverage the tribal knowledge of subject matter experts (SMEs) and help alleviate the skills gap crisis threatening today’s industrial enterprise,” says Mike Campbell, EVP, augmented reality products, PTC.

 

From DSC:
First, a posting that got me wondering about something I’ve pondered from time to time…

College of Business unveils classroom of the future — from biz.source.colostate.edu by Joe Giordano

Excerpt:

Equipped with a wall of 27 high-definition video screens as well as five high-end cameras, the newest classroom in Colorado State University’s College of Business is designed to connect on-campus and online students in a whole new way.

The College of Business unveiled on March 29 the “Room of the Future,” featuring Mosaic, an innovative technology – powered by mashme.io – that creates a blended classroom experience, connecting on-campus and online students in real time.

 

From DSC:
If the pedagogies could be worked out, this could be a very attractive model for many people in the future as it:

  • Provides convenience.
  • Offers more choice. More control. (Students could pick whether they want to attend the class virtually or in a physical classroom).

If the resulting increase in students could bring down the price of offering the course, will we see this model flourish in the near future? 

For struggling colleges and universities, could this help increase the ROI of offering their classes on their physical campuses?

The technologies behind this are not cheap though…and that could be a show-stopper for this type of experiment. But…thinking out loud again…what if there were a cheaper way to view a group of other people in your learning community? Perhaps there will be a solution using some form of Extended Reality (XR)…hmmm….


Also see:

Learning from the Living Class Room


Cambridge library installation gives readers control of their sensory space — from cambridge.wickedlocal.com by Hannah Schoenbaum

Excerpts:

A luminous igloo-shaped structure in the front room of the Cambridge Public Library beckoned curious library visitors during the snowy first weekend of March, inviting them to explore a space engineered for everyone, yet uniquely their own.

Called “Alterspace” and developed by Harvard’s metaLAB and Library Innovation Lab, this experiment in adaptive architecture granted the individual control over the sensory elements in his or her space. A user enters the LED-illuminated dome to find headphones, chairs and an iPad on a library cart, which displays six modes: Relax, Read, Meditate, Focus, Create and W3!Rd.

From the cool blues and greens of Relax mode to a rainbow overload of excitement in the W3!Rd mode, Alterspace is engineered to transform its lights, sounds and colors into the ideal environment for a particular action.
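
One rough way to picture what the software behind such a dome might be doing: each mode on the iPad maps to a preset of lights and sounds that a controller applies to the space. The sketch below is only a guess at that structure, with invented preset values; only the six mode names come from the article.

```python
# Guessed-at sketch of a mode-to-preset mapping for an adaptive space.
# Colors, sound files, and brightness values are invented; the actual
# Alterspace implementation may look nothing like this.

MODES = {
    "Relax":    {"colors": ["cool blue", "green"], "sound": "waves.ogg",   "brightness": 0.4},
    "Read":     {"colors": ["warm white"],         "sound": None,          "brightness": 0.8},
    "Meditate": {"colors": ["soft violet"],        "sound": "drone.ogg",   "brightness": 0.2},
    "Focus":    {"colors": ["neutral white"],      "sound": "rain.ogg",    "brightness": 0.7},
    "Create":   {"colors": ["amber", "teal"],      "sound": "ambient.ogg", "brightness": 0.6},
    "W3!Rd":    {"colors": ["rainbow cycle"],      "sound": "glitch.ogg",  "brightness": 1.0},
}

def apply_mode(name: str) -> dict:
    """Return the lighting/sound preset the dome controller would apply."""
    try:
        return MODES[name]
    except KeyError:
        raise ValueError(f"Unknown mode: {name}") from None

print(apply_mode("Relax"))  # cool blues and greens, low light, wave sounds
```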


From DSC:
This brings me back to the question/reflection…in the future, will students using VR headsets be able to study by a brook? An ocean? In a very quiet library (i.e., the headset would come with solid noise cancellation capabilities built into it)? This type of room/capability would really be helpful for our daughter…who is easily distracted and doesn’t like noise.


Virtual embodiment: High impact learning — from tlinnovations.cikeys.com

Excerpt:

It’s officially been one year since we started exploring immersive virtual learning with nursing students, starting with the Embodied Labs product: Becoming Alfred. The virtual product consists of an immersive simulated experience using virtual reality (VR) designed by Embodied Labs. Embodied Labs has three scenario series, referred to as labs:

  • The Alfred Lab: Learners experience life as Alfred, a 74-year-old African American male with macular degeneration and hearing loss.
  • The Beatriz Lab: A Journey Through Alzheimer’s Disease. The learner becomes Beatriz, a middle-aged Latina woman who transitions from the early to middle to late stages of Alzheimer’s disease.
  • The Clay Lab: End of Life Conversations. Learners become Clay, a 66-year-old male with a terminal diagnosis, whose experiences include receiving the diagnosis, hospice care at home, and the active dying process at the end of life.

 

From DSC:
I moderated a panel back at the NGLS Conference in 2017, and Carrie was one of the panelists talking about some of the promising applications of virtual reality. Carrie is doing marvelous work! Carrie’s mom had Alzheimer’s and my mom has that as well (as did my grandmother). It’s a tough disease to watch develop. Perhaps a student reading this out there will be the person to find a solution to this enormous issue.


Collaboration technology is fueling enterprise transformation – increasing agility, driving efficiency and improving productivity. Join Amy Chang at Enterprise Connect where she will share Cisco’s vision for the future of collaboration, the foundations we have in place and the amazing work we’re driving to win our customers’ hearts and minds. Cognitive collaboration – technology that weaves context and intelligence across applications, devices and workflows, connecting people with customers & colleagues, to deliver unprecedented experiences and transform how we work – is at the heart of our efforts. Join this session to see our technology in action and hear how our customers are using our portfolio of products today to transform the way they work.


A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
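
The flow the article describes (register once, then scan and deduct) is simple enough to sketch. Here is a toy version; the face-matching step, data stores, and fare amount are stand-ins, not Shenzhen’s actual system:

```python
# Toy sketch of the register-then-scan fare flow described above.
# A real system would match live camera frames against stored facial
# data; here a face is reduced to an opaque signature string.

riders: dict[str, dict] = {}  # face signature -> linked account info
FARE = 2.00                   # made-up fare amount

def register(face_sig: str, account: str, balance: float) -> None:
    """One-time step: store facial data and link a payment method."""
    riders[face_sig] = {"account": account, "balance": balance}

def enter_station(face_sig: str) -> bool:
    """Gate check: open if the face is registered and the fare clears."""
    rider = riders.get(face_sig)
    if rider is None or rider["balance"] < FARE:
        return False              # fall back to tapping a card or phone
    rider["balance"] -= FARE      # fare auto-deducted from linked account
    return True

register("sig-9f3a", "passenger-001", 20.00)
print(enter_station("sig-9f3a"))  # True; balance is now 18.00
print(enter_station("sig-0000"))  # False; unregistered face
```

Note that nothing in this flow answers the twins-and-lookalikes problem the article raises; that lives entirely in the matching step this sketch waves away.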


From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the new, developing, technological “Wild Wests” in many societies throughout the globe could be dangerous to our futures. Why? Because the pace of change has changed. And these new Wild Wests now have emerging, powerful, ever-more invasive (i.e., privacy-stealing) technologies to deal with — the likes of which the world has never seen or encountered before. With this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Other?

Don’t get me wrong, I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging techs.

 

Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.


From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is to be able to speak to our router — via an app on a smartphone, or by having their routers morph into Alexa-type devices — and simply tell it what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” types of accounts…and, if so, would recognize that voice/those voices as having authority and control over the device.

Would you use this type of interface? I know I would!
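
For what it’s worth, the plumbing for this wouldn’t have to be exotic. Here is a rough sketch of mapping utterances like the ones above onto router actions; the RouterAPI class and phrase patterns are purely hypothetical, not anything AT&T actually exposes:

```python
# Hypothetical sketch: map spoken commands (after speech-to-text) onto
# router actions. RouterAPI and the phrase patterns are invented for
# illustration; no real AT&T interface is being described here.

import re

class RouterAPI:
    def pause_all(self, start: str, end: str) -> str:
        return f"Internet access paused from {start} until {end}."
    def allow_only(self, group: str) -> str:
        return f"Internet access limited to {group} accounts."
    def boost(self, hours: int) -> str:
        return f"Bandwidth upgraded for the next {hours} hour(s)."

def handle_utterance(text: str, router: RouterAPI) -> str:
    """Assumes the speaker's voice was already verified as an admin."""
    t = text.lower()
    if m := re.search(r"turn off internet access .*?from (\S+) until (\S+)", t):
        return router.pause_all(m.group(1), m.group(2))
    if m := re.search(r"only allow internet access for (.+?) accounts", t):
        return router.allow_only(m.group(1))
    if m := re.search(r"upgrade my bandwidth for the next (\d+) hours?", t):
        return router.boost(int(m.group(1)))
    return "Sorry, I didn't catch that."

router = RouterAPI()
print(handle_utterance(
    "Turn off Internet access tonight from 9pm until 6am tomorrow morning.", router))
print(handle_utterance("Upgrade my bandwidth for the next 2 hours.", router))
```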

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.

 

The 10+ best real-world examples of augmented reality — from forbes.com by Bernard Marr

Excerpt:

Augmented reality (AR) can add value, solve problems and enhance the user experience in nearly every industry. Businesses are catching on and increasing investments to drive the growth of augmented reality, which makes it a crucial part of the tech economy.

 


From DSC:
Along these lines, I really appreciate the “translate” feature within Twitter. It helps open up whole new avenues of learning for me from people across the globe. A very cool, practical, positive, beneficial feature/tool!!!


Microsoft’s HoloLens 2: A $3,500 mixed reality headset for the factory, not the living room — from theverge.com by Dieter Bohn

Excerpt:

The HoloLens 2 is only being sold to corporations, not to consumers. It’s designed for what Kipman calls “first-line workers,” people in auto shops, factory floors, operating rooms, and out in the field fixing stuff. It’s designed for people who work with their hands and find it difficult to integrate a computer or smartphone into their daily work. Kipman wants to replace the grease-stained Windows 2000 computer sitting in the corner of the workroom. It’s pretty much the same decision Google made for Google Glass.

“If you think about 7 billion people in the world, people like you and I — knowledge workers — are by far the minority,” he replies. To him, the workers who will use this are “maybe people that are fixing our jet propulsion engine. Maybe they are the people that are in some retail space. Maybe they’re the doctors that are operating on you in an operating room.”

He continues, saying it’s for “people that have been, in a sense, neglected or haven’t had access to technology [in their hands-on jobs] because PCs, tablets, phones don’t really lend themselves to those experiences.”

 

Also see:

Microsoft is making a new HoloLens headset, called HoloLens 2. But, it’s only getting sold to companies, not consumers. Meant for professionals who work with their hands and not on computers, the new HoloLens has an improved field of view and doesn’t clip as much as the original. Dieter Bohn visited Microsoft’s campus to get an early look at the new HoloLens 2 headset.


Addendum on 2/28/19:

Microsoft launches HoloLens 2 mixed-reality headset, betting on holograms in the workplace — from cnbc.com by Elizabeth Schulze

Excerpts:

  • Microsoft unveiled HoloLens 2, an upgraded version of its mixed-reality headset, on Sunday at Mobile World Congress in Barcelona.
  • The new headset will cost $3,500, lower than the cost of the earlier version.
  • The HoloLens 2 launch comes amid controversy over Microsoft’s $480 million deal to sell 100,000 of its mixed reality headsets to the U.S. Army.

Microsoft unveiled HoloLens 2, an upgraded version of its mixed-reality headset, on Sunday in Barcelona, in a bet that doubles down on the idea that businesses will increasingly use hologram technology in the workplace.

The HoloLens 2 headset will cost $3,500 — $1,500 less than the commercial price of the first HoloLens device Microsoft released more than four years ago.


Philips, Microsoft Unveil Augmented Reality Concept for Operating Room of the Future — from hitconsultant.net by Fred Pennic

Excerpt:

Health technology company Philips unveiled a unique mixed reality concept developed together with Microsoft Corp. for the operating room of the future. Based on the state-of-the-art technologies of Philips’ Azurion image-guided therapy platform and Microsoft’s HoloLens 2 holographic computing platform, the companies will showcase novel augmented reality applications for image-guided minimally invasive therapies.


Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
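
Crawford’s point is easy to demonstrate with a toy example: train on records that over-represent one district because of where police patrolled, and the system dutifully “predicts” more crime there, closing the feedback loop. All numbers below are invented:

```python
# Toy illustration of the "dirty data" feedback loop. The arrest counts
# are invented: district A is over-represented because it was patrolled
# more heavily, not because it had more crime.

from collections import Counter

historical_arrests = ["district_a"] * 80 + ["district_b"] * 20

def allocate_patrols(records: list[str], patrols: int = 10) -> Counter:
    """Assign patrols proportionally to past arrests (the naive approach)."""
    counts = Counter(records)
    total = sum(counts.values())
    return Counter({d: round(patrols * n / total) for d, n in counts.items()})

print(allocate_patrols(historical_arrests))
# Counter({'district_a': 8, 'district_b': 2}): the historical skew is
# reproduced, and the extra patrols in district A will generate yet more
# arrests there, which the next training run then treats as ground truth.
```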

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
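
As a thought experiment (this is not AiServe’s actual pipeline), the last mile of such a system might look something like the sketch below: take the camera’s per-frame detections and turn the nearby ones into short spoken cues for the earphones. The labels, distances, and cue format are invented:

```python
# Invented sketch of turning wearable-camera detections into audio cues.
# A real system would run an object detector on live video and estimate
# depth; here each detection is already a labeled dict.

def cues_for_frame(detections: list[dict], warn_within_m: float = 3.0) -> list[str]:
    """Build short spoken warnings for obstacles closer than the threshold."""
    cues = []
    for d in detections:
        if d["distance_m"] <= warn_within_m:
            cues.append(f"{d['label']}, {d['distance_m']:.1f} meters, {d['bearing']}")
    return cues

frame = [
    {"label": "light post", "distance_m": 2.1, "bearing": "slightly left"},
    {"label": "parked car", "distance_m": 6.5, "bearing": "to the right"},
    {"label": "curb",       "distance_m": 0.8, "bearing": "straight ahead"},
]

for cue in cues_for_frame(frame):
    print(cue)  # each cue would be handed to text-to-speech on the device
```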

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens when several of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:


Mirrorworld v. AR Cloud or: How I learned to stop worrying and love the spatial future — from medium.com by Ori Inbar

Excerpts (emphasis DSC):

An exact digital replica of the real world is an essential infrastructure, but it’s only part of the meaning of the new spatial computing platform. Unless you are Snow White’s stepmother (or Lord Farquaad), the mirror is merely a reflection of the real world; it doesn’t enhance it. The Augmented content overlaid on top of the world’s digital replica is what’s really interesting: “context, meaning, and function” in Kelly’s words. Without it — it’s like the Internet before the Web — great potential, used by few. Hence my initial instinct to include Augmented Reality in the moniker. So should we keep looking for a better term that captures the “augmented” sauce on top of the mirror? Can’t we simply settle on “Spatial Computing”…?

Ask any millennial and she’ll confirm: “I need info about what’s in front of me right now” — what’s this restaurant, this object, that person? And she is sick of searching it the old-fashioned way.

The New Spatial Economy

Changing how information is organized will profoundly disrupt the Web economy. A handful of companies became giants thanks to the current model. No wonder they are all contenders in the battle for AR Cloud dominance. The Web Economy was defined by “clicks on links” (CPM/CPC). The AR Cloud-based spatial economy will transition to what I like to call “clicks on bricks” — a punning rhyme that captures a new world where everything is driven by digital interaction with the physical world.

 

From DSC:
Hmmm….where everything is driven by digital interaction with the physical world.
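
One way to think about the plumbing behind “clicks on bricks”: content gets keyed to physical coordinates, and a device asks the AR Cloud for whatever is anchored near where it is standing or pointing. A toy sketch, with invented anchors and a rough city-scale distance calculation:

```python
# Toy sketch of a "clicks on bricks" lookup against an AR Cloud index:
# content is anchored to physical coordinates, and a device queries for
# whatever sits near its position. All anchors and data are invented.

import math

AR_CLOUD = [
    {"lat": 42.3736, "lon": -71.1097, "content": "Restaurant menu + reviews"},
    {"lat": 42.3744, "lon": -71.1169, "content": "Building history overlay"},
]

def nearby_content(lat: float, lon: float, radius_m: float = 50) -> list[str]:
    """Return content anchored within radius_m of the device's position."""
    results = []
    for anchor in AR_CLOUD:
        # rough equirectangular distance; fine at city scale
        dx = (anchor["lon"] - lon) * 111_320 * math.cos(math.radians(lat))
        dy = (anchor["lat"] - lat) * 110_540
        if math.hypot(dx, dy) <= radius_m:
            results.append(anchor["content"])
    return results

print(nearby_content(42.3737, -71.1098))  # ['Restaurant menu + reviews']
```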

