Are we there yet? Impactful technologies and the power to influence change — from campustechnology.com by Mary Grush and Ellen Wagner

Excerpt:

Learning analytics, augmented reality, artificial intelligence, and other new and emerging technologies seem poised to change the business of higher education — yet, we often hear comments like “We’re just not there yet…” or “This is a technology that is just too slow to adoption…” or other observations that make it clear that many people — including those with a high level of expertise in education technology — are thinking that the promise is not yet fulfilled. Here, CT talks with veteran education technology leader Ellen Wagner, to ask for her perspectives on the adoption of impactful technologies — in particular the factors in our leadership and development communities that have the power to influence change.

 

 

9 amazing uses for VR and AR in college classrooms — from campustechnology.com by Dian Schaffhauser
Immersive technologies can help students understand theoretical concepts more easily, prepare them for careers through simulated experiences and keep them engaged in learning.

Excerpt:

Immersive reality is bumping us into the deep end, virtually speaking. Colleges and universities large and small are launching new labs and centers dedicated to research on the topics of augmented reality, virtual reality and 360-degree imaging. The first academic conference held completely in virtual reality recently returned for its second year, hosted on Twitch by Lethbridge College in Alberta and Centennial College in Toronto. Majors in VR and AR have begun popping up in higher education across the United States, including programs at the Savannah College of Art and Design (GA), Shenandoah University (VA) and Drexel University Westphal (PA). Educause experts have most recently positioned the timing for broad adoption of these technologies in education at the two-year to three-year horizon. And Gartner has predicted that by the year 2021, 60 percent of higher education institutions in the United States will “intentionally” be using VR to create simulations and put students into immersive environments.

If you haven’t already acquired your own headset or applied for a grant from your institution to test out AR or VR for instruction, it’s time. We’ve done a scan of some of the most interesting projects currently taking place in American classrooms to help you imagine the virtual possibilities.

 


 

 

8 industrial IoT trends of 2019 that cannot be ignored — from datafloq.com

Excerpt:

From manufacturing to the retail sector, the infinite applications of the industrial internet of things are disrupting business processes, thereby improving operational efficiency and business competitiveness. The trend of employing IoT-powered systems for supply chain management, smart monitoring, remote diagnosis, production integration, inventory management, and predictive maintenance is catching up as companies take bold steps to address a myriad of business problems.

No wonder the global technology spend on IoT is expected to reach USD 1.2 trillion by 2022. Growth in this segment will be driven by firms deploying IIoT solutions and by the giant tech organizations that are developing them.

To help you stay ahead of the curve, we have listed a few trends that will dominate the industrial IoT sphere.

 

5. 5G Will Drive Real-Time IIoT Applications
5G deployments are digitizing the industrial domain and changing the way enterprises manage their business operations. Industries, namely transportation, manufacturing, healthcare, energy and utilities, agriculture, retail, media, and financial services will benefit from the low latency and high data transfer speed of 5G mobile networks.

 

10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.

Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

 

Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
While on one hand, incredible work! Fantastic job! On the other hand, if this type of deepfake can be done, how can any video be trusted from here on out? What technology/app will be able to confirm that a video is actually that person, actually saying those words?

Will we get to a point where a video has to state, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based tech be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.

 

 

From DSC:
Re: the Learning from the Living [Class] Room vision of a next gen learning platform

 

Learning from the Living Class Room

 

…wouldn’t it be cool if you could use your voice to ask your smart/connected “TV” type of device:

“Show me the test questions for Torts I from WMU-Cooley Law School.” Cooley could then charge $0.99 for those questions.

Then the system tracks how you did on those questions. Questions you answered correctly come up for review less often than the ones you got wrong, and the more often you get a question right, the less frequently you are asked to answer it.

You sign up for such streams of content — and the system assesses you periodically. This helps a person keep certain topics/information fresh in their memory. This type of learning method would be incredibly helpful for students trying to pass the Bar or other types of large/summative tests — especially when a student has to be able to recall information that they learned over the last 3-5 years.
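The review-scheduling idea described above resembles a classic Leitner-style spaced-repetition system: correct answers push a question into a less frequently reviewed “box,” while a miss sends it back to the front. A minimal sketch (all names and intervals here are illustrative, not from any real platform):

```python
# Leitner-style spaced repetition: questions answered correctly are
# promoted to a higher box and reviewed less often; a miss demotes the
# question back to box 1 for frequent review.

REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 21, 5: 60}  # box -> days until next review


class Question:
    def __init__(self, prompt):
        self.prompt = prompt
        self.box = 1  # every new question starts in the most frequent box

    def record_answer(self, correct):
        if correct:
            self.box = min(self.box + 1, 5)  # promote: ask less often
        else:
            self.box = 1  # demote: ask frequently again
        return REVIEW_INTERVAL_DAYS[self.box]  # days until next review


q = Question("Torts I: what are the elements of negligence?")
print(q.record_answer(True))   # promoted to box 2: review in 3 days
print(q.record_answer(True))   # promoted to box 3: review in 7 days
print(q.record_answer(False))  # missed: back to box 1, review tomorrow
```

A real platform would layer content licensing, assessment analytics, and subscription management on top, but the core “ask what you miss more often” loop is this simple.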

Come to think of it…this method could help all of us in learning new disciplines/topics throughout our lifetimes. Sign up for the streams of content that you want to learn more about…and drop the no-longer-relevant subscriptions as needed.

 

We need to tap into streams of content in our next gen learning platform

 

After nearly a decade of Augmented World Expo (AWE), founder Ori Inbar unpacks the past, present, & future of augmented reality — from next.reality.news by Adario Strange

Excerpts:

I think right now it’s almost a waste of time to talk about a hybrid device because it’s not relevant. It’s two different devices and two different use cases. But like you said, sometime in the future, 15, 20, 50 years, I imagine a point where you could open your eyes to do AR, and close your eyes to do VR.

I think there’s always room for innovation, especially with spatial computing where we’re in the very early stages. We have to develop a new visual approach that I don’t think we have yet. What does it mean to interact in a world where everything is visual and around you, and not on a two-dimensional screen? So there’s a lot to do there.

 

A big part of mainstream adoption is education. Until you get into AR and VR, you don’t really know what you’re missing. You can’t really learn about it from videos. And that education takes time. So the education, plus the understanding of the need, will create a demand.

— Ori Inbar

 

 

The Common Sense Census: Inside the 21st-Century Classroom

21st century classroom - excerpt from infographic

Excerpt:

Technology has become an integral part of classroom learning, and students of all ages have access to digital media and devices at school. The Common Sense Census: Inside the 21st-Century Classroom explores how K–12 educators have adapted to these critical shifts in schools and society. From the benefits of teaching lifelong digital citizenship skills to the challenges of preparing students to critically evaluate online information, educators across the country share their perspectives on what it’s like to teach in today’s fast-changing digital world.

 

 
 

From LinkedIn.com today:

 


Also see:


 

From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration? Maybe…maybe not.

But, now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder….”How did we get to this place?” 

 

If you owned this $137,000+ car, would you turn its keys over to your 16- to 25-year-old?!

 

As another example, just because we can…

just because we can does not mean we should

 

…doesn’t mean we should.

 

just because we can does not mean we should

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“‘The future of human flourishing depends upon facial recognition technology being banned,’ wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. ‘Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.’ Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name-and-face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
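The “89 percent similarity score” reported by commercial services like the one used above typically reflects how close two face embeddings (numeric vectors extracted from face images) are to each other. A toy illustration of that underlying idea, with entirely made-up vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings"; real face-recognition systems use
# vectors with hundreds of dimensions produced by a neural network.
probe_frame = [0.9, 0.1, 0.3, 0.5]   # face detected in park footage
known_headshot = [0.85, 0.15, 0.35, 0.45]  # employer-website photo
stranger = [0.1, 0.9, 0.4, 0.2]      # unrelated person

print(cosine_similarity(probe_frame, known_headshot))  # high: likely a match
print(cosine_similarity(probe_frame, stranger))        # low: likely different people
```

Matching 2,750 detected faces against a gallery of headshots is then just this comparison run at scale, which is why the total cost can be as low as the article describes.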

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.

 

 

Video: Chatbots’ History and Future — from which-50.com by Joseph Brookes

Excerpt:

What’s Next For Chatbots?
One area where chatbots will have an increasing impact in the future is language, according to Kraeutler. He argues the further integration of language services from the likes of Google will bring down processing times in multilingual scenarios.

“Having a chatbot where a consumer can very easily speak in their native tongue and use services like Google to provide real-time translation — and increasingly very accurate real-time translation. That allows the bot to respond to the consumer, again, very accurately, but also in their native tongue.”

That translation feature, Kraeutler says, will also be vital in assisted conversations — where bots assist human agents to provide next-best actions — allowing the two human parties to converse in near real time in their native languages.
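The assisted-conversation flow Kraeutler describes can be sketched as a simple relay: the bot sits between the two parties and translates each message before handing it on. In this sketch the `translate()` stub stands in for a real translation service (such as the Google service mentioned above); the phrasebook “model” and all names are illustrative only:

```python
# Sketch of an assisted-conversation relay: a bot translates messages
# between a consumer and a human agent in near real time. A production
# system would replace translate() with a call to a translation API.

PHRASEBOOK = {
    ("es", "en"): {"¿Dónde está mi pedido?": "Where is my order?"},
    ("en", "es"): {"It ships tomorrow.": "Se envía mañana."},
}

def translate(text, source, target):
    """Stub: a real system would call a cloud translation service here."""
    return PHRASEBOOK.get((source, target), {}).get(text, text)

def relay(message, speaker_lang, listener_lang):
    """Pass one message between parties, translating only when needed."""
    if speaker_lang == listener_lang:
        return message
    return translate(message, speaker_lang, listener_lang)

# Consumer writes in Spanish; the agent reads English, and vice versa.
print(relay("¿Dónde está mi pedido?", "es", "en"))  # agent sees English
print(relay("It ships tomorrow.", "en", "es"))      # consumer sees Spanish
```

The design point is that neither human party changes languages; only the relay in the middle does any translation work, which is what keeps the conversation feeling native-tongue on both ends.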

 

From DSC:
This is much more than a Voice Response Unit (VRU), phase II. The educational realm should watch what happens with chatbots, as they could do some of the heavy lifting in the learning world.

 

 
© 2025 | Daniel Christian