Facial recognition smart glasses could make public surveillance discreet and ubiquitous — from theverge.com by James Vincent; with thanks to Mr. Paul Czarapata, Ed.D. out on Twitter for this resource
A new product from UAE firm NNTC shows where this tech is headed next. <– From DSC: though hopefully not!!!

Excerpt:

From train stations and concert halls to sport stadiums and airports, facial recognition is slowly becoming the norm in public spaces. But new hardware formats like these facial recognition-enabled smart glasses could make the technology truly ubiquitous, able to be deployed by law enforcement and private security any time and any place.

The glasses themselves are made by American company Vuzix, while Dubai-based firm NNTC is providing the facial recognition algorithms and packaging the final product.

From DSC…I commented out on Twitter:

Thanks, Paul, for this posting – though I find it very troubling. Emerging technologies race out ahead of society. I would be interested in knowing the ages of the people developing these technologies, and whether they care about asking the tough questions…like “Just because we can, should we be doing this?”

Addendum on 6/12/19:

Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com by
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

From DSC:
On one hand: incredible work! Fantastic job! On the other hand, if this type of deepfake can be done, how can any video be trusted from here on out? What technology/app will be able to confirm that a video actually shows that person, actually saying those words?

Will we get to a point where a video has to declare, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based tech be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
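To make the “electronic signature” idea a bit more concrete: below is a minimal sketch (in Python, using the widely available cryptography package) of how the person in a video could sign it and how any viewer could verify it afterwards. It illustrates only the general concept of public-key signatures; the video_bytes stand-in is hypothetical, not part of any existing video-authentication product.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the raw bytes of a video file (in practice: open(path, "rb").read()).
video_bytes = b"...raw video data..."

def digest(data: bytes) -> bytes:
    """Sign a short SHA-256 digest rather than the full multi-gigabyte file."""
    return hashlib.sha256(data).digest()

# The person appearing in the video generates a key pair once and
# publishes the public key somewhere viewers trust.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# "This is so-and-so, and I approved this video" becomes: sign the digest.
signature = private_key.sign(digest(video_bytes))

# Any viewer with the public key can confirm the video is unaltered.
try:
    public_key.verify(signature, digest(video_bytes))
    print("Signature valid: this is the approved video.")
except InvalidSignature:
    print("Signature invalid: the video was altered or never approved.")
```

A blockchain, as wondered about above, would mainly add a tamper-evident, public place to record such signatures and public keys; the signing math itself would be the same.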

The Common Sense Census: Inside the 21st-Century Classroom

[Infographic excerpt: inside the 21st-century classroom]

Excerpt:

Technology has become an integral part of classroom learning, and students of all ages have access to digital media and devices at school. The Common Sense Census: Inside the 21st-Century Classroom explores how K–12 educators have adapted to these critical shifts in schools and society. From the benefits of teaching lifelong digital citizenship skills to the challenges of preparing students to critically evaluate online information, educators across the country share their perspectives on what it’s like to teach in today’s fast-changing digital world.

Three ways to use video feedback to enhance student engagement — from scholarlyteacher.com by Christopher Penna

Excerpt:

An innovative approach for providing feedback on student work in a variety of disciplines is the use of screen capture videos (Mathisen). These videos allow for the recording of what is on the instructor’s screen (for example, a student paper) accompanied by audio narration describing strengths and weaknesses of the work being discussed as well as any edits that the instructor is making on the page. Once created, the video is available to the student for repeated viewing. Research indicates these videos provide more concrete and effective guidance for students and a higher level of student engagement than traditional written comments and rubrics (Jones, Georghiades, & Gunson, 2012; Thompson & Lee, 2012).

Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.

Cambridge library installation gives readers control of their sensory space — from cambridge.wickedlocal.com by Hannah Schoenbaum

Excerpts:

A luminous igloo-shaped structure in the front room of the Cambridge Public Library beckoned curious library visitors during the snowy first weekend of March, inviting them to explore a space engineered for everyone, yet uniquely their own.

Called “Alterspace” and developed by Harvard’s metaLAB and Library Innovation Lab, this experiment in adaptive architecture granted the individual control over the sensory elements in his or her space. A user enters the LED-illuminated dome to find headphones, chairs and an iPad on a library cart, which displays six modes: Relax, Read, Meditate, Focus, Create and W3!Rd.

From the cool blues and greens of Relax mode to a rainbow overload of excitement in the W3!Rd mode, Alterspace is engineered to transform its lights, sounds and colors into the ideal environment for a particular action.

From DSC:
This brings me back to the question/reflection…in the future, will students using VR headsets be able to study by a brook? An ocean? In a very quiet library (i.e., the headset would come with solid noise-cancellation capabilities built into it)? This type of room/capability would really be helpful for our daughter…who is easily distracted and doesn’t like noise.

Map of fundamental technologies in legal services — from remakinglawfirms.com by Michelle Mahoney

Excerpt:
The Map is designed to help make sense of the trends we are observing:

  • an increasing number of legal technology offerings;
  • the increasing effectiveness of legal technologies;
  • emerging new categories of legal technology;
  • the layering and combining of fundamental technology capabilities; and
  • the maturation of machine learning, natural language processing and deep learning artificial intelligence.

Given the exponential nature of the technologies, the Fundamental Technologies Map can only depict the landscape at the current point in time.

Information processing in legal services (PDF file)

Also see:
Delta Model Update: The Most Important Area of Lawyer Competency — Personal Effectiveness Skills — from legalexecutiveinstitute.com by Natalie Runyon

Excerpt:

Many legal experts say the legal industry is at an inflection point because the pace of change is being driven by many factors — technology, client demand, disaggregation of matter workflow, the rise of Millennials approaching mid-career status, and the faster pace of business in general.

The fact that technology spend by law firms continues to be a primary area of investment underscores the fact that the pace of change is continuing to accelerate with the ongoing rise of big data and workflow technology that are greatly influencing how lawyering gets done. Moreover, combined with big unstructured data, artificial intelligence (AI) is creating opportunities to analyze siloed data sets to gain insights in numerous new ways.

Collaboration technology is fueling enterprise transformation – increasing agility, driving efficiency and improving productivity. Join Amy Chang at Enterprise Connect where she will share Cisco’s vision for the future of collaboration, the foundations we have in place and the amazing work we’re driving to win our customers’ hearts and minds. Cognitive collaboration – technology that weaves context and intelligence across applications, devices and workflows, connecting people with customers & colleagues, to deliver unprecedented experiences and transform how we work – is at the heart of our efforts. Join this session to see our technology in action and hear how our customers are using our portfolio of products today to transform the way they work.

From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is to be able to speak into an app — either located on a smartphone, or built into routers that morph into Alexa-type devices — and simply tell my router what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” types of accounts…and, if so, it would recognize that voice/those voices as having authority and control over the device.

Would you use this type of interface? I know I would!

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.
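For what it’s worth, here is a rough sketch of how such an app might map spoken requests (after speech-to-text) onto router actions. Everything in it is hypothetical: the intent patterns and the Router methods are invented for illustration, and a real AT&T app would of course call the vendor’s actual management API.

```python
import re

class Router:
    """Hypothetical stand-in for a real router-management API."""
    def block_all(self, start: str, end: str) -> None:
        print(f"Internet access off from {start} until {end}.")

    def allow_only(self, group: str) -> None:
        print(f"Internet access limited to {group} accounts.")

    def boost_bandwidth(self, hours: int) -> None:
        print(f"Bandwidth upgraded for the next {hours} hour(s).")

# Transcribed utterances are matched against patterns; each pattern
# maps to one router action.
INTENTS = [
    (re.compile(r"turn off internet access .*?from (\S+) until (\S+)", re.I),
     lambda r, m: r.block_all(m.group(1), m.group(2))),
    (re.compile(r"only allow internet access for (\w+)", re.I),
     lambda r, m: r.allow_only(m.group(1))),
    (re.compile(r"upgrade my bandwidth for the next (\d+) hours?", re.I),
     lambda r, m: r.boost_bandwidth(int(m.group(1)))),
]

def handle(utterance: str, router: Router) -> None:
    """Run the first matching intent, or admit the request wasn't understood."""
    for pattern, action in INTENTS:
        match = pattern.search(utterance)
        if match:
            action(router, match)
            return
    print(f"Sorry, I didn't understand: {utterance!r}")

router = Router()
handle("Turn off Internet access tonight from 9pm until 6am tomorrow morning.", router)
handle("Only allow Internet access for parents' accounts.", router)
handle("Upgrade my bandwidth for the next 2 hours.", router)
```

The admin-voice idea above would sit in front of this: only utterances attributed to an admin’s recognized voice would ever reach handle().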

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.

How AI is enhancing wearables — from techopedia.com by Claudio Butticè
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens when some of these emerging technologies are combined in the future…again, for good or for ill.

The question is:
How can we weigh in?

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

Also relevant/see:

Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems (see the sketch after these examples):

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
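Here is a rough sketch of what the glue between a voice assistant and a classroom control system might look like. The intent names and the ClassroomControl methods are invented for illustration; a real deployment would sit behind a custom skill on the voice platform and the AV control system’s actual API (Crestron, Extron, etc.).

```python
class ClassroomControl:
    """Hypothetical facade over a smart-classroom AV control system."""
    def set_projector(self, on: bool) -> None:
        print(f"Projector {'on' if on else 'off'}.")

    def dim_lights(self, percent: int) -> None:
        print(f"Lights dimmed by {percent}%.")

    def launch_course(self, course: str) -> None:
        print(f"Opening Canvas and launching: {course}.")

def on_intent(name: str, slots: dict, room: ClassroomControl) -> None:
    """Dispatch one voice intent, as a voice platform would report it."""
    if name == "TurnOnProjector":
        room.set_projector(True)
    elif name == "DimLights":
        room.dim_lights(int(slots["percent"]))
    elif name == "OpenCourse":
        room.launch_course(slots["course"])
    else:
        print(f"Unknown intent: {name}")

room = ClassroomControl()
on_intent("TurnOnProjector", {}, room)
on_intent("DimLights", {"percent": "50"}, room)
on_intent("OpenCourse", {"course": "Constitutional Law I"}, room)
```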

Best camera for vlogging 2019: 10 perfect choices tested — from techradar.com by Matthew Richards
Here are our top 10 vlogging camera picks

From DSC:
Also, with a different kind of camera in mind…and with a shout-out to Mr. Charles Mickens (CIO / Associate Dean of Innovation and Technology at the WMU-Cooley Law School), see the amazing Light L16 Camera:

[Video: “A Little Bit of Light” from Light, on Vimeo]

Smart speakers hit critical mass in 2018 — from techcrunch.com by Sarah Perez

Excerpt (emphasis DSC):

We already know Alexa had a good Christmas — the app shot to the top of the App Store over the holidays, and the Alexa service even briefly crashed from all the new users. But Alexa, along with other smart speaker devices like Google Home, didn’t just have a good holiday — they had a great year, too. The smart speaker market reached critical mass in 2018, with around 41 percent of U.S. consumers now owning a voice-activated speaker, up from 21.5 percent in 2017.

In the U.S., there are now more than 100 million Alexa-enabled devices installed — a key milestone for Alexa to become a “critical mass platform,” the report noted.
