We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
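For readers curious about the mechanics: an experiment like this can be assembled from off-the-shelf pieces. The sketch below is hypothetical (the function names and the 80 percent default threshold are mine), but it uses Amazon Rekognition’s real CompareFaces API via the boto3 SDK, which returns a similarity score like the 89 percent match described above.

```python
def filter_matches(face_matches, threshold=80.0):
    """Keep only Rekognition matches at or above the similarity threshold (percent)."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

def compare_headshot_to_frame(headshot_bytes, frame_bytes, threshold=80.0):
    """Compare one public head shot against one frame of camera footage."""
    import boto3  # AWS SDK; valid credentials and Rekognition access are assumed
    client = boto3.client("rekognition")
    resp = client.compare_faces(
        SourceImage={"Bytes": headshot_bytes},
        TargetImage={"Bytes": frame_bytes},
        SimilarityThreshold=threshold,  # Rekognition drops weaker matches itself
    )
    return filter_matches(resp.get("FaceMatches", []), threshold)
```

Each image comparison costs a fraction of a cent, which is how a nine-hour, 2,750-face run like the one described can total only about $60.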

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check

From DSC:
I don’t like this piece re: Amazon’s use of facial recognition at all. An organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks in the back way into society’s house. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are a chilling thing indeed. China is using facial recognition to identify and track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling lack of privacy that’s going on in several societies throughout the globe has got to be stopped. 

Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

The growing marketplace for AI ethics — from forbes.com by Forbes Insights with Intel AI

Excerpt:

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out, in the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow*—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.

 

*Insert DSC:
Read this as a very powerful, chaotic, massive WILD, WILD WEST. Can law schools, legislatures, governments, businesses, and more keep up with this new pace of technological change?

 

Also see:

 

The moral issue here — from law21.ca by Jordan Furlong

Excerpt:

“I’m not worried about the moral issue here,” said Gordon Caplan, the co-chair of AmLaw 100 law firm Willkie Farr, according to transcripts of wiretaps in the college admissions scandal that you’re already starting to forget about. Mr. Caplan was concerned that if his daughter “was caught … she’d be finished,” and that her faked ACT score should not be set “too high” and therefore not be credible. Beyond that, all we know from the transcripts about Mr. Caplan’s ethical qualms is that “to be honest, it feels a little weird. But.”

That’s the line that stays with me, right through the “But” at the end. I want to tell you why, and I especially want to tell you if you’re a law student or a new lawyer, because it is extraordinarily important that you understand what’s going on here.

So why does any of this matter to lawyers, especially to young lawyers? Because of that one line I quoted.

“I mean this is, to be honest, it feels a little weird. But.”

Do you recognize that sound? That’s the sound of a person’s conscience, a lawyer’s conscience, struggling to make its voice heard.

This one apparently can’t muster much more than a twinge of doubt, a feeling of discomfort, a nagging sense of this isn’t right and I shouldn’t be doing it. It lasts for only a second, though, because the next word fatally undermines it. But. Yeah, I know, at some fundamental level, this is wrong. But.

It doesn’t matter what rationalization or justification follows the But, because at this point, it’s all over. The battle has been abandoned. If the next word out of his mouth had been So or Therefore, Mr. Caplan’s life would have gone in a very different direction.

MIT has just announced a $1 billion plan to create a new college for AI — from technologyreview.com

Excerpt:

One of the birthplaces of artificial intelligence, MIT, has announced a bold plan to reshape its academic program around the technology. With $1 billion in funding, MIT will create a new college that combines AI, machine learning, and data science with other academic disciplines. It is the largest financial investment in AI by any US academic institution to date.

 

From this page:

The College will:

  • reorient MIT to bring the power of computing and AI to all fields of study at MIT, allowing the future of computing and AI to be shaped by insights from all other disciplines;
  • create 50 new faculty positions that will be located both within the College and jointly with other departments across MIT — nearly doubling MIT’s academic capability in computing and AI;
  • give MIT’s five schools a shared structure for collaborative education, research, and innovation in computing and AI;
  • educate students in every discipline to responsibly use and develop AI and computing technologies to help make a better world; and
  • transform education and research in public policy and ethical considerations relevant to computing and AI.

A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
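The underlying flow — enroll facial data, link a payment method, then deduct on a gate-side match — is simple enough to sketch. Everything below is hypothetical (the names, the flat fare, the in-memory account store); it only illustrates the registration-then-deduction loop the article describes, not Shenzhen Metro’s actual system.

```python
# Hypothetical sketch of the enroll-then-deduct loop: riders register facial
# data linked to a funded account, and a gate-side face match triggers the
# fare deduction from that account.

FARE = 2.0  # flat fare, illustrative only

accounts = {}  # face_id -> balance, populated at registration time

def register(face_id, initial_balance):
    """Enroll a rider's facial data and link a funded payment account."""
    accounts[face_id] = float(initial_balance)

def enter_station(face_id):
    """Deduct the fare if the matched face belongs to a registered, funded rider."""
    balance = accounts.get(face_id)
    if balance is None or balance < FARE:
        return False  # unregistered face or insufficient funds; gate stays closed
    accounts[face_id] = balance - FARE
    return True
```

Note that everything hinges on the face match upstream of this code — which is exactly why the article’s aside about twins and lookalikes matters.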

From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the new, developing, technological “Wild Wests” in many societies throughout the globe could be dangerous to our futures. Why? Because the pace of change has changed. And these new Wild Wests now have emerging, powerful, ever-more invasive (i.e., privacy-stealing) technologies to deal with — the likes of which the world has never seen or encountered before. With this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Other?

Don’t get me wrong: I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging technologies.

 

Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.

Isaiah 55:8-11 New International Version (NIV) — from biblegateway.com

“For my thoughts are not your thoughts,
    neither are your ways my ways,”
    declares the Lord.
“As the heavens are higher than the earth,
    so are my ways higher than your ways
    and my thoughts than your thoughts.

10 As the rain and the snow
    come down from heaven,
    and do not return to it
    without watering the earth
    and making it bud and flourish,
   so that it yields seed for the sower and bread for the eater,
11 so is my word that goes out from my mouth:
   It will not return to me empty,
   but will accomplish what I desire
   and achieve the purpose for which I sent it.

How MIT’s Mini Cheetah Can Help Accelerate Robotics Research — from spectrum.ieee.org by Evan Ackerman
Sangbae Kim talks to us about the new Mini Cheetah quadruped and his future plans for the robot

From DSC:
Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect upon what’s likely happening along these lines in militaries throughout the globe. I don’t mean to be a fearmonger, but rather a realist.

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

Isaiah 58:6-11 New International Version (NIV) — from biblegateway.com

“Is not this the kind of fasting I have chosen:
to loose the chains of injustice
    and untie the cords of the yoke,
to set the oppressed free
    and break every yoke?
Is it not to share your food with the hungry
    and to provide the poor wanderer with shelter—
when you see the naked, to clothe them,
    and not to turn away from your own flesh and blood?
Then your light will break forth like the dawn,
    and your healing will quickly appear;
then your righteousness will go before you,
    and the glory of the Lord will be your rear guard.
Then you will call, and the Lord will answer;
    you will cry for help, and he will say: Here am I.

“If you do away with the yoke of oppression,
    with the pointing finger and malicious talk,
10 and if you spend yourselves in behalf of the hungry
    and satisfy the needs of the oppressed,
then your light will rise in the darkness,
    and your night will become like the noonday.
11 The Lord will guide you always;
    he will satisfy your needs in a sun-scorched land
    and will strengthen your frame.
You will be like a well-watered garden,
    like a spring whose waters never fail.

Joint CS and Philosophy Initiative, Embedded EthiCS, Triples in Size to 12 Courses — from thecrimson.com by Ruth Hailu and Amy Jia

Excerpt:

The idea behind the Embedded EthiCS initiative arose three years ago after students in Grosz’s course, CS 108: “Intelligent Systems: Design and Ethical Challenges,” pushed for an increased emphasis on ethical reasoning within discussions surrounding technology, according to Grosz and Simmons. One student suggested Grosz reach out to Simmons, who also recognized the importance of an interdisciplinary approach to computer science.

“Not only are today’s students going to be designing technology in the future, but some of them are going to go into government and be working on regulation,” Simmons said. “They need to understand how [ethical issues] crop up, and they need to be able to identify them.”

India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

RoboCop
India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering Departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not done) in terms of the moral responsibilities of their code? Their robots?

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

ARE MACHINES RACIST? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are engrained within the Silicon Valley tech and innovation scene.

© 2024 | Daniel Christian