We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets.” — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
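The matching step the Times describes — comparing faces captured in footage against a set of known head shots and keeping only high-similarity hits — can be sketched against the shape of Amazon Rekognition's CompareFaces response. To keep this runnable without AWS credentials, the response below is a mocked payload in the documented format rather than a live API call, and the 89 percent threshold simply mirrors the score reported in the article.

```python
# Sketch: filter Rekognition-style CompareFaces results by similarity score.
# In a real pipeline, "response" would come from boto3's
# rekognition.compare_faces(SourceImage=..., TargetImage=...); here it is a
# mocked dict in the same shape, so the filtering logic can run standalone.

def likely_matches(response, threshold=89.0):
    """Return (similarity, bounding_box) pairs at or above the threshold."""
    return [
        (m["Similarity"], m["Face"]["BoundingBox"])
        for m in response.get("FaceMatches", [])
        if m["Similarity"] >= threshold
    ]

# Mocked response for two candidate matches; only one clears the bar.
response = {
    "FaceMatches": [
        {"Similarity": 89.2, "Face": {"BoundingBox": {"Left": 0.4, "Top": 0.1}}},
        {"Similarity": 52.7, "Face": {"BoundingBox": {"Left": 0.7, "Top": 0.3}}},
    ]
}

print(likely_matches(response))  # only the 89.2% match survives the cut
```

The unsettling point of the article is precisely that nothing more sophisticated than this filtering loop, plus off-the-shelf cameras and a commercial API, is needed.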

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via techs installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

AI’s white guy problem isn’t going away — from technologyreview.com by Karen Hao
A new report says current initiatives to fix the field’s diversity crisis are too narrow and shallow to be effective.

Excerpt:

The numbers tell the tale of the AI industry’s dire lack of diversity. Women account for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. Racial diversity is even worse: black workers represent only 2.5% of Google’s entire workforce and 4% of Facebook’s and Microsoft’s. No data is available for transgender people and other gender minorities—but it’s unlikely the trend is being bucked there either.

This is deeply troubling when the influence of the industry has dramatically grown to affect everything from hiring and housing to criminal justice and the military. Along the way, the technology has automated the biases of its creators to alarming effect: devaluing women’s résumés, perpetuating employment and housing discrimination, and enshrining racist policing practices and prison convictions.

 

Along these lines, also see:

‘Disastrous’ lack of diversity in AI industry perpetuates bias, study finds — from theguardian.com by Kari Paul
Report says an overwhelmingly white and male field has reached ‘a moment of reckoning’ over discriminatory systems

Excerpt:

Lack of diversity in the artificial intelligence field has reached “a moment of reckoning”, according to new findings published by a New York University research center. A “diversity disaster” has contributed to flawed systems that perpetuate gender and racial biases, found the survey, published by the AI Now Institute, of more than 150 studies and reports.

The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

 

 

 

Addendum on 4/20/19:

Amazon is now making its delivery drivers take selfies — from theverge.com by Shannon Liao
It will then use facial recognition to double-check

From DSC:
I don’t like this piece re: Amazon’s use of facial recognition at all. Some organization like Amazon asserts that it needs facial recognition to deliver services to its customers, and then, the next thing we know, facial recognition gets its foot in the door…sneaks in the back way into society’s house. By then, it’s much harder to get rid of. We end up with what’s currently happening in China. I don’t want to pay for anything with my face. Ever. As Mark Zuckerberg has demonstrated time and again, I don’t trust humankind to handle this kind of power. Plus, the surveillance states being developed by several governments are chilling indeed. China is using the technology to identify and track Muslims.

China using AI to track Muslims

Can you think of some “groups” that people might be in that could be banned from receiving goods and services? I can. 

The appalling lack of privacy that’s going on in several societies throughout the globe has got to be stopped. 

 

 

Through the legal looking glass — from lodlaw.com by Lawyers On Demand (LOD) & Jordan Furlong

Excerpts (emphasis DSC):

But here’s the thing: Even though most lawyers’ career paths have twisted and turned and looped back in unexpected directions, the landscape over which they’ve zig-zagged these past few decades has been pretty smooth, sedate and predictable. The course of these lawyers’ careers might not have been foreseeable, but for the most part, the course of the legal profession was, and that made the twists and turns easier to navigate.

Today’s lawyers, or anyone who enters the legal profession in the coming years, probably won’t be as fortunate. The fundamental landscape of the law is being remade as we speak, and the next two decades in particular will feature upheavals and disruptions at a pace and on a scale we’ve not seen before — following and matching similar tribulations in the wider world. This report is meant to advise you of the likeliest (but by no means certain) nature and direction of the fault lines along which the legal career landscape will fracture and remake itself in the coming years. Our hope is to help you anticipate these developments and adjust your own career plans in response, on the fly if necessary.

So, before you proceed any further into this report — before you draw closer to answering the question, “Will I still want to be a lawyer tomorrow?” — you need to think about why you’re a lawyer today.

Starting within the next five years or so, we should begin to see more lawyers drawn towards fulfilling the profession’s vocational or societal role, rather than choosing to pursue a private-sector commercial path. This will happen because:

  • generational change will bring new attitudes to the profession,
  • technological advances will reduce private legal work opportunities, and
  • a series of public crises will drive more lawyers by necessity towards societal roles.


It seems likely enough, in fact, that we’re leaving the era in which law was predominantly viewed as a safe, prestigious, private career, and entering one in which law is just as often considered a challenging, self-sacrificial, public career. More lawyers will find themselves grouped with teachers, police officers, and social workers — positions that pay decently but not spectacularly, that play a difficult but critical role in the civic order. We could call this the rising career path of the civic lawyer.

But if your primary or even sole motivation for entering the law is to become a wealthy member of the financial and political elite, then we suggest you should start looking for alternatives now. These types of careers will be fewer and farther between, and we suspect they will be increasingly at odds with the emerging spirit and character of the profession.

A prediction (which they admit can be a fool’s errand):
Amazon buys LegalZoom in the US as part of its entry into the global services sector, offering discounted legal services to Prime members. Regulators’ challenges will fail, signalling the beginning of the end of lawyer control of the legal market.

 

 

Five Principles for Thinking Like a Futurist — from er.educause.edu by Marina Gorbis

Excerpt:

In 2018 we celebrated the fifty-year anniversary of the founding of the Institute for the Future (IFTF). No other futures organization has survived for this long; we’ve actually survived our own forecasts! In these five decades we learned a lot, and we still believe—even more strongly than before—that systematic thinking about the future is absolutely essential for helping people make better choices today, whether you are an individual or a member of an educational institution or government organization. We view short-termism as the greatest threat not only to organizations but to society as a whole.

In my twenty years at the Institute, I’ve developed five core principles for futures thinking:

  • Forget about predictions.
  • Focus on signals.*
  • Look back to see forward.
  • Uncover patterns.*
  • Create a community.

 

* From DSC:
I have a follow-up thought regarding those bullet points about signals and patterns. With today’s exponential pace of technological change, I have asserted for several years now that our students — and all of us, really — need to be skilled in pulse-checking the relevant landscapes around us. That’s why I’m a big fan of regularly tapping into — and contributing towards — streams of content: subscribing to RSS feeds, following organizations and/or individuals on Twitter, connecting with people on LinkedIn, etc. Doing so will help us identify the trends, patterns, and signals that Marina talks about in her article.
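As a small illustration of this kind of lightweight pulse-checking, the sketch below pulls headlines out of an RSS 2.0 feed using only Python's standard library. The feed content here is a hypothetical inline sample (the channel and item titles are made up); in practice the XML would be fetched from a real feed URL.

```python
import xml.etree.ElementTree as ET

# Hypothetical inline RSS 2.0 sample; in practice this XML would come from
# urllib.request.urlopen(feed_url).read() for a feed you subscribe to.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Emerging Tech Signals</title>
    <item><title>Facial recognition trial expands</title></item>
    <item><title>New AI ethics curriculum announced</title></item>
  </channel>
</rss>"""

def latest_headlines(feed_xml):
    """Extract item titles from an RSS 2.0 feed — one quick 'pulse check'."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(latest_headlines(SAMPLE_FEED))
```

Run on a handful of feeds each morning, even something this simple surfaces the recurring signals and patterns that Marina describes.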

It reminds me of the following graphic from January 2017:

 

A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
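The enrollment-then-deduction flow the article describes — register your facial data, link a payment method, then have the fare deducted on a match at the gate — can be modeled as a simple account lookup. Everything below is a hypothetical toy: the class name, the fare amount, and the idea of a face "template" as an opaque key are illustrative assumptions, not how the Shenzhen system actually works.

```python
class FareGate:
    """Toy model of face-linked fare deduction: enroll first, deduct on match."""

    def __init__(self, fare=2.0):
        self.fare = fare
        self.accounts = {}  # face template (opaque key) -> linked balance

    def enroll(self, face_template, balance):
        # Riders must register facial data and link a payment method beforehand.
        self.accounts[face_template] = balance

    def enter(self, face_template):
        # Deduct the fare if the scanned face matches an enrolled rider.
        balance = self.accounts.get(face_template)
        if balance is None or balance < self.fare:
            return False  # unregistered face, or insufficient funds
        self.accounts[face_template] = balance - self.fare
        return True

gate = FareGate()
gate.enroll("rider-001", balance=10.0)
print(gate.enter("rider-001"))  # True: fare deducted from linked account
print(gate.enter("stranger"))   # False: no enrolled account for this face
```

Note what even this toy makes plain: the gate only works because the operator holds a persistent database mapping faces to identities and payment accounts — which is exactly the privacy concern raised below.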

 

 

From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the new, developing, technological “Wild Wests” in many societies throughout the globe could be dangerous to our futures. Why? Because the pace of change has changed. And these new Wild Wests now have emerging, powerful, ever-more invasive (i.e., privacy-stealing) technologies to deal with — the likes of which the world has never seen or encountered before. With this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Other?

Don’t get me wrong, I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging techs.

 

Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.

 

 

Huge study finds professors’ attitudes affect students’ grades — and it’s doubly true for minority students. — from arstechnica.com by Scott Johnson

Excerpt:

Instead, the researchers think the data suggests that—in any number of small ways—instructors who think their students’ intelligence is fixed don’t keep their students as motivated, and perhaps don’t focus as much on teaching techniques that can encourage growth. And while this affects all students, it seems to have an extra impact on underrepresented minority students.

The good news, the researchers say, is that instructors can be persuaded to adopt more of a growth mindset in their teaching through a little education of their own. That small attitude adjustment could make them a more effective teacher, to the significant benefit of a large number of students.

 

How MIT’s Mini Cheetah Can Help Accelerate Robotics Research — from spectrum.ieee.org by Evan Ackerman
Sangbae Kim talks to us about the new Mini Cheetah quadruped and his future plans for the robot

 

 

From DSC:
Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect upon what’s likely happening along these lines in militaries throughout the globe. I don’t mean to be a fearmonger, but rather a realist.

 

 

Isaiah 58:6-11 New International Version (NIV) — from biblegateway.com

“Is not this the kind of fasting I have chosen:
to loose the chains of injustice
    and untie the cords of the yoke,
to set the oppressed free
    and break every yoke?
Is it not to share your food with the hungry
    and to provide the poor wanderer with shelter—
when you see the naked, to clothe them,
    and not to turn away from your own flesh and blood?
Then your light will break forth like the dawn,
    and your healing will quickly appear;
then your righteousness will go before you,
    and the glory of the Lord will be your rear guard.
Then you will call, and the Lord will answer;
    you will cry for help, and he will say: Here am I.

“If you do away with the yoke of oppression,
    with the pointing finger and malicious talk,
10 and if you spend yourselves in behalf of the hungry
    and satisfy the needs of the oppressed,
then your light will rise in the darkness,
    and your night will become like the noonday.
11 The Lord will guide you always;
    he will satisfy your needs in a sun-scorched land
    and will strengthen your frame.
You will be like a well-watered garden,
    like a spring whose waters never fail.

 

 

 

Joint CS and Philosophy Initiative, Embedded EthiCS, Triples in Size to 12 Courses — from thecrimson.com by Ruth Hailu and Amy Jia

Excerpt:

The idea behind the Embedded EthiCS initiative arose three years ago after students in Grosz’s course, CS 108: “Intelligent Systems: Design and Ethical Challenges,” pushed for an increased emphasis on ethical reasoning within discussions surrounding technology, according to Grosz and Simmons. One student suggested Grosz reach out to Simmons, who also recognized the importance of an interdisciplinary approach to computer science.

“Not only are today’s students going to be designing technology in the future, but some of them are going to go into government and be working on regulation,” Simmons said. “They need to understand how [ethical issues] crop up, and they need to be able to identify them.”

 

 


© 2019 | Daniel Christian