A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
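
The flow the article describes reduces to a simple protocol: match a scanned face against registered riders, then deduct the fare from the linked account. Here is a toy sketch of that flow; the names, fare amount, and exact-match lookup are all invented, and a real system would compare facial embeddings against a similarity threshold, which is precisely where twins and lookalikes get tricky.

```python
from dataclasses import dataclass

@dataclass
class Rider:
    name: str
    face_id: str    # stand-in for a stored facial embedding
    balance: float  # the linked payment account

FARE = 2.0
riders = {"face-001": Rider("Wei", "face-001", 20.0)}  # registered beforehand

def match_face(scan: str):
    # Toy exact-match lookup. A real system scores embedding similarity
    # against a threshold, which is why twins/lookalikes are a hard case.
    return riders.get(scan)

def enter_station(scan: str) -> bool:
    rider = match_face(scan)
    if rider is None or rider.balance < FARE:
        return False          # unregistered face, or insufficient funds
    rider.balance -= FARE     # fare deducted automatically
    return True

print(enter_station("face-001"))  # True; Wei's balance drops to 18.0
```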

From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the developing technological “Wild Wests” in many societies around the globe could be dangerous to our futures. Why? Because the pace of change has changed. These new Wild Wests involve powerful, ever-more invasive (i.e., privacy-stealing) technologies, the likes of which the world has never encountered before. At this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Other?

Don’t get me wrong, I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging techs.

 

Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.

From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is to be able to speak to my router — either through an app on a smartphone, or by having their routers morph into Alexa-type devices — and tell it what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” types of accounts…and, if so, it would recognize that voice/those voices as having authority and control over the device.

Would you use this type of interface? I know I would!

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.
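
For what it’s worth, here is a minimal sketch of how such an app might map spoken commands onto router actions. Everything in it is hypothetical: AT&T exposes no such `RouterClient` API that I know of, and a real app would sit behind a proper speech-to-text service rather than a handful of regular expressions.

```python
import re
from datetime import time

# A hypothetical router control surface. These methods are stand-ins for
# whatever a vendor might actually provide.
class RouterClient:
    def schedule_outage(self, start: time, end: time) -> None:
        print(f"Internet off from {start} until {end}")

    def restrict_to_accounts(self, accounts: list[str]) -> None:
        print(f"Access limited to: {', '.join(accounts)}")

    def boost_bandwidth(self, hours: int) -> None:
        print(f"Bandwidth upgraded for {hours} hour(s)")

def to_24h(hour: str, ampm: str) -> int:
    return int(hour) % 12 + (12 if ampm == "pm" else 0)

def handle_utterance(router: RouterClient, text: str) -> bool:
    """Map a transcribed utterance to the matching router action."""
    text = text.lower()
    if m := re.search(r"turn off internet.*?from (\d+)(am|pm) until (\d+)(am|pm)", text):
        router.schedule_outage(time(to_24h(m[1], m[2])), time(to_24h(m[3], m[4])))
        return True
    if "only allow internet access for" in text:
        router.restrict_to_accounts(["parents"])
        return True
    if m := re.search(r"upgrade my bandwidth for the next (\d+) hours?", text):
        router.boost_bandwidth(int(m[1]))
        return True
    return False  # unrecognized command

router = RouterClient()
handle_utterance(router, "Turn off Internet access tonight from 9pm until 6am tomorrow morning.")
handle_utterance(router, "Upgrade my bandwidth for the next 2 hours.")
```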

 

Is this South Africa’s best legal online platform using blockchain? — from techfinancials.co.za
The winners of the event, Kagiso, will progress to the second round of the contest, in which a panel of international judges will decide who attends a grand final in New York.

Excerpt:

The Hague Institute for Innovation of Law (HiiL) and leading global law firm Baker McKenzie have announced the winners of the South African leg of Global Legal Hackathon 2019 (GLH2019).

First prize went to Kagiso, an online mediation platform that provides a cost-effective and fast alternative to lengthy court processes for civil disputes.

Kagiso uses machine learning to match cases with professional mediators who have the most relevant skill sets to be effective – such as subject matter experience or knowledge of local languages – and stores records using blockchain technology.

The second prize was awarded to Bua, a voice-recognition system that allows victims of crime to record their own statements in their own language in a private “safe space” such as a kiosk or on their own phone.

The majority of crimes in South Africa go unreported or end in failed prosecutions. A leading reason is that victims don’t feel comfortable giving statements in open police stations, and statements are often badly or wilfully mistranslated.

The Global Legal Hackathon is a non-profit organization that organizes law schools, law firms and in-house departments, legal technology companies, governments, and service providers to drive innovation in the legal industry – across the globe. It brings together the best thinkers, doers and practitioners in law in support of a unified vision: rapid development of solutions to improve the legal industry, world-wide.

From DSC:
A glance through the awards suggests where the future of the legal field is going…at least in part.

 

Also see:

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
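
Crawford’s point can be demonstrated with a toy simulation (not any vendor’s actual system). If patrols follow the model’s predictions, and new arrest records can only come from patrolled districts, then a district that was over-policed in the historical data keeps getting flagged, even when the true crime rate is identical everywhere:

```python
import random

random.seed(42)

true_crime_rate = [0.10, 0.10, 0.10, 0.10]  # identical in all four districts
arrests = [50, 10, 10, 10]                  # district 0 was over-policed historically

for day in range(365):
    # The "model": predict tomorrow's hot spot from arrest counts alone.
    hot_spot = arrests.index(max(arrests))
    # Patrols go where the model points, so only that district can
    # generate new arrest records -- the feedback loop.
    if random.random() < true_crime_rate[hot_spot]:
        arrests[hot_spot] += 1

print(arrests)  # district 0 keeps climbing; the others never get a look
```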

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss face a lot of challenges every day in performing many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
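
The pipeline described there (camera frames in, spoken cues out) might look something like the sketch below. The `detect_objects` stub stands in for a real on-device vision model; none of this is AiServe’s actual code or API, which isn’t public.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "curb", "light post", "parked car"
    bearing_deg: float  # negative = to the walker's left, positive = right
    distance_m: float

def detect_objects(frame):
    # Placeholder for a real on-device vision model; these detections are
    # hard-coded for illustration.
    return [Detection("curb", -20.0, 2.0), Detection("light post", 15.0, 8.0)]

def narrate(frame, speak):
    # Announce nearby hazards, nearest first; ignore anything far away.
    for d in sorted(detect_objects(frame), key=lambda d: d.distance_m):
        if d.distance_m <= 5.0:
            side = "left" if d.bearing_deg < 0 else "right"
            speak(f"{d.label}, {d.distance_m:.0f} meters, on your {side}")

narrate(frame=None, speak=print)  # -> "curb, 2 meters, on your left"
```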

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens as some of these emerging technologies get combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But focusing in on that last article especially: to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked, or of having whatever characteristic one’s government decides to track? What forums are there for combating technologies, or features of technologies, that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.

C-Level View | Feature:
Technology Change: Closing the Knowledge Gap — A Q&A with Mary Grush & Daniel Christian

Excerpts:

Technology changes quickly. People change slowly. The rate of technology change often outpaces our ability to understand it.

It has caused a gap between what’s possible and what’s legal. For example, facial recognition seems to be starting to show up all around us — that’s what’s possible. But what’s legal?

The overarching questions are: What do we really want from these technologies? What kind of future do we want to live in?

Those law schools that expand their understanding of emerging technologies and lead the field in the exploration of related legal issues will achieve greater national prominence.

Daniel Christian

For a next gen learning platform: A Netflix-like interface to check out potential functionalities / educationally-related “apps” [Christian]

From DSC:
In a next generation learning system, it would be sharp/beneficial to have a Netflix-like interface to check out potential functionalities that you could turn on and off (at will) — as one component of a learning ecosystem that could include a setup in your living room or office.

For example, imagine a Netflix-like interface to the apps out at eduappcenter.com (i.e., using a rolling carousel at first, then moving to a static page/listing of apps…again, similar to Netflix).

 

A Netflix-like interface to check out potential functionalities / educationally-related apps
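
To make the idea a bit more concrete, here is a small sketch of one possible data model: a catalog of educationally-related apps grouped into Netflix-style rows, each of which a learner can toggle on or off at will. The app names and categories are made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class LearningApp:
    name: str
    category: str        # the Netflix-style "row" the app appears in
    enabled: bool = False

@dataclass
class LearnerProfile:
    catalog: list[LearningApp] = field(default_factory=list)

    def rows(self) -> dict[str, list[LearningApp]]:
        """Group the catalog by category for a row-based browsing UI."""
        grouped: dict[str, list[LearningApp]] = {}
        for app in self.catalog:
            grouped.setdefault(app.category, []).append(app)
        return grouped

    def toggle(self, name: str) -> None:
        """Turn a functionality on or off, at will."""
        for app in self.catalog:
            if app.name == name:
                app.enabled = not app.enabled

profile = LearnerProfile([
    LearningApp("Flashcards", "Study tools"),
    LearningApp("Peer review", "Collaboration"),
    LearningApp("Lecture capture", "Video"),
])
profile.toggle("Flashcards")
for category, apps in profile.rows().items():
    print(category, [(a.name, a.enabled) for a in apps])
```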

Accenture Technology Vision 2019: The post-digital era is upon us — from accenture.com

In brief

  • Digital transformation grants companies exceptional capabilities. But it also creates enormous expectations.
  • Amid these rising expectations, every business is investing in digital technologies, raising the question of how leaders will set themselves apart.
  • Companies looking to differentiate themselves must be aware of five distinct trends that will characterize the “post-digital” future.

Here is the link for the report.

Emerging technology trends can seem both elusive and ephemeral, but some become integral to business and IT strategies—and form the backbone of tomorrow’s technology innovation. The eight chapters of Tech Trends 2019 look to guide CIOs through today’s most promising trends, with an eye toward innovation and growth and a spotlight on emerging trends that may well offer new avenues for pursuing strategic ambitions.

Amazon is pushing facial technology that a study says could be biased — from nytimes.com by Natasha Singer
In new tests, Amazon’s system had more difficulty identifying the gender of female and darker-skinned faces than similar services from IBM and Microsoft.

Excerpt:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.
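
At its core, the study’s method is disaggregated evaluation: measure the classifier’s error rate per subgroup rather than overall. A toy version of that bookkeeping, using fabricated records, looks like this:

```python
from collections import defaultdict

# (subgroup, predicted gender, true gender) -- fabricated records for
# illustration; the MIT study grouped faces by gender and skin type.
results = [
    ("darker-skinned female",  "male",   "female"),
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned male",    "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned male",   "male",   "male"),
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, predicted, actual in results:
    tallies[group][0] += predicted != actual
    tallies[group][1] += 1

for group, (errors, total) in tallies.items():
    print(f"{group}: {errors}/{total} misclassified ({errors / total:.0%})")
```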
