A Chinese subway is experimenting with facial recognition to pay for fares — from theverge.com by Shannon Liao

Excerpt:

Scanning your face on a screen to get into the subway might not be that far off in the future. In China’s tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post.

The trial is limited to a single station thus far, and it’s not immediately clear how this will work for twins or lookalikes. People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.
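
To make the described flow concrete, here is a minimal sketch in Python. Every name in it is invented (this is not Shenzhen Metro's actual system): riders enroll a facial template and a linked balance ahead of time, and the gate later matches a live scan against enrolled templates and deducts the fare automatically. Exact byte equality stands in for the embedding-similarity comparison a real system would use.

```python
from dataclasses import dataclass

@dataclass
class Rider:
    rider_id: str
    face_template: bytes  # embedding captured when the rider registers their face
    balance_cents: int    # linked payment account, in cents

FARE_CENTS = 200
enrolled: dict[str, Rider] = {}

def enroll(rider_id: str, face_template: bytes, balance_cents: int) -> None:
    """Register facial data and a payment method ahead of time."""
    enrolled[rider_id] = Rider(rider_id, face_template, balance_cents)

def match(live_scan: bytes) -> Rider | None:
    # A real system compares embeddings against a similarity threshold;
    # exact byte equality stands in for that comparison here.
    return next((r for r in enrolled.values() if r.face_template == live_scan), None)

def gate_entry(live_scan: bytes) -> bool:
    """Scan at the gate: match the face, then auto-deduct the fare."""
    rider = match(live_scan)
    if rider is None or rider.balance_cents < FARE_CENTS:
        return False  # unknown face or insufficient funds: gate stays closed
    rider.balance_cents -= FARE_CENTS
    return True

enroll("r1", b"template-r1", balance_cents=1000)
assert gate_entry(b"template-r1")      # enrolled rider passes and is charged
assert not gate_entry(b"template-xx")  # unregistered face is rejected
```

Note how the similarity threshold is where the "twins or lookalikes" question from the excerpt would bite: set it too loose and a lookalike gets charged; too strict and legitimate riders get rejected.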

 

 

From DSC:
I don’t want this type of thing here in the United States. But…now what do I do? What about you? What can we do? What paths are open to us to stop this?

I would argue that the new, developing, technological “Wild Wests” in many societies throughout the globe could be dangerous to our futures. Why? Because the pace of change has changed. And these new Wild Wests now have emerging, powerful, ever-more invasive (i.e., privacy-stealing) technologies to deal with — the likes of which the world has never seen or encountered before. With this new, rapid pace of change, societies aren’t able to keep up.

And who is going to use the data? Governments? Large tech companies? Others?

Don’t get me wrong, I’m generally pro-technology. But this new pace of change could wreak havoc on us. We need time to weigh in on these emerging techs.

 

Addendum on 3/20/19:

  • Chinese Facial Recognition Database Exposes 2.5 Million People — from futurumresearch.com by Shelly Kramer
    Excerpt:
    An artificial intelligence company operating a facial recognition system in China recently left its database exposed online, leaving the personal information of some 2.5 million Chinese citizens vulnerable. Considering how much the Chinese government relies on facial recognition technology, this is a big deal—for both the Chinese government and Chinese citizens.

 

 

Huge study finds professors’ attitudes affect students’ grades — and it’s doubly true for minority students. — from arstechnica.com by Scott Johnson

Excerpt:

Instead, the researchers think the data suggests that—in any number of small ways—instructors who think their students’ intelligence is fixed don’t keep their students as motivated, and perhaps don’t focus as much on teaching techniques that can encourage growth. And while this affects all students, it seems to have an extra impact on underrepresented minority students.

The good news, the researchers say, is that instructors can be persuaded to adopt more of a growth mindset in their teaching through a little education of their own. That small attitude adjustment could make them more effective teachers, to the significant benefit of a large number of students.

 

How MIT’s Mini Cheetah Can Help Accelerate Robotics Research — from spectrum.ieee.org by Evan Ackerman
Sangbae Kim talks to us about the new Mini Cheetah quadruped and his future plans for the robot

 

 

From DSC:
Sorry, but while the video/robot is incredible, a feeling in the pit of my stomach makes me reflect upon what’s likely happening along these lines in militaries around the globe…I don’t mean to be a fearmonger, but rather a realist.

 

 

Isaiah 58:6-11 New International Version (NIV) — from biblegateway.com

“Is not this the kind of fasting I have chosen:
to loose the chains of injustice
    and untie the cords of the yoke,
to set the oppressed free
    and break every yoke?
Is it not to share your food with the hungry
    and to provide the poor wanderer with shelter—
when you see the naked, to clothe them,
    and not to turn away from your own flesh and blood?
Then your light will break forth like the dawn,
    and your healing will quickly appear;
then your righteousness will go before you,
    and the glory of the Lord will be your rear guard.
Then you will call, and the Lord will answer;
    you will cry for help, and he will say: Here am I.

“If you do away with the yoke of oppression,
    with the pointing finger and malicious talk,
and if you spend yourselves in behalf of the hungry
    and satisfy the needs of the oppressed,
then your light will rise in the darkness,
    and your night will become like the noonday.
The Lord will guide you always;
    he will satisfy your needs in a sun-scorched land
    and will strengthen your frame.
You will be like a well-watered garden,
    like a spring whose waters never fail.

 

 

 

Joint CS and Philosophy Initiative, Embedded EthiCS, Triples in Size to 12 Courses — from thecrimson.com by Ruth Hailu and Amy Jia

Excerpt:

The idea behind the Embedded EthiCS initiative arose three years ago after students in Grosz’s course, CS 108: “Intelligent Systems: Design and Ethical Challenges,” pushed for an increased emphasis on ethical reasoning within discussions surrounding technology, according to Grosz and Simmons. One student suggested Grosz reach out to Simmons, who also recognized the importance of an interdisciplinary approach to computer science.

“Not only are today’s students going to be designing technology in the future, but some of them are going to go into government and be working on regulation,” Simmons said. “They need to understand how [ethical issues] crop up, and they need to be able to identify them.”

 

 

Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.
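
To make Crawford’s point concrete, here is a small, self-contained simulation (hypothetical numbers, not from the AI Now study) of the feedback loop that “dirty data” creates. Two districts have identical true crime rates, but recorded arrests scale with patrol presence, and next year’s patrols are allocated from past arrest counts, so the historically over-policed district keeps “predicting” more crime.

```python
import random

random.seed(42)

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}  # both districts are identical in reality
patrols = {"A": 80, "B": 20}              # but district A was historically over-policed
arrest_history = {"A": 0, "B": 0}

for year in range(5):
    for district, n_patrols in patrols.items():
        # Recorded arrests scale with patrol presence, not just with crime,
        # so the historical data is "dirty" from the start.
        arrests = sum(random.random() < TRUE_CRIME_RATE[district] for _ in range(n_patrols))
        arrest_history[district] += arrests

    # "Predictive" step: allocate next year's 100 patrols in proportion to past arrests.
    total = sum(arrest_history.values()) or 1
    patrols = {d: round(100 * arrest_history[d] / total) for d in arrest_history}
    print(f"year {year}: arrest history={arrest_history}, next year's patrols={patrols}")
```

District A’s head start in the records keeps attracting patrols, and the loop never discovers that the two districts were the same all along.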

 

How AI is enhancing wearables — from techopedia.com by Claudio Butticev
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.
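
As a rough illustration of the kind of pipeline described above (invented types and thresholds, not AiServe’s actual code), here is a sketch that turns vision-model detections and bearings into short audio cues for the user:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # from a vision model, e.g. "light post", "curb", "parked car"
    bearing_deg: float  # negative = left of the user, positive = right
    distance_m: float

def audio_cue(d: Detection) -> str | None:
    """Convert one detection into a short spoken cue, or None if it can wait."""
    if d.distance_m > 5.0:
        return None  # too far away to announce yet
    side = "left" if d.bearing_deg < 0 else "right"
    return f"{d.label}, {d.distance_m:.0f} meters ahead on your {side}"

detections = [
    Detection("light post", -10.0, 2.5),
    Detection("parked car", 25.0, 4.0),
    Detection("bench", 5.0, 12.0),  # filtered out: beyond announcement range
]
for cue in filter(None, map(audio_cue, detections)):
    print(cue)  # in a wearable, this string would go to a text-to-speech engine
```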

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens in the future when some of these emerging technologies are combined…again, for good or for ill.

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impact the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

 

 

Towards a Reskilling Revolution: Industry-Led Action for the Future of Work — from weforum.org

As the Fourth Industrial Revolution impacts skills, tasks and jobs, there is growing concern that both job displacement and talent shortages will impact business dynamism and societal cohesion. A proactive and strategic effort is needed on the part of all relevant stakeholders to manage reskilling and upskilling to mitigate against both job losses and talent shortages.

Through the Preparing for the Future of Work project, the World Economic Forum provides a platform for designing and implementing intra-industry collaboration on the future of work, working closely with the public sector, unions and educators. The output of the project’s first phase of work, Towards a Reskilling Revolution: A Future of Jobs for All, highlighted an innovative method to identify viable and desirable job transition pathways for disrupted workers. This second report, Towards a Reskilling Revolution: Industry-Led Action for the Future of Work, extends our previous research to assess the business case for reskilling and establish its magnitude for different stakeholders. It also outlines a roadmap for selected industries to address specific challenges and opportunities related to the transformation of their workforce.

 

See the PDF file / report here.

 

 

 

 

Facebook’s ’10 year’ challenge is just a harmless meme — right? — from wired.com by Kate O’Neill

Excerpts:

But the technology raises major privacy concerns; the police could use the technology not only to track people who are suspected of having committed crimes, but also people who are not committing crimes, such as protesters and others whom the police deem a nuisance.

It’s tough to overstate the fullness of how technology stands to impact humanity. The opportunity exists for us to make it better, but to do that we also must recognize some of the ways in which it can get worse. Once we understand the issues, it’s up to all of us to weigh in.

 

The five most important new jobs in AI, according to KPMG — from qz.com by Cassie Werber

Excerpt:

Perhaps as a counter to the panic that artificial intelligence will destroy jobs, consulting firm KPMG published a list (on 1/8/19) of what it predicts will soon become the five most sought-after AI roles. The predictions are based on the company’s own projects and those on which it advises. They are:

  • AI Architect – Responsible for working out where AI can help a business, measuring performance and—crucially—“sustaining the AI model over time.” Lack of architects “is a big reason why companies cannot successfully sustain AI initiatives,” KPMG notes.
  • AI Product Manager – Liaises between teams, making sure ideas can be implemented, especially at scale. Works closely with architects, and with human resources departments to make sure humans and machines can all work effectively.
  • Data Scientist – Manages the huge amounts of available data and designs algorithms to make it meaningful.
  • AI Technology Software Engineer – “One of the biggest problems facing businesses is getting AI from pilot phase to scalable deployment,” KPMG writes. Software engineers need to be able both to build scalable technology and understand how AI actually works.
  • AI Ethicist – AI presents a host of ethical challenges which will continue to unfold as the technology develops. Creating guidelines and ensuring they’re upheld will increasingly become a full-time job.

 

While it’s all very well to list the jobs people should be training and hiring for, it’s another matter to actually create a pipeline of people ready to enter those roles. Brad Fisher, KPMG’s US lead on data and analytics and the lead author of the predictions, tells Quartz there aren’t enough people getting ready for these roles.

 

Fisher has a steer for those who are eyeing AI jobs but have yet to choose an academic path: business process skills can be “trained,” he said, but “there is no substitute for the deep technical skillsets, such as mathematics, econometrics, or computer science, which would prepare someone to be a data scientist or a big-data software engineer.”

 

From DSC:
I don’t think institutions of higher education (as well as several other types of institutions in our society) are recognizing that the pace of technological change has changed, and that those changes have significant ramifications for society. And if these institutions have picked up on it, you can hardly tell. We simply aren’t used to this pace of change.

Technologies change quickly. People change slowly. And, by the way, that is not a comment on how old someone is…change is hard at almost any age.

 

 

 

 

 

Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.

 

“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”

 

Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

 

 

 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).


 

5 questions we should be asking about automation and jobs — from hbr.org by Jed Kolko

Excerpts:

  1. Will workers whose jobs are automated be able to transition to new jobs?*
  2. Who will bear the burden of automation?
  3. How will automation affect the supply of labor?
  4. How will automation affect wages, and how will wages affect automation?
  5. How will automation change job searching?

 

From DSC:
For those Economics profs and students out there, I’m posting this with you in mind; it’s also highly applicable and relevant to MBA programs.

* I would add a few follow-up questions to question #1 above:

  • To which jobs should they transition?
  • Who can help identify the jobs that might be safe for 5-10 years?
  • If you have a family to feed, how are you going to be able to reinvent yourself quickly and as efficiently/flexibly as possible? (Yes…constant, online-based learning comes to my mind as well, as campus-based education is great, but very time-consuming.)

 

Also see:

We Still Don’t Know Much About the Jobs the AI Economy Will Make — or Take — from medium.com by Rachel Metz with MIT Technology Review
Experts think companies need to invest in workers the way they do for other core aspects of their business they’re looking to future-proof

One big problem that could have lasting effects, she thinks, is a mismatch between the skills companies need in new employees and those that employees have or know that they can readily acquire. To fix this, she said, companies need to start investing in their workers the way they do their supply chains.

 

Per LinkedIn:

Putting robots to work is becoming more and more popular, particularly in Europe. According to the European Bank for Reconstruction and Development, Slovakian workers face a 62% median probability that their job will be automated “in the near future.” Workers in Eastern Europe face the biggest likelihood of having their jobs overtaken by machines, with the textile, agriculture and manufacturing industries seen as the most vulnerable.

 

Robot Ready: Human+ Skills for the Future of Work — from economicmodeling.com

Key Findings

In Robot-Ready, we examine several striking insights:

1. Human skills—like leadership, communication, and problem solving—are among the most in-demand skills in the labor market.

2. Human skills are applied differently across career fields. To be effective, liberal arts grads must adapt their skills to the job at hand.

3. Liberal arts grads should add technical skills. There is considerable demand for workers who complement their human skills with basic technical skills like data analysis and digital fluency.

4. Human+ skills are at work in a variety of fields. Human skills help liberal arts grads thrive in many career areas, including marketing, public relations, technology, and sales.

 

 

 

From DSC:
Not too long ago, I really enjoyed watching a program on PBS regarding America’s 100 most-loved books, entitled, “The Great American Read.”

 

Watch “The Grand Finale”

 

While that’s not the show I’m proposing here, it got me thinking of something similar — educational, yet entertaining. But also, something more.

The program that came to my mind would focus on significant topics and issues within American society — offered up in a debate/presentation-style format.

For example, you could have different individuals, groups, or organizations discuss the pros and cons of an issue or topic. The show would provide contact information for helpful resources, groups, organizations, legislators, etc.  These contacts would be for learning more about a subject or getting involved with finding a solution for that problem.

For example, how about this for a potential topic: Grades or no grades?
  • What are the pros and cons of using an A-F grading system?
  • What are the benefits and issues/drawbacks with using grades? 
  • How are we truly using grades? Do we use them to rank and compare individuals, schools, school systems, communities? Do we use them to “weed people out” of a program?
  • With our current systems, what “product” do we get? Do we produce game-players or people who enjoy learning? (Apologies for some of my bias showing up here! But my son has become a major game-player and, likely, so did I at his age.)
  • How do grades jibe with Individualized Education Programs (IEPs)? On one hand, how do you keep someone moving forward, staying positive, and trying to keep learning/school enjoyable? Yet on the other hand, how do you make those grades mean something to those who obtain data to rank school systems, communities, colleges, programs, etc.?
  • How do grades impact one’s desire to learn throughout one’s lifetime?

Such debates could be watched by students and then they could have their own debates on subjects that they propose.

Or the show could have journalists debate college or high school teams. The format could sometimes involve professors and deans debating against researchers. Or practitioners/teachers debating against researchers/cognitive psychologists. 

Such a show could be entertaining, yet highly applicable and educational. We would probably all learn something. And perhaps have our eyes opened up to a new perspective on an issue.

Or better yet, we might actually resolve some more issues and then move on to address other ones!

 

 

5 things you will see in the future “smart city” — from interestingengineering.com by Taylor Donovan Barnett
The Smart City is on the horizon, and here are some of the crucial technologies that are part of it.

Excerpt:

A New Framework: The Smart City
So, what exactly is a smart city? A smart city is an urban center that hosts a wide range of digital technology across its ecosystem. However, smart cities go far beyond just this definition.

Smart cities use technology to better their populations’ living experiences, operating as one big data-driven ecosystem.

The smart city uses that data from the people, vehicles, buildings, etc. to not only improve citizens’ lives but also minimize the environmental impact of the city itself, constantly communicating with itself to maximize efficiency.

So what are some of the crucial components of the future smart city? Here is what you should know.
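
As a toy illustration of that “one big data-driven ecosystem” idea (all names and numbers below are invented), here is a sketch in which readings from vehicles, pedestrians, and buildings feed a single loop that adjusts a traffic signal’s green time:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    source: str        # "vehicle", "pedestrian", "building", ...
    intersection: str
    queue_length: int  # vehicles or people currently waiting

def green_seconds(readings: list[SensorReading], base: int = 30) -> int:
    """Stretch a signal's green phase when sensors report longer queues."""
    if not readings:
        return base
    avg_queue = mean(r.queue_length for r in readings)
    return min(90, base + 2 * round(avg_queue))  # capped at 90 seconds

readings = [
    SensorReading("vehicle", "5th & Main", 12),
    SensorReading("pedestrian", "5th & Main", 4),
    SensorReading("building", "5th & Main", 9),  # e.g., a garage exit sensor
]
print(green_seconds(readings))  # 46: the base 30s stretched toward the cap
```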

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either).

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder: how did we get to this place?

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.
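
For the technically curious, the kind of one-to-many test the ACLU ran can be approximated with Amazon’s own SDK. Below is a hedged sketch using boto3’s real Rekognition compare_faces call; the file names are hypothetical, running it requires AWS credentials and the boto3 package, and the results depend heavily on the similarity threshold (the ACLU reportedly used the default of 80%, while Amazon recommends 99% for law-enforcement use).

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def load(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

def find_matches(portrait: str, mugshots: list[str], threshold: float = 80.0):
    """Compare one portrait against many mugshots; return the paths that 'match'."""
    source = load(portrait)
    matches = []
    for path in mugshots:
        resp = rekognition.compare_faces(
            SourceImage={"Bytes": source},
            TargetImage={"Bytes": load(path)},
            SimilarityThreshold=threshold,
        )
        matches += [(path, m["Similarity"]) for m in resp["FaceMatches"]]
    return matches

# e.g. find_matches("portrait.jpg", ["mugshot_001.jpg", "mugshot_002.jpg"])
```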

 

Just because we can does not mean we should…

 

Addendum on 12/27/18 — also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt (emphasis DSC):

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016 when Russian trolls attempted to meddle with the 2016 U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which, for years, served as the Silicon Valley poster child of the nouveau American Dream.

 

 