India Just Swore in Its First Robot Police Officer — from futurism.com by Dan Robitzski
RoboCop, meet KP-Bot.

Excerpt:

India just swore in its first robotic police officer, which is named KP-Bot.

The animatronic-looking machine was granted the rank of sub-inspector on Tuesday, and it will operate the front desk of Thiruvananthapuram police headquarters, according to India Today.

 

 

From DSC:
Whoa….hmmm…note to the ABA and to the legal education field — and actually to anyone involved in developing laws — we need to catch up. Quickly.

My thoughts go to the governments and to the militaries around the globe. Are we now on a slippery slope? How far along are the militaries of the world in integrating robotics and AI into their weapons of war? Quite far, I think.

Also, at the higher education level, are Computer Science and Engineering Departments taking their responsibilities seriously in this regard? What kind of teaching is being done (or not done) in terms of the moral responsibilities of their code? Their robots?

 

 

 

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.

 

 

 

The real reason tech struggles with algorithmic bias — from wired.com by Yael Eisenstat

Excerpts:

Are machines racist? Are algorithms and artificial intelligence inherently prejudiced? Do Facebook, Google, and Twitter have political biases? Those answers are complicated.

But if the question is whether the tech industry is doing enough to address these biases, the straightforward response is no.

Humans cannot wholly avoid bias, as countless studies and publications have shown. Insisting otherwise is an intellectually dishonest and lazy response to a very real problem.

In my six months at Facebook, where I was hired to be the head of global elections integrity ops in the company’s business integrity division, I participated in numerous discussions about the topic. I did not know anyone who intentionally wanted to incorporate bias into their work. But I also did not find anyone who actually knew what it meant to counter bias in any true and methodical way.

 

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are ingrained within the Silicon Valley tech and innovation scene.

 

 

Why Facebook’s banned “Research” app was so invasive — from wired.com by Louise Matsakis

Excerpts:

Facebook reportedly paid users between the ages of 13 and 35 $20 a month to download the app through beta-testing companies like Applause, BetaBound, and uTest.


Apple typically doesn’t allow app developers to go around the App Store, but its enterprise program is one exception. It’s what allows companies to create custom apps not meant to be downloaded publicly, like an iPad app for signing guests into a corporate office. But Facebook used this program for a consumer research app, which Apple says violates its rules. “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple,” a spokesperson said in a statement. “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” Facebook didn’t respond to a request for comment.

Facebook needed to bypass Apple’s usual policies because its Research app is particularly invasive. First, it requires users to install what is known as a “root certificate.” This lets Facebook look at much of your browsing history and other network data, even if it’s encrypted. The certificate is like a shape-shifting passport—with it, Facebook can pretend to be almost anyone it wants.

To use a nondigital analogy, Facebook not only intercepted every letter participants sent and received, it also had the ability to open and read them. All for $20 a month!
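To make the root-certificate point concrete, here is a minimal sketch of why installing someone else’s root CA is so invasive. All the names here are hypothetical and it uses the third-party `cryptography` package; the point is simply that whoever holds a trusted root’s private key can mint a valid-looking certificate for any hostname, and a client that trusts that root will accept it:

```python
# Sketch: a root CA's private key can sign a certificate for ANY hostname.
# Hypothetical names throughout; requires the `cryptography` package.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.x509.oid import NameOID


def make_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_cert(subject, issuer, public_key, signing_key):
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer)]))
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=1))
        .sign(signing_key, hashes.SHA256())
    )


# Step 1: the app installs a root certificate like this on the device.
root_key = make_key()
root_cert = make_cert("Example Research Root CA", "Example Research Root CA",
                      root_key.public_key(), root_key)

# Step 2: the app's proxy can now impersonate any site it likes.
site_key = make_key()
fake_cert = make_cert("www.example-bank.com", "Example Research Root CA",
                      site_key.public_key(), root_key)

# Step 3: any client that trusts the root finds the forged cert's signature valid.
root_cert.public_key().verify(
    fake_cert.signature,
    fake_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    fake_cert.signature_hash_algorithm,
)
print("forged certificate for", fake_cert.subject.rfc4514_string(),
      "chains to the installed root")
```

This is the general mechanism behind TLS interception, and it is why browsers and operating systems guard their root stores so carefully.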

Facebook’s latest privacy scandal is a good reminder to be wary of mobile apps that aren’t available for download in official app stores. It’s easy to overlook how much of your information might be collected, or to accidentally install a malicious version of Fortnite, for instance. VPNs can be great privacy tools, but many free ones sell their users’ data in order to make money. Before downloading anything, especially an app that promises to earn you some extra cash, it’s always worth taking another look at the risks involved.

 

LinkedIn 2019 Talent Trends: Soft Skills, Transparency and Trust — from linkedin.com by Josh Bersin

Excerpts:

This week LinkedIn released its 2019 Global Talent Trends research, a study that summarizes job and hiring data across millions of people, and the results are quite interesting. (5,165 talent professionals and hiring managers responded, a big sample.)

In an era when automation, AI, and technology have become more pervasive, more important (and more frightening) than ever, the big issue companies face is about people: how we find and develop soft skills, how we create fairness and transparency, and how we make the workplace more flexible, humane, and honest.

The most interesting part of this research is a simple fact: in today’s world of software engineering and ever-more technology, it’s soft skills that employers want. 91% of companies cited this as an issue and 80% of companies are struggling to find better soft skills in the market.

What is a “soft skill?” The term goes back twenty years, to when we had “hard skills” (engineering and science), so we threw everything else into the category of “soft.” In reality, soft skills are all the human skills we have in teamwork, leadership, collaboration, communication, creativity, and person-to-person service. It’s easy to “teach” hard skills, but soft skills must be “learned.”

 

 

Also see:

Employers Want ‘Uniquely Human Skills’ — from campustechnology.com by Dian Schaffhauser

Excerpt:

According to 502 hiring managers and 150 HR decision-makers, the top skills they’re hunting for among new hires are:

  • The ability to listen (74 percent);
  • Attention to detail and attentiveness (70 percent);
  • Effective communication (69 percent);
  • Critical thinking (67 percent);
  • Strong interpersonal abilities (65 percent); and
  • Being able to keep learning (65 percent).
 

The information below is per Laura Kelley (w/ Page 1 Solutions)


As you know, Apple has shut down Facebook’s ability to distribute internal iOS apps. The shutdown comes following news that Facebook has been using Apple’s program for internal app distribution to track teenage customers for “research.”

Dan Goldstein is the president and owner of Page 1 Solutions, a full-service digital marketing agency. He manages the needs of clients along with the need to ensure protection of their consumers, which has become one of the top concerns from clients over the last year. Goldstein is also a former attorney so he balances the marketing side with the legal side when it comes to protection for both companies and their consumers. He says while this is another blow for Facebook, it speaks volumes for Apple and its concern for consumers,

“Facebook continues to demonstrate that it does not value user privacy. The most disturbing thing about this news is that Facebook knew that its app violated Apple’s terms of service and continued to distribute the app to consumers after it was banned from the App Store. This shows, once again, that Facebook doesn’t value user privacy and goes to great lengths to collect private behavioral data to give it a competitive advantage. The FTC is already investigating Facebook’s privacy policies and practices. As Facebook’s efforts to collect and use private data continue to be exposed, it risks losing market share and may prompt additional governmental investigations and regulation,” Goldstein says.

“One positive that comes out of this story is that Apple seems to be taking a harder line on protecting user privacy than other tech companies. Apple has been making noises about protecting user privacy for several months. This action indicates that it is attempting to follow through on its promises,” Goldstein says.

 

 

Amazon is pushing facial technology that a study says could be biased — from nytimes.com by Natasha Singer
In new tests, Amazon’s system had more difficulty identifying the gender of female and darker-skinned faces than similar services from IBM and Microsoft.

Excerpt:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.
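The audit method behind findings like these is straightforward to sketch: run the classifier over labeled photos, then compare its error rate across demographic subgroups. The records below are invented for illustration and do not reproduce the MIT Media Lab’s data:

```python
# Toy bias audit: per-subgroup error rates for a gender classifier.
# The records are invented; only the method mirrors the study described above.
from collections import defaultdict

# (subgroup, true_label, predicted_label) for each audited photo
records = [
    ("lighter-male",   "male",   "male"),
    ("lighter-male",   "male",   "male"),
    ("lighter-female", "female", "female"),
    ("lighter-female", "female", "male"),
    ("darker-male",    "male",   "male"),
    ("darker-male",    "male",   "female"),
    ("darker-female",  "female", "male"),
    ("darker-female",  "female", "male"),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    errors[group] += (truth != pred)

# An aggregate accuracy number hides exactly the disparity this exposes.
error_rates = {g: errors[g] / totals[g] for g in totals}
for group, rate in sorted(error_rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{group:15s} error rate: {rate:.0%}")
```

The disparity between the best- and worst-served subgroups, not the overall accuracy, is the headline number in audits of this kind.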

 

 

Facebook’s ’10 year’ challenge is just a harmless meme — right? — from wired.com by Kate O’Neill

Excerpts:

But the technology raises major privacy concerns; the police could use the technology not only to track people who are suspected of having committed crimes, but also people who are not committing crimes, such as protesters and others whom the police deem a nuisance.

It’s tough to overstate the fullness of how technology stands to impact humanity. The opportunity exists for us to make it better, but to do that we also must recognize some of the ways in which it can get worse. Once we understand the issues, it’s up to all of us to weigh in.

 

The five most important new jobs in AI, according to KPMG — from qz.com by Cassie Werber

Excerpt:

Perhaps as a counter to the panic that artificial intelligence will destroy jobs, consulting firm KPMG published a list (on 1/8/19) of what it predicts will soon become the five most sought-after AI roles. The predictions are based on the company’s own projects and those on which it advises. They are:

  • AI Architect – Responsible for working out where AI can help a business, measuring performance and—crucially— “sustaining the AI model over time.” Lack of architects “is a big reason why companies cannot successfully sustain AI initiatives,” KPMG notes.
  • AI Product Manager – Liaises between teams, making sure ideas can be implemented, especially at scale. Works closely with architects, and with human resources departments to make sure humans and machines can all work effectively.
  • Data Scientist – Manages the huge amounts of available data and designs algorithms to make it meaningful.
  • AI Technology Software Engineer – “One of the biggest problems facing businesses is getting AI from pilot phase to scalable deployment,” KPMG writes. Software engineers need to be able both to build scalable technology and to understand how AI actually works.
  • AI Ethicist – AI presents a host of ethical challenges which will continue to unfold as the technology develops. Creating guidelines and ensuring they’re upheld will increasingly become a full-time job.

 

While it’s all very well to list the jobs people should be training and hiring for, it’s another matter to actually create a pipeline of people ready to enter those roles. Brad Fisher, KPMG’s US lead on data and analytics and the lead author of the predictions, tells Quartz there aren’t enough people getting ready for these roles.

 

Fisher has a steer for those who are eyeing AI jobs but have yet to choose an academic path: business process skills can be “trained,” he said, but “there is no substitute for the deep technical skillsets, such as mathematics, econometrics, or computer science, which would prepare someone to be a data scientist or a big-data software engineer.”

 

From DSC:
I don’t think institutions of higher education (as well as several other types of institutions in our society) are recognizing that the pace of technological change has changed, and that there are significant ramifications to those changes upon society. And if these institutions have picked up on it, you can hardly tell. We simply aren’t used to this pace of change.

Technologies change quickly. People change slowly. And, by the way, that is not a comment on how old someone is…change is hard at almost any age.

 

 

 

 

 

Ten HR trends in the age of artificial intelligence — from fortune.com by Jeanne Meister
The future of HR is both digital and human as HR leaders focus on optimizing the combination of human and automated work. This is driving a new HR priority: requiring leaders and teams to develop fluency in artificial intelligence while they re-imagine HR to be more personal, human, and intuitive.

Excerpt from 21 More Jobs Of the Future (emphasis DSC):

Voice UX Designer: This role will leverage voice as a platform to deliver an “optimal” dialect and sound that is pleasing to each of the seven billion humans on the planet. The Voice UX Designer will do this by creating a set of AI tools and algorithms to help individuals find their “perfect voice” assistant.

Head of Business Behavior: The head of business behavior will analyze employee behavioral data such as performance data along with data gathered through personal, environmental and spatial sensors to create strategies to improve employee experience, cross company collaboration, productivity and employee well-being.

The question for HR leaders is: What are new job roles in HR that are on the horizon as A.I. becomes integrated into the workplace?

Chief Ethical and Humane Use Officer: This job role is already being filled, with Salesforce announcing its first Chief Ethical and Humane Use Officer this month. This new role will focus on developing strategies to use technology in an ethical and humane way. As practical uses of AI have exploded in recent years, we look for more companies to establish new jobs focusing on ethical uses of AI to ensure AI’s trustworthiness, while also helping to diffuse fears about it.

A.I. Trainer: This role prepares the existing knowledge you have about a job for A.I. to use. Creating knowledge for an A.I.-supported workplace requires individuals to tag or “annotate” discrete knowledge nuggets so the correct data is served up in a conversational interface. This role is increasingly important as the role of a recruiter is augmented by AI.

 

 

Also see:

  • Experts Weigh in on Merits of AI in Education — from campustechnology.com by Dian Schaffhauser
    Excerpt:
    Will artificial intelligence make most people better off over the next decade, or will it redefine what free will means or what a human being is? A new report by the Pew Research Center has weighed in on the topic by conferring with some 979 experts, who have, in summary, predicted that networked AI “will amplify human effectiveness but also threaten human autonomy, agency and capabilities.”

    These same experts also weighed in on the expected changes in formal and informal education systems. Many mentioned seeing “more options for affordable adaptive and individualized learning solutions,” such as the use of AI assistants to enhance learning activities and their effectiveness.

 

 

Google, Facebook, and the Legal Mess Over Face Scanning — from finance.yahoo.com by Jeff John Roberts

Excerpt:

When must companies receive permission to use biometric data like your fingerprints or your face? The question is a hot topic in Illinois, where a controversial law has ensnared tech giants Facebook and Google, potentially exposing them to billions of dollars in liability over their facial recognition tools.

The lack of specific guidance from the Supreme Court has since produced ongoing confusion over what type of privacy violations can let people seek financial damages.

 


 

 

From DSC:
The legal and legislative areas need to close the gap between emerging technologies and the law.

What questions should we be asking about the skillsets that our current and future legislative representatives need? Do we need some of our representatives to be highly knowledgeable, technically speaking? 

What programs and other types of resources should we be offering our representatives to get up to speed on emerging technologies? Which blogs, websites, journals, e-newsletters, listservs, and/or other communication vehicles and/or resources should they have access to?

Along these lines, what about our judges? Can we offer them some of these resources as well? 

What changes do our law schools need to make to address this?

 

 

 

 

Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.

 

“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”

 

Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

 

 

 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs have not taken enough responsibility either)!

If you owned the car below, would you turn the keys of this $137,000+ car over to your 16- to 25-year-old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as they shove older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how did we get to this place?

Even technologists and programmers in their 20’s and 30’s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

 

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.
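As a back-of-the-envelope check, assuming the ACLU ran all 535 members of Congress through the system (as its write-up describes), the 28 false matches imply a false-match rate of roughly five percent:

```python
# Numbers from the ACLU test described above; 535 is the full membership
# of Congress, an assumption drawn from the ACLU's own description.
members_scanned = 535
false_matches = 28

rate = false_matches / members_scanned
print(f"false-match rate: {rate:.1%}")  # 28/535, about 5.2%
```

At that rate, a police department scanning a crowd of thousands against a mugshot database would generate a steady stream of innocent “matches.”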

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

 

Why should anyone believe Facebook anymore? — from wired.com by Fred Vogelstein

Excerpt:

Just since the end of September, Facebook announced the biggest security breach in its history, affecting more than 30 million accounts. Meanwhile, investigations in November revealed that, among other things, the company had hired a Washington firm to spread its own brand of misinformation on other platforms, including borderline anti-Semitic stories about financier George Soros. Just two weeks ago, a cache of internal emails dating back to 2012 revealed that at times Facebook thought a lot more about how to make money off users’ data than about how to protect it.

Now, according to a New York Times investigation into Facebook’s data practices published Tuesday, long after Facebook said it had taken steps to protect user data from the kinds of leakages that made Cambridge Analytica possible, the company continued to sustain special, undisclosed data-sharing arrangements with more than 150 companies—some into this year. Unlike with Cambridge Analytica, the Times says, Facebook provided access to its users’ data knowingly and on a greater scale.

 

What has enabled them to deliver these apologies, year after year, was that these sycophantic monologues were always true enough to be believable. The Times’ story calls into question every one of those apologies—especially the ones issued this year.

There’s a simple takeaway from all this, and it’s not a pretty one: Facebook is either a mendacious, arrogant corporation in the mold of a 1980s-style Wall Street firm, or it is a company in much more disarray than it has been letting on. 

It’s hard to process this without finally realizing what it is that’s made us so angry with Silicon Valley, and Facebook in particular, in 2018: We feel lied to, like these companies are playing us, their users, for chumps, and they’re also laughing at us for being so naive.

 

 

Also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt:

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016 when Russian trolls attempted to meddle with the 2016 U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which for years, served as the Silicon Valley poster child of the nouveau American Dream. 

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.
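To make those “several layers” concrete, here is a deliberately simplified sketch — illustrative only, not legal advice. The handful of jurisdictions and rules listed are a tiny, well-known subset (federal law’s one-party consent exception, California’s all-party rule); everything else about the function is a hypothetical simplification.

```python
# Toy lookup showing why "may I record this conversation?" has layered answers:
# the rule depends on jurisdiction, and the jurisdictions disagree.
# Illustrative only — NOT legal advice; only a tiny subset of rules is shown.
CONSENT_RULES = {
    "US federal": "one-party",  # one participant's consent suffices
    "California": "all-party",  # every participant must consent
    "New York": "one-party",
}

def may_record(jurisdiction, parties_consenting, total_parties):
    """Return True/False under the toy rule, or None if the jurisdiction is unknown."""
    rule = CONSENT_RULES.get(jurisdiction)
    if rule is None:
        return None  # a legal gray zone in miniature: no rule on file
    if rule == "one-party":
        return parties_consenting >= 1
    return parties_consenting == total_parties
```

Even this toy version shows the gap the article describes: the table has no entries at all for company-to-customer surveillance, so the function can only answer `None`.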

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can doesn’t mean we should. People need to be aware of — and involved in — decisions about which emerging technologies get rolled out (or not), and which of their features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., an expensive, powerful machine — to youth who lack the life experience to handle such power and, often, the proper respect for it. Many of these younger members of our society don’t own responsibility for the positive and negative impacts that such powerful technologies can have.

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom, but in some cases that’s also relevant here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perspective that is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. And many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a new job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.
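The ACLU test reportedly ran Rekognition at its default 80 percent similarity threshold, well below the level Amazon later said it recommends for law-enforcement use. The sketch below uses entirely hypothetical similarity scores (not the ACLU’s data, and not a call to Amazon’s actual API) simply to show how the choice of threshold alone changes how many “matches” a face-comparison system reports:

```python
# Hypothetical illustration of threshold sensitivity in face matching.
# The scores are fabricated similarity percentages for portrait-vs-mugshot
# comparisons; most pairs are clear non-matches, while a few sit in the
# ambiguous band where a permissive threshold yields false positives.

def count_matches(similarity_scores, threshold):
    """Count comparisons whose similarity meets or exceeds the threshold."""
    return sum(1 for score in similarity_scores if score >= threshold)

scores = [99.2, 96.5, 91.0, 88.4, 85.1, 82.3, 79.9, 60.2, 45.7, 30.1]

print(count_matches(scores, 80))  # permissive threshold reports 6 "matches"
print(count_matches(scores, 99))  # strict threshold reports only 1
```

The point is not the specific numbers but the policy question they raise: a system’s error rate is partly a configuration choice, which is exactly why Smith argues it should not be left to tech companies alone.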

 

just because we can does not mean we should

 

Just because we can…

 


Addendum on 12/27/18 — also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt (emphasis DSC):

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016, when Russian trolls attempted to meddle with the U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which, for years, served as the Silicon Valley poster child of the nouveau American Dream.

 

 
© 2024 | Daniel Christian