As Thomson Reuters readies layoffs of 3,200, what’s it mean for customers? — from lawsitesblog.com by Bob Ambrogi

Excerpts:

Thomson Reuters, the dominant provider of research and information services for the legal profession, last week announced plans to reduce its workforce by 3,200 and close 30 percent of its offices by the end of 2020. What is going on and what does it mean for the company’s customers?

The overall goal, the company said, is to create a leaner, more agile organization that will allow it to better serve its customers and shift its orientation from a content company to a software company.

“As the velocity of technology change increases and the iteration cycles become ever shorter, the new Thomson Reuters needs to run leaner, be faster and more effective,” Neil T. Masterson, co-COO, told investors. TR plans to accomplish that through three “levers,” which will result in a headcount reduction of 12 percent by 2020…

 

New operating structure of Thomson Reuters

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have.

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16- to 25-year-old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16- to 25-year-old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc. and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as less innovative or tech-savvy — a perspective that is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. And many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

 

All automated hiring software is prone to bias by default — from technologyreview.com

Excerpt:

A new report from the nonprofit Upturn analyzed some of the most prominent hiring algorithms on the market and found that, by default, such algorithms are prone to bias.

The hiring steps: Algorithms have been made to automate four primary stages of the hiring process: sourcing, screening, interviewing, and selection. The analysis found that while predictive tools were rarely deployed to make the final choice of whom to hire, they were commonly used throughout these stages to reject people.

 

“Because there are so many different points in that process where biases can emerge, employers should definitely proceed with caution,” says Bogen. “They should be transparent about what predictive tools they are using and take whatever steps they can to proactively detect and address biases that arise—and if they can’t confidently do that, they should pull the plug.”
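The kind of proactive bias detection Bogen describes often starts with a simple check: the “four-fifths rule,” under which a group’s selection rate should be at least 80 percent of the highest group’s rate. Below is a minimal, hypothetical sketch of that check in Python — the group names and numbers are made up for illustration, not drawn from the Upturn report:

```python
# Hypothetical illustration of a "four-fifths rule" screen for adverse impact.
# A group whose selection rate falls below 80% of the top group's rate is flagged.

def selection_rates(outcomes):
    """outcomes: dict of group -> (selected, applicants); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Made-up numbers: 45 of 100 applicants selected in one group, 27 of 100 in another.
outcomes = {"group_a": (45, 100), "group_b": (27, 100)}
for group, ratio in sorted(adverse_impact_ratios(outcomes).items()):
    flag = "FAILS four-fifths rule" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A real audit would go much further (statistical significance, intersectional groups, proxies for protected attributes), but this is the arithmetic at the core of many screening-stage bias checks.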

 

 

 

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads-up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, the corporate world, as well as for colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them!

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.

 

 

 
 

10 predictions for tech in 2019 — from enterprisersproject.com by Carla Rudder
IT leaders look at the road ahead and predict what’s next for containers, security, blockchain, and more

Excerpts:

We asked IT leaders and tech experts what they see on the horizon for the future of technology. We intentionally left the question open-ended, and as a result, the answers represent a broad range of what IT professionals may expect to face in the new year. Let’s dig in…

3. Security becomes must-have developer skill.
Developers who have job interviews next year will see a new question added to the usual list.

5. Ethics take center stage with tech talent
Robert Reeves, CTO and co-founder, Datical: “More companies (prompted by their employees) will become increasingly concerned about the ethics of their technology. Microsoft is raising concerns of the dangers of facial recognition technology; Google employees are very concerned about their AI products being used by the Department of Defense. The economy is good for tech right now and the job market is becoming tighter. Thus, I expect those companies to take their employees’ concerns very seriously. Of course, all bets are off when (not if) we dip into a recession. But, for 2019, be prepared for more employees of tech giants to raise ethical concerns and for those concerns to be taken seriously and addressed.”

7. Customers expect instant satisfaction
“All customers will be the customer of ‘now,’ with expectations of immediate and personalized service; single-click approval for loans, sales quotes on the spot, and deliveries in hours instead of days. The window of opportunity for customer satisfaction will keep closing and technology will evolve to keep pace. Real-time analytics will become faster and smarter as data that is external to the organization, such as social, news and weather, will be included for more insights. The move to the cloud will accelerate with the growing adoption of open-source vendors.”

 

From DSC:
Regarding #7 above…as the years progress, how do you suppose this type of environment where people expect instant satisfaction and personalized service will impact education/training?

 

 

 

GM to lay off 15 percent of salaried workers, halt production at five plants in U.S. and Canada — from washingtonpost.com by Taylor Telford

Excerpts:

Amid global restructuring, General Motors announced Monday it would reduce its North American production and its salaried and executive workforce.

These changes are part of GM’s efforts to focus its resources on self-driving and electric vehicles, as well as more efficient trucks, crossovers and SUVs, the company said in a statement.

The company also said it will cut 15 percent of its salaried workforce, laying off 25 percent of its executives to “streamline decision-making.” GM also said it will close two plants outside North America by the end of 2019. Those locations have yet to be announced.

 

From DSC to students:
Take note of this. If you’re heading for the corporate world (and other arenas as well these days), be ready for constant change. Always keep learning in order to stay marketable. In addition, hopefully you’ll be pulse-checking the relevant landscapes along the way to minimize getting broadsided. Look for signs of what’s coming down the pike and develop some potential scenarios — and your plans/responses to those scenarios.

 

 

Why blockchain is quickly becoming the gold standard for supply chains — from datafloq.com

Excerpts:

Global supply chains are complex processes. Different companies, with distinctive objectives, are working together to achieve a common goal: to bring something from A to B. For a supply chain to work, partners have to trust each other. To do so, there are multiple checks and balances, extensive documents and different checkpoints all interacting in a web of bureaucratic processes. Knowing the amount of paperwork required to send a product from farm to plate, it is remarkable that we have managed to develop global supply chains.

However, the processes in place are time-consuming, expensive and they don’t always prevent growing problems such as counterfeit products, fragmentation and falsification of data, lack of transparency, extensive settlement times and incorrect storage conditions.

Counterfeit products in particular are an extensive problem. Research showed that 20 out of 47 items audited from renowned retailers such as Amazon or eBay turned out to be counterfeit. This amounts to an estimated $1 trillion per year in lost income for retailers and manufacturers.


Three use cases of improved supply chains

  1. Beiersdorf – developing an open pallet exchange
  2. Bayer – total recall of pharmaceuticals
  3. Kellogg’s – food quality and safety first
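The tamper evidence that makes blockchains attractive for supply chains comes from hash-linking: each custody record commits to the hash of the record before it, so altering any entry breaks the chain. Here is a minimal, hypothetical Python sketch of that idea — illustrative only, and not any vendor’s actual implementation:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def make_record(prev_hash, event):
    """Create a custody record that commits to its predecessor's hash."""
    body = {"prev": prev_hash, "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify_chain(chain):
    """True only if every record links correctly and its hash matches its contents."""
    prev = GENESIS
    for rec in chain:
        body = {"prev": rec["prev"], "event": rec["event"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != recomputed:
            return False
        prev = rec["hash"]
    return True

# Record a product's journey from farm to plate.
chain, prev = [], GENESIS
for event in ["harvested at farm", "shipped to port", "received at warehouse"]:
    rec = make_record(prev, event)
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))        # the intact chain verifies
chain[1]["event"] = "rerouted"    # tamper with one record...
print(verify_chain(chain))        # ...and verification now fails
```

Real deployments add distributed consensus, digital signatures from each partner, and links to physical identifiers (serial numbers, RFID tags), but the hash chain above is the core mechanism that makes falsifying records detectable.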

 

 

Can employees change the ethics of tech firms? — from knowledge.wharton.upenn.edu

Excerpts:

“[An] extremely important factor that tech managers now have to consider is how the ethical and moral implications of their choices affect their ability to attract and retain talent.”

“We’re in a space now where these companies are really on the hook,” said the Shorenstein Center’s Ghosh. “Regulation is coming and this whole industry is going to have to figure out a way to socialize the ideas that it has and to make decisions that are a little bit more in the public interest. That’s where this whole conversation is going. I think that they are going to have to start thinking more about what’s in it for the world, and if they don’t, other people are going to step in and decide for them.”

 

Elon Musk receives FCC approval to launch over 7,500 satellites into space — from digitaltrends.com by Kelly Hodgkins

A new satellite-based network would cover the entire globe. Is that a good thing?

Excerpt (emphasis DSC):

The FCC this week unanimously approved SpaceX’s ambitious plan to launch 7,518 satellites into low-Earth orbit. These satellites, along with 4,425 previously approved satellites, will serve as the backbone for the company’s proposed Starlink broadband network. As it does with most of its projects, SpaceX is thinking big with its global broadband network. The company is expected to spend more than $10 billion to build and launch a constellation of satellites that will provide high-speed internet coverage to just about every corner of the planet.

 

To put this deployment in perspective, there are currently only 1,886 active satellites in orbit. These new SpaceX satellites will increase the number of active satellites six-fold in less than a decade.
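The arithmetic behind that six-fold figure, using the counts quoted above (a quick sanity check, not new data):

```python
# Satellite counts as quoted in the article.
newly_approved = 7_518
previously_approved = 4_425
currently_active = 1_886

planned = newly_approved + previously_approved  # total Starlink constellation
multiple = planned / currently_active           # planned additions vs. today's fleet

print(planned)            # 11943
print(round(multiple, 1)) # 6.3
```

So the planned constellation alone is roughly 6.3 times today’s entire active satellite population, which is where the “six-fold” increase comes from.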

 

 

New simulation shows how Elon Musk’s internet satellite network might work — from digitaltrends.com by Luke Dormehl

Excerpt:

From Tesla to Hyperloop to plans to colonize Mars, it’s fair to say that Elon Musk thinks big. Among his many visionary ideas is the dream of building a space internet. Called Starlink, Musk’s ambition is to create a network for conveying a significant portion of internet traffic via thousands of satellites Musk hopes to have in orbit by the mid-2020s. But just how feasible is such a plan? And how do you avoid them crashing into one another?

 



 

From DSC:
Is this even the FCC’s call to make?

On the one hand, such a network could be globally helpful and positive, with plenty of pros. But on the other hand, I wonder: what are the potential drawbacks of this proposal? Will nations across the globe launch their own networks, each consisting of thousands of satellites?

While I love Elon’s big thinking, nations need to weigh in on this one.

 

 


© 2018 | Daniel Christian