YouTube’s algorithm hacked a human vulnerability, setting a dangerous precedent — from which-50.com by Andrew Birmingham

Excerpt (emphasis DSC):

Even as YouTube’s recommendation algorithm was rolled out with great fanfare, the fuse was already burning. A project of Google Brain, designed to optimise engagement, it did something unforeseen — and potentially dangerous.

Today, we are all living with the consequences.

As Zeynep Tufekci, an associate professor at the University of North Carolina, explained to attendees of Hitachi Vantara’s Next 2019 conference in Las Vegas this week, “What the developers did not understand at the time is that YouTube’s algorithm had discovered a human vulnerability. And it was using this [vulnerability] at scale to increase YouTube’s engagement time — without a single engineer thinking, ‘is this what we should be doing?’”

 

The exploitation of this vulnerability — a natural human tendency to engage with edgier ideas — led to YouTube’s users being exposed to increasingly extreme content, irrespective of their preferred areas of interest.

“What they had done was use machine learning to increase watch time. But what the machine learning system had done was to discover a human vulnerability. And that human vulnerability is that things that are slightly edgier are more attractive and more interesting.”

 

From DSC:
Just because we can…

Three threats posed by deepfakes that technology won’t solve — from technologyreview.com by Angela Chen
As deepfakes get better, companies are rushing to develop technology to detect them. But little of their potential harm will be fixed without social and legal solutions.

Excerpt:

3) Problem: Deepfake detection is too late to help victims
With deepfakes, “there’s little real recourse after that video or audio is out,” says Franks, the University of Miami scholar.

Existing laws are inadequate. Laws that punish sharing legitimate private information like medical records don’t apply to false but damaging videos. Laws against impersonation are “oddly limited,” Franks says—they focus on making it illegal to impersonate a doctor or government official. Defamation laws only address false representations that portray the subject negatively, but Franks says we should be worried about deepfakes that falsely portray people in a positive light too.

 

The blinding of justice: Technology, journalism and the law — from thehill.com by Kristian Hammond and Daniel Rodriguez

Excerpts:

The legal profession is in the early stages of a fundamental transformation driven by an entirely new breed of intelligent technologies and it is a perilous place for the profession to be.

If the needs of the law guide the ways in which the new technologies are put into use they can greatly advance the cause of justice. If not, the result may well be profits for those who design and sell the technologies but a legal system that is significantly less just.

We are entering an era of technology that goes well beyond the web. The law is seeing the emergence of systems based on analytics and cognitive computing in areas that until now have been largely immune to the impact of technology. These systems can predict, advise, argue and write and they are entering the world of legal reasoning and decision making.

Unfortunately, while systems built on the foundation of historical data and predictive analytics are powerful, they are also prone to bias and can provide advice that is based on incomplete or imbalanced data.

We are not arguing against the development of such technologies. The key question is who will guide them. The transformation of the field is in its early stages. There is still opportunity to ensure that the best intentions of the law are built into these powerful new systems so that they augment and aid rather than simply replace.

 

From DSC:
This is where we need more collaborations between those who know the law and those who know how to program, as well as other types of technologists.

 

UPS just beat out Amazon and Google to become America’s first nationwide drone airline — from businessinsider.com by Rachel Premack

Key points:

  • The US Department of Transportation said Tuesday it granted its first full Part 135 certification for a drone airline to UPS.
  • UPS currently conducts drone deliveries at a large hospital in Raleigh, North Carolina.
  • It will now be able to operate drones anywhere in the country — an industry first.
  • Another drone operator — Wing, owned by Google’s parent company Alphabet — also has Part 135 certification. But the scope of its operation is limited to Christiansburg, Virginia, about 210 miles southwest of the state capital, Richmond.

From DSC:
Add to that, these delivery bots, drones, pods, and more:

 

From DSC:
I wonder…will we be able to take a quiet walk in the future? That may not be the case if these armies of drones continue to be built — and become a full-fledged trend.

 

Microsoft President: Democracy Is At Stake. Regulate Big Tech — from npr.org by Aarti Shahani

Excerpts:

Regulate us. That’s the unexpected message from one of the country’s leading tech executives. Microsoft President Brad Smith argues that governments need to put some “guardrails” around engineers and the tech titans they serve.

If public leaders don’t, he says, the Internet giants will cannibalize the very fabric of this country.

“We need to work together; we need to work with governments to protect, frankly, something that is far more important than technology: democracy. It was here before us. It needs to be here and healthy after us,” Smith says.

“Almost no technology has gone so entirely unregulated, for so long, as digital technology,” Smith says.

 

Uh-oh: Silicon Valley is building a Chinese-style social credit system — from fastcompany.com by Mike Elgan
In China, scoring citizens’ behavior is official government policy. U.S. companies are increasingly doing something similar, outside the law.

Excerpts (emphasis DSC):

Have you heard about China’s social credit system? It’s a technology-enabled, surveillance-based nationwide program designed to nudge citizens toward better behavior. The ultimate goal is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step,” according to the Chinese government.

In place since 2014, the social credit system is a work in progress that could evolve by next year into a single, nationwide point system for all Chinese citizens, akin to a financial credit score. It aims to punish for transgressions that can include membership in or support for the Falun Gong or Tibetan Buddhism, failure to pay debts, excessive video gaming, criticizing the government, late payments, failing to sweep the sidewalk in front of your store or house, smoking or playing loud music on trains, jaywalking, and other actions deemed illegal or unacceptable by the Chinese government.

IT CAN HAPPEN HERE
Many Westerners are disturbed by what they read about China’s social credit system. But such systems, it turns out, are not unique to China. A parallel system is developing in the United States, in part as the result of Silicon Valley and technology-industry user policies, and in part by surveillance of social media activity by private companies.

Here are some of the elements of America’s growing social credit system.

 

If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy.

 

From DSC:
Who’s to say what gains a citizen points and what subtracts from their score? If one believes a certain thing, is that a plus or a minus? And what might be tied to someone’s score? The ability to obtain food? Medicine/healthcare? Clothing? Social Security payments? Other?

We are giving a huge amount of power to a handful of corporations…trust comes into play…at least for me. Even internally, the big tech companies seem to be struggling with the ethical ramifications of what they’re working on (in a variety of areas).

Is the stage being set for a “Person of Interest” Version 2.0?

 

This state is expected to become the first to collect prosecutor data, with breakdowns by race — from abajournal.com by Debra Cassens Weiss

Excerpt:

Connecticut is expected to become the first state to collect statewide criminal case data from prosecutors broken down by the defendants’ race, sex, ethnicity, age and ZIP code.

The bill requires the state to collect statistics on arrests, diversionary programs, case dispositions, plea agreements, cases going to trial, court fines and fees, and restitution orders.

Connecticut Gov. Ned Lamont said the bill will provide the public with greater insight into prosecutors’ decisions.

[ABA] Council enacts new bar passage standard for law schools — from americanbar.org

Excerpt (emphasis DSC):

On May 17, the Council of the ABA Section of Legal Education and Admissions to the Bar approved a major change in the bar passage standard, known as 316, that would require 75 percent of a law school’s graduates who sit for the bar to pass it within two years. The change takes effect immediately although schools falling short of the standard would have at least two years to come into compliance.

Twice since 2017, the ABA policy-making House of Delegates has voted against the change, as some delegates feared it would have an adverse effect on law schools with significant minority enrollment. But under ABA rules and procedures, the Council, which is recognized by the U.S. Department of Education as the national accreditor of law schools, has the final say on accreditation matters.

 

Also see:

  • ABA’s Tougher Bar Pass Rule for Law Schools Applauded, Derided — from law.com by Karen Sloan
    The American Bar Association’s new standard could increase pressure on jurisdictions like California with high cut scores to lower that threshold. It could also add momentum to the burgeoning movement to overhaul the bar exam itself.

“Either the ABA Council simply ignored the clear empirical evidence that the new bar standard will decrease diversity in the bar, or it passed the new standard with the hope that states, like California, that have unreasonably high bar cut scores will lower those metrics in order to ameliorate the council’s action,” Patton said.

 

At that January meeting, former ABA President Paulette Brown, the first African-American woman to hold that position, called the proposed change “draconian.”

“I know and understand fully that the [ABA] council has the right to ignore what we say,” she said. “That does not absolve us of our responsibility to give them a very clear and strong message that we will not idly stand by while they decimate the diversity in the legal profession.”

7 Things You Should Know About Accessibility Policy — from library.educause.edu

Excerpt:

Websites from the Accessible Technology Initiative (ATI) of the California State University, Penn State, the University of Virginia, and the Web Accessibility Initiative feature rich content related to IT accessibility policies. A California State University memorandum outlines specific responsibilities and reporting guidelines in support of CSU’s Policy on Disability Support and Accommodations. Cornell University developed a multiyear “Disability Access Management Strategic Plan.” Specific examples of accessibility policies focused on electronic communication and information technology can be found at Penn State, Purdue University, Yale University, and the University of Wisconsin–Madison. Having entered into a voluntary agreement with the National Federation of the Blind to improve accessibility, Wichita State University offers substantial accessibility-related resources for its community, including specific standards for ensuring accessibility in face-to-face instruction.

Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.

 

“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”

 

Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

The world is changing. Here’s how companies must adapt. — from weforum.org by Joe Kaeser, President and Chief Executive Officer, Siemens AG

Excerpts (emphasis DSC):

Although we have only seen the beginning, one thing is already clear: the Fourth Industrial Revolution is the greatest transformation human civilization has ever known. As far-reaching as the previous industrial revolutions were, they never set free such enormous transformative power.

The Fourth Industrial Revolution is transforming practically every human activity...its scope, speed and reach are unprecedented.

Enormous power (Insert from DSC: What I was trying to get at here) entails enormous risk. Yes, the stakes are high. 

 

“And make no mistake about it: we are now writing the code that will shape our collective future.”

Joe Kaeser, President and Chief Executive Officer, Siemens AG

Contrary to Milton Friedman’s maxim, the business of business should not just be business. Shareholder value alone should not be the yardstick. Instead, we should make stakeholder value, or better yet, social value, the benchmark for a company’s performance.

Today, stakeholders…rightfully expect companies to assume greater social responsibility, for example, by protecting the climate, fighting for social justice, aiding refugees, and training and educating workers. The business of business should be to create value for society.

This seamless integration of the virtual and the physical worlds in so-called cyber-physical systems – that is the giant leap we see today. It eclipses everything that has happened in industry so far. As in previous industrial revolutions but on a much larger scale, the Fourth Industrial Revolution will eliminate millions of jobs and create millions of new jobs.

 

“…because the Fourth Industrial Revolution runs on knowledge, we need a concurrent revolution in training and education.

If the workforce doesn’t keep up with advances in knowledge throughout their lives, how will the millions of new jobs be filled?” 

Joe Kaeser, President and Chief Executive Officer, Siemens AG

From DSC:
At least three critically important things jump out at me here:

  1. We are quickly approaching a time when people will need to be able to reinvent themselves quickly and cost-effectively, especially those with families and who are working in their (still existing) jobs. (Or have we already entered this period of time…?)
  2. There is a need to help people identify which jobs are safe bets to reinvent themselves into — at least for the next 5-10 years.
  3. Citizens across the globe — and their relevant legislatures, governments, and law schools — need to help close the gap between what emerging technologies can do and whether those technologies should even be rolled out, and if so, how and with which features.

What freedoms and rights should individuals have in the digital age?

Joe Kaeser, President and Chief Executive Officer, Siemens AG

5 influencers predict AI’s impact on business in 2019 — from martechadvisor.com by Christine Crandell

Excerpt:

With Artificial Intelligence (AI) already proving its worth to adopters, it’s not surprising that an increasing number of companies will implement and leverage AI in 2019. Now, it’s no longer a question of whether AI will take off. Instead, it’s a question of which companies will keep up. Here are five predictions from five influencers on the impact AI will have on businesses in 2019, writes Christine Crandell, President, New Business Strategies.

Should we be worried about computerized facial recognition? — from newyorker.com by David Owen
The technology could revolutionize policing, medicine, even agriculture—but its applications can easily be weaponized.

 

Facial-recognition technology is advancing faster than the people who worry about it have been able to think of ways to manage it. Indeed, in any number of fields the gap between what scientists are up to and what nonscientists understand about it is almost certainly greater now than it has been at any time since the Manhattan Project. 

 

From DSC:
This is why law schools, legislatures, and the federal government need to become much more responsive to emerging technologies. The pace of technological change has accelerated. But have the other important institutions of our society adapted to this new pace of change?

Andrew Ng sees an eternal springtime for AI — from zdnet.com by Tiernan Ray
Former Google Brain leader and Baidu chief scientist Andrew Ng lays out the steps companies should take to succeed with artificial intelligence, and explains why there’s unlikely to be another “AI winter” like in times past.

Google Lens now recognizes over 1 billion products — from venturebeat.com by Kyle Wiggers with thanks to Marie Conway for her tweet on this

Excerpt:

Google Lens, Google’s AI-powered analysis tool, can now recognize over 1 billion products from Google’s retail and price comparison portal, Google Shopping. That’s four times the number of objects Lens covered in October 2017, when it made its debut.

Aparna Chennapragada, vice president of Google Lens and augmented reality at Google, revealed the tidbit in a retrospective blog post about Google Lens’ milestones.

 

Amazon Customer Receives 1,700 Audio Files Of A Stranger Who Used Alexa — from npr.org by Sasha Ingber

Excerpt:

When an Amazon customer in Germany contacted the company to review his archived data, he wasn’t expecting to receive recordings of a stranger speaking in the privacy of a home.

The man requested to review his data in August under a European Union data protection law, according to a German trade magazine called c’t. Amazon sent him a download link to tracked searches on the website — and 1,700 audio recordings by Alexa that were generated by another person.

“I was very surprised about that because I don’t use Amazon Alexa, let alone have an Alexa-enabled device,” the customer, who was not named, told the magazine. “So I randomly listened to some of these audio files and could not recognize any of the voices.”

The Top 20 Education Next Articles of 2018 — from educationnext.org

Excerpt:

Every December, Education Next releases a list of the most popular articles we published over the course of the year based on readership.

The article that generated the most interest this year was one that looked at the policy of inclusion, or mainstreaming, in special education. A response to that article was our third most popular article of the year.

Some other popular articles were studies finding that teachers’ impact on non-cognitive skills is 10 times more predictive of students’ longer-term success than teachers’ impact on test scores; an analysis of the effectiveness of instructional coaching for teachers instead of regular professional development; and a look at whether teacher preparation programs can be evaluated based on the learning gains of their graduates’ students.

Other articles collected data on public support for higher teacher pay and greater school spending, the decline in private school attendance by middle school families, and whether states are lowering their proficiency standards.

Here’s the list of 2018’s Top 20 articles…

AI Now Law and Policy Reading List — from medium.com by the AI Now Institute

Excerpt:

Data-driven technologies are widely used in society to make decisions that affect many critical aspects of our lives, from health, education, employment, and criminal justice to economic, social and political norms. Their varied applications, uses, and consequences raise a number of unique and complex legal and policy concerns. As a result, it can be hard to figure out not only how these systems work but what to do about them.

As a starting point, AI Now offers this Law and Policy Reading List tailored for those interested in learning about key concepts, debates, and leading analysis on law and policy issues related to artificial intelligence and other emerging data-driven technologies.

 

Uber and Lyft drivers’ median hourly wage is just $3.37, report finds — from theguardian.com by Sam Levin
Majority of drivers make less than minimum wage and many end up losing money, according to study published by MIT

Excerpt (emphasis DSC):

Uber and Lyft drivers in the US make a median profit of $3.37 per hour before taxes, according to a new report that suggests a majority of ride-share workers make below minimum wage and that many actually lose money.

Researchers did an analysis of vehicle cost data and a survey of more than 1,100 drivers for the ride-hailing companies for the paper published by the Massachusetts Institute of Technology’s Center for Energy and Environmental Policy Research. The report – which factored in insurance, maintenance, repairs, fuel and other costs – found that 30% of drivers are losing money on the job and that 74% earn less than the minimum wage in their states.

The findings have raised fresh concerns about labor standards in the booming sharing economy as companies such as Uber and Lyft continue to face scrutiny over their treatment of drivers, who are classified as independent contractors and have few rights or protections.

“This business model is not currently sustainable,” said Stephen Zoepf, executive director of the Center for Automotive Research at Stanford University and co-author of the paper. “The companies are losing money. The businesses are being subsidized by [venture capital] money … And the drivers are essentially subsidizing it by working for very low wages.”

From DSC:
I don’t know enough about this to offer much feedback or insight yet. But even though it’s early — and I’m not myself a driver for Uber or Lyft — this article prompts me to put this type of thing on my radar.

That is, will the business models that arise from such a sharing economy benefit only a handful of owners and upper-level managers, or will they benefit the majority of the people who work for these companies? I’m very skeptical in these early stages, as there likely aren’t medical or dental benefits, retirement contributions, etc. being offered to workers at these types of companies. It depends upon the particular business model(s) and/or organization(s) being considered, but I think this is an area worth many of us watching.

Also see:

The Economics of Ride-Hailing: Driver Revenue, Expenses and Taxes — from ceepr.mit.edu / MIT Center for Energy and Environmental Policy Research by Stephen Zoepf, Stella Chen, Paa Adu, and Gonzalo Pozo

February 2018

We perform a detailed analysis of Uber and Lyft ride-hailing driver economics by pairing results from a survey of over 1100 drivers with detailed vehicle cost information. Results show that per hour worked, median profit from driving is $3.37/hour before taxes, and 74% of drivers earn less than the minimum wage in their state. 30% of drivers are actually losing money once vehicle expenses are included. On a per-mile basis, median gross driver revenue is $0.59/mile but vehicle operating expenses reduce real driver profit to a median of $0.29/mile. For tax purposes the $0.54/mile standard mileage deduction in 2016 means that nearly half of drivers can declare a loss on their taxes. If drivers are fully able to capitalize on these losses for tax purposes, 73.5% of an estimated U.S. market $4.8B in annual ride-hailing driver profit is untaxed.
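
Since the abstract’s figures invite a quick back-of-the-envelope check, here is a minimal sketch of its per-mile arithmetic. The dollar figures come from the abstract itself; the implied $0.30/mile operating cost and the simplified tax logic are my own reading, not the paper’s model:

```python
# Per-mile arithmetic from the CEEPR abstract (simplified illustration).
GROSS_REVENUE_PER_MILE = 0.59      # median gross driver revenue, per abstract
OPERATING_COST_PER_MILE = 0.30     # implied: $0.59 gross minus $0.29 real profit
STANDARD_MILEAGE_DEDUCTION = 0.54  # 2016 standard mileage deduction, per abstract

# Real economics: what the driver actually keeps per mile.
real_profit_per_mile = GROSS_REVENUE_PER_MILE - OPERATING_COST_PER_MILE
print(f"Real profit per mile: ${real_profit_per_mile:.2f}")      # $0.29

# Tax economics: drivers may deduct $0.54/mile instead of actual costs,
# so taxable profit per mile can be far smaller than real profit.
taxable_profit_per_mile = GROSS_REVENUE_PER_MILE - STANDARD_MILEAGE_DEDUCTION
print(f"Taxable profit per mile: ${taxable_profit_per_mile:.2f}")  # $0.05

# Any driver whose gross revenue per mile falls below $0.54 can declare a
# loss for tax purposes — which the abstract says covers nearly half of drivers.
```

This gap between the $0.29 real profit and the $0.05 taxable profit is what produces the abstract’s striking claim that most ride-hailing driver profit goes untaxed.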

Keywords: Transportation, Gig Economy, Cost-Benefit Analysis, Tax policy, Labor Center
Full Paper | Research Brief

 

——-

Addendum on 3/7/18:

The ride-hailing wage war continues

How much do Lyft and Uber drivers really make? After reporting in a study that their median take-home pay was just $3.37/hour—and then getting called out by Uber’s CEO—researchers have significantly revised their findings.

Closer to a living wage: Lead author Stephen Zoepf of Stanford University released a statement on Twitter saying that, using two different methods to recalculate the hourly wage, the team finds either $8.55 or $10 per hour, after expenses. Zoepf’s team will be doing a larger revision of the paper over the next few weeks.

Still low-balling it?: Uber and Lyft are adamant that even the new numbers underestimate what drivers are actually paid. “While the revised results are not as inaccurate as the original findings, driver earnings are still understated,” says Lyft’s director of communications Adrian Durbin.

The truth is out there: Depending on who’s doing the math, estimates range from $8.55 (Zoepf, et al.) up to over $21 an hour (Uber). In other words, we’re nowhere near a consensus on how much drivers in the gig-economy make.

——-

 


© 2019 | Daniel Christian