10 things we should all demand from Big Tech right now — from vox.com by Sigal Samuel
We need an algorithmic bill of rights. AI experts helped us write one.


Excerpts:

  1. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
  2. Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
  3. Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
  4. Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes. (Inserted comment from DSC: Is this even possible? I hope so, but I have my doubts especially given the enormous lack of diversity within the large tech companies.)
  5. Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
  6. Portability: We have the right to easily transfer all our data from one provider to another.
  7. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
  8. Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
  9. Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
  10. Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.

 

This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?

 

 

San Francisco becomes first city to bar police from using facial recognition — from cnet.com by Laura Hautala
It won’t be the last city to consider a similar law.


Excerpt:

The city of San Francisco approved an ordinance on Tuesday [5/14/19] barring the police department and other city agencies from using facial recognition technology on residents. It’s the first such ban of the technology in the country.

The ordinance, which passed by a vote of 8 to 1, also creates a process for the police department to disclose what surveillance technology they use, such as license plate readers and cell-site simulators that can track residents’ movements over time. But it singles out facial recognition as too harmful to residents’ civil liberties to even consider using.

“Facial surveillance technology is a huge legal and civil liberties risk now due to its significant error rate, and it will be worse when it becomes perfectly accurate mass surveillance tracking us as we move about our daily lives,” said Brian Hofer, the executive director of privacy advocacy group Secure Justice.

For example, Microsoft asked the federal government in July to regulate facial recognition technology before it gets more widespread, and said it declined to sell the technology to law enforcement. As it is, the technology is on track to become pervasive in airports and shopping centers, and other tech companies, like Amazon, are selling the technology to police departments.

 


From DSC:
I don’t like this at all. If this foot gets in the door, vendor after vendor will launch their own hordes of drones. In the future, where will we go if we want some peace and quiet? Will the air be filled with swarms of noisy drones? Will we be able to clearly see the sun? An exaggeration? Maybe…maybe not.

But now what? What recourse do citizens have? Readers of this blog know that I’m generally pro-technology. But the folks — especially the youth — working within the FAANG companies (and the like) need to do a far better job of asking, “Just because we can do something, should we do it?”

As I’ve said before, we’ve turned over the keys to the $137,000 Maserati to drivers who are just getting out of driving school. Then we wonder, “How did we get to this place?”

 

If you owned this $137,000+ car, would you turn its keys over to a 16-to-25-year-old?!

 

As another example, just because we can…


 

…doesn’t mean we should.

 


 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets. — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.
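For readers curious about the mechanics, the kind of face matching the Times describes can be reproduced with a few calls to Amazon Rekognition via boto3. This is a minimal sketch, not the Times’ actual pipeline; the bucket and object names are hypothetical, and the similarity-thresholding step is pulled out as plain Python so it can be seen (and tested) on its own:

```python
# Sketch of face matching with Amazon Rekognition via boto3.
# Bucket/object names are hypothetical; AWS credentials are required
# for the real API call, so that part is kept separate from the
# pure-Python thresholding logic.
try:
    import boto3  # only needed for the actual API call
except ImportError:
    boto3 = None

def strong_matches(response, threshold=80.0):
    """Filter a compare_faces-style response down to matches at or
    above a similarity threshold (Rekognition uses a 0-100 scale)."""
    return [
        (match["Similarity"], match["Face"]["BoundingBox"])
        for match in response.get("FaceMatches", [])
        if match["Similarity"] >= threshold
    ]

def compare_faces_s3(bucket, source_key, target_key):
    """Ask Rekognition to compare a known head shot (source) against
    a camera frame (target), both stored in S3."""
    client = boto3.client("rekognition")
    return client.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": source_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": target_key}},
        SimilarityThreshold=70.0,
    )
```

At roughly a tenth of a cent per image, a day of footage costs only tens of dollars, which is exactly why the Times’ $60 figure is so striking.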

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies, such as China’s? Again I ask: What kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via technologies installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 


Example articles from the Privacy Project:

  • James Bennet: Do You Know What You’ve Given Up?
  • A. G. Sulzberger: How The Times Thinks About Privacy
  • Samantha Irby: I Don’t Care. I Love My Phone.
  • Tim Wu: How Capitalism Betrayed Privacy

 

 

Microsoft rolls out healthcare bot: How it will change healthcare industry — from yourtechdiet.com by Brian Curtis

Excerpt:

AI and the Healthcare Industry
This technology is evidently a game changer in the healthcare industry. According to reports from Frost & Sullivan, the AI market for healthcare is likely to experience a CAGR of 40% through 2021, and has the potential to change industry outcomes by 30-40%, while cutting treatment costs in half.

In the words of Satya Nadella, “AI is the runtime that is going to shape all of what we do going forward in terms of the applications as well as the platform advances”.

Here are a few ways Microsoft’s Healthcare Bot will shape the Healthcare Industry…
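The Frost & Sullivan CAGR figure quoted above is easier to feel with a quick compounding sketch (illustrative numbers only, not their model):

```python
def compound(start_value, cagr, years):
    """Grow start_value at a constant compound annual growth rate (CAGR)."""
    return start_value * (1 + cagr) ** years

# Illustrative only: at a 40% CAGR a market roughly doubles
# in two years and grows ~5.4x over five.
two_year = compound(1.0, 0.40, 2)   # ≈ 1.96
five_year = compound(1.0, 0.40, 5)  # ≈ 5.38
print(round(two_year, 2), round(five_year, 2))
```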

 

Also see:

  • Why AI will make healthcare personal — from weforum.org by Peter Schwartz
    Excerpt:
    Digital assistants to provide a 24/7 helping hand
    The digital assistants of the future will be full-time healthcare companions, able to monitor a patient’s condition, transmit results to healthcare providers, and arrange virtual and face-to-face appointments. They will help manage the frequency and dosage of medication, and provide reliable medical advice around the clock. They will remind doctors of patients’ details, ranging from previous illnesses to past drug reactions. And they will assist older people to access the care they need as they age, including hospice care, and help to mitigate the fear and loneliness many elderly people feel.

 

  • Introducing New Alexa Healthcare Skills — from developer.amazon.com by Rachel Jiang
    Excerpts:
    The new healthcare skills that launched today are:
      • Express Scripts (a leading Pharmacy Services Organization)
      • Cigna Health Today (by Cigna, the global health service company)
      • My Children’s Enhanced Recovery After Surgery (ERAS) (by Boston Children’s Hospital, a leading children’s hospital)
      • Swedish Health Connect (by Providence St. Joseph Health, a healthcare system with 51 hospitals across 7 states and 829 clinics)
      • Atrium Health (a healthcare system with more than 40 hospitals and 900 care locations throughout North and South Carolina and Georgia)
      • Livongo (a leading consumer digital health company that creates new and different experiences for people with chronic conditions)

Voice as the Next Frontier for Conveniently Accessing Healthcare Services

 

  • Got health care skills? Big Tech wants to hire you — from linkedin.com Jaimy Lee
    Excerpt:
    As tech giants like Amazon, Apple and Google place bigger and bigger bets on the U.S. health care system, it should come as no surprise that the rate at which they are hiring workers with health care skills is booming. We took a deep dive into the big tech companies on this year’s LinkedIn Top Companies list in the U.S., uncovering the most popular health care skills among their workers — and what that says about the future of health care in America.
 

Check out the top 10:

1) Alphabet (Google); Internet
2) Facebook; Internet
3) Amazon; Internet
4) Salesforce; Internet
5) Deloitte; Management Consulting
6) Uber; Internet
7) Apple; Consumer Electronics
8) Airbnb; Internet
9) Oracle; Information Technology & Services
10) Dell Technologies; Information Technology & Services

 

Is Thomas Frey right? “…by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet.”

From a fairly recent e-newsletter from edsurge.com — though I don’t recall the exact date (emphasis DSC):

New England is home to some of the most famous universities in the world. But the region has also become ground zero for the demographic shifts that promise to disrupt higher education.

This week saw two developments that fit the narrative. On Monday, Southern Vermont College announced that it would shut its doors, becoming the latest small rural private college to do so. Later that same day, the University of Massachusetts said it would start a new online college aimed at a national audience, noting that it expects campus enrollments to erode as the number of traditional college-age students declines in the coming years.

“Make no mistake—this is an existential threat to entire sectors of higher education,” said UMass president Marty Meehan in announcing the online effort.

The approach seems to parallel the U.S. retail sector, where, as a New York Times piece outlines this week, stores like Target and WalMart have thrived by building online strategies aimed at competing with Amazon, while stores like Gap and Payless, which did little to move online, are closing stores. Of course, college is not like any other product or service, and plenty of campuses are touting the richness of the experience that students get by actually coming to a campus. And it’s not clear how many colleges can grow online to a scale that makes their investments pay off.

 

“It’s predicted that over the next several years, four to five major national players with strong regional footholds will be established. We intend to be one of them.”

University of Massachusetts President Marty Meehan

 

 

From DSC:
That last quote from UMass President Marty Meehan made me reflect on the idea of one or more enormous entities providing “higher education” in the future. I wonder whether we’ll instead have more lifelong learning providers and platforms — with the idea of a 60-year curriculum being an interesting one that may come to fruition.

Long have I predicted that such an enormous entity would come to pass. Back in 2008, I named it the Forthcoming Walmart of Education. But then, as the years went by, I got bummed out about some things that Walmart was doing, and re-branded it the Forthcoming Amazon.com of Higher Education. We’ll see how long that updated title lasts — but you get the point. In fact, the point aligns very nicely with what futurist Thomas Frey has been predicting for years as well:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider. (source)

I realize that education doesn’t always scale well, but how people learn in the future may differ from how we did things in the past. Communities of practice come to mind, as do new forms of credentialing, cloud-based learner profiles, and the need for highly efficient, cost-effective, and constant opportunities/means to reinvent oneself.


Addendum:

74% of consumers go to Amazon when they’re ready to buy something. That should be keeping retailers up at night. — from cnbc.com

Key points (emphasis DSC)

  • Amazon remains a looming threat for some of the biggest retailers in the country — like Walmart, Target and Macy’s.
  • When consumers are ready to buy a specific product, nearly three-quarters of them, or 74 percent, are going straight to Amazon to do it, according to a new study by Feedvisor.
  • By the end of this year, Amazon is expected to account for 52.4 percent of the e-commerce market in the U.S., up from 48 percent in 2018.

 

“In New England, there will be between 32,000 and 54,000 fewer college-aged students just seven years from now,” Meehan said. “That means colleges and universities will have too much capacity and not enough demand at a time when the economic model in higher education is already straining under its own weight.” (Marty Meehan at WBUR)

 

 

A prediction for blockchain transformation in higher education  — from blockchain.capitalmarketsciooutlook.com by Michael Mathews

Excerpt:

Ironically, blockchain entered the scene in a very neutral way, while Bitcoin created all the noise, simply because it used an aspect of blockchain. Bitcoin, cyber coins, and/or token concepts will come and go, just as the various forms of web browsers did. However, just as the Internet lives on, so will blockchain. In fact, blockchain may very well become the best of the Internet and IoT merged with the trust factor of the ISBN/MARC code concept. As history unveils itself, blockchain will stand the test of time and become a form of a future generation of the Internet (i.e., Internet 4.0) without the need for cyber security.

With a positive prediction on blockchain for the future, coupled with lessons learned from the Internet, blockchain will become the single largest influencer on education. I have only gone on record predicting two shifts in technology over a 5-10 year period, and both have now come to pass. This is my third prediction, with the greatest potential for transformation.

 

 

What I did not know until last year was that a neutral technology called blockchain would show up in the history of the world; and at the same time Amazon would start designing blockchain templates to reduce all the processes, allowing educational decisions to become as easy as ordering and receiving Amazon products.

 

 

Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics in their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.

 

 

 

Amazon is pushing facial technology that a study says could be biased — from nytimes.com by Natasha Singer
In new tests, Amazon’s system had more difficulty identifying the gender of female and darker-skinned faces than similar services from IBM and Microsoft.

Excerpt:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

 

 

A landmark ruling gives new power to sue tech giants for privacy harms — from fastcompany.com by Katharine Schwab

Excerpt:

A unanimous ruling by the Illinois Supreme Court says that companies that improperly gather people’s data can be sued for damages even without proof of concrete injuries, opening the door to legal challenges that Facebook, Google, and other businesses have resisted.

 

 

Online curricula helps teachers tackle AI in the classroom — from educationdive.com by Lauren Barack

Dive Brief:

  • Schools may already use some form of artificial intelligence (AI), but hardly any have curricula designed to teach K-12 students how it works and how to use it, wrote EdSurge. However, organizations such as the International Society for Technology in Education (ISTE) are developing their own sets of lessons that teachers can take to their classrooms.
  • Members of “AI for K-12” — an initiative co-sponsored by the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association — wrote in a paper that an AI curriculum should address five basic ideas:
    • Computers use sensors to understand what goes on around them.
    • Computers can learn from data.
    • With this data, computers can create models for reasoning.
    • While computers are smart, it’s hard for them to understand people’s emotions, intentions and natural languages, making interactions less comfortable.
    • AI can be a beneficial tool, but it can also harm society.
  • These kinds of lessons are already at play among groups including the Boys and Girls Club of Western Pennsylvania, which has been using a program from online AI curriculum site ReadyAI. The education company lent its AI-in-a-Box kit, which normally sells for $3,000, to the group so it could teach these concepts.

 

AI curriculum is coming for K-12 at last. What will it include? — from edsurge.com by Danielle Dreilinger

Excerpt:

Artificial intelligence powers Amazon’s recommendations engine, Google Translate and Siri, for example. But few U.S. elementary and secondary schools teach the subject, maybe because there are so few curricula available for students. Members of the “AI for K-12” work group wrote in a recent Association for the Advancement of Artificial Intelligence white paper that “unlike the general subject of computing, when it comes to AI, there is little guidance for teaching at the K-12 level.”

But that’s starting to change. Among other advances, ISTE and AI4All are developing separate curricula with support from General Motors and Google, respectively, according to the white paper. Lead author Dave Touretzky of Carnegie Mellon has developed his own curriculum, Calypso. It’s part of the “AI-in-a-Box” kit, which is being used by more than a dozen community groups and school systems, including Carter’s class.

 

 

 

 

Amazon has 10,000 employees dedicated to Alexa — here are some of the areas they’re working on — from businessinsider.com by Avery Hartmans

Summary (emphasis DSC):

  • Amazon’s vice president of Alexa, Steve Rabuchin, has confirmed that yes, there really are 10,000 Amazon employees working on Alexa and the Echo.
  • Those employees are focused on things like machine learning and making Alexa more knowledgeable.
  • Some employees are working on giving Alexa a personality, too.

 

 

From DSC:
How might this trend impact learning spaces? For example, I am interested in using voice to intuitively “drive” smart classroom control systems:

  • “Alexa, turn on the projector”
  • “Alexa, dim the lights by 50%”
  • “Alexa, open Canvas and launch my Constitutional Law I class”
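A hypothetical sketch of how commands like those above might be dispatched. This assumes an imagined `ClassroomControls` interface — not any real Alexa SDK or classroom control-system API — and uses simple keyword matching where a real system would use Alexa’s intent and slot model:

```python
from dataclasses import dataclass, field

@dataclass
class ClassroomControls:
    """Imagined classroom state; a real deployment would bridge Alexa
    intents to AV hardware, lighting, and the LMS."""
    projector_on: bool = False
    light_level: int = 100  # percent
    open_apps: list = field(default_factory=list)

    def handle(self, utterance: str) -> str:
        """Tiny keyword dispatcher for the example commands."""
        text = utterance.lower()
        if "turn on the projector" in text:
            self.projector_on = True
            return "Projector is on."
        if "dim the lights by" in text:
            # Pull the percentage out of the utterance.
            percent = int("".join(ch for ch in text if ch.isdigit()))
            self.light_level = max(0, self.light_level - percent)
            return f"Lights dimmed to {self.light_level} percent."
        if "open canvas" in text and "launch my" in text:
            course = text.split("launch my", 1)[1].replace("class", "").strip()
            self.open_apps.append(("canvas", course))
            return f"Opening Canvas and launching {course}."
        return "Sorry, I don't know that command."
```

The point of the sketch is the shape of the problem: each utterance maps to a small state change in the room, which is exactly the kind of plumbing a smart-classroom skill would have to provide.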

 

 

 

Big tech may look troubled, but it’s just getting started — from nytimes.com by David Streitfeld

Excerpt:

SAN JOSE, Calif. — Silicon Valley ended 2018 somewhere it had never been: embattled.

Lawmakers across the political spectrum say Big Tech, for so long the exalted embodiment of American genius, has too much power. Once seen as a force for making our lives better and our brains smarter, tech is now accused of inflaming, radicalizing, dumbing down and squeezing the masses. Tech company stocks have been pummeled from their highs. Regulation looms. Even tech executives are calling for it.

The expansion underlines the dizzying truth of Big Tech: It is barely getting started.

 

“For all intents and purposes, we’re only 35 years into a 75- or 80-year process of moving from analog to digital,” said Tim Bajarin, a longtime tech consultant to companies including Apple, IBM and Microsoft. “The image of Silicon Valley as Nirvana has certainly taken a hit, but the reality is that we the consumers are constantly voting for them.”

 

Big Tech needs to be regulated, many are beginning to argue, and yet there are worries about giving that power to the government.

Which leaves regulation up to the companies themselves, always a dubious proposition.

 

 

 


© 2019 | Daniel Christian