Why AI is a threat to democracy — and what we can do to stop it — from technologyreview.com by Karen Hao and Amy Webb

Excerpt:

Universities must create space in their programs for hybrid degrees. They should incentivize CS students to study comparative literature, world religions, microeconomics, cultural anthropology and similar courses in other departments. They should champion dual degree programs in computer science and international relations, theology, political science, philosophy, public health, education and the like. Ethics should not be taught as a stand-alone class, something to simply check off a list. Schools must incentivize even tenured professors to weave complicated discussions of bias, risk, philosophy, religion, gender, and ethics into their courses.

One of my biggest recommendations is the formation of GAIA, what I call the Global Alliance on Intelligence Augmentation. At the moment people around the world have very different attitudes and approaches when it comes to data collection and sharing, what can and should be automated, and what a future with more generally intelligent systems might look like. So I think we should create some kind of central organization that can develop global norms and standards, some kind of guardrails to imbue not just American or Chinese ideals inside AI systems, but worldviews that are much more representative of everybody.

Most of all, we have to be willing to think about this much longer-term, not just five years from now. We need to stop saying, “Well, we can’t predict the future, so let’s not worry about it right now.” It’s true, we can’t predict the future. But we can certainly do a better job of planning for it.


Police across the US are training crime-predicting AIs on falsified data — from technologyreview.com by Karen Hao
A new report shows how supposedly objective systems can perpetuate corrupt policing practices.

Excerpts (emphasis DSC):

Despite the disturbing findings, the city entered a secret partnership only a year later with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.


But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a research center that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions it studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.

“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study.

 

How AI is enhancing wearables — from techopedia.com by Claudio Buttice
Takeaway: Wearable devices have been helping people for years now, but the addition of AI to these wearables is giving them capabilities beyond anything seen before.

Excerpt:

Restoring Lost Sight and Hearing – Is That Really Possible?
People with sight or hearing loss must face a lot of challenges every day to perform many basic activities. From crossing the street to ordering food on the phone, even the simplest chore can quickly become a struggle. Things may change for those struggling with sight or hearing loss, however, as some companies have started developing machine learning-based systems to help the blind and visually impaired find their way across cities, and the deaf and hearing impaired enjoy some good music.

German AI company AiServe combined computer vision and wearable hardware (camera, microphone and earphones) with AI and location services to design a system that is able to acquire data over time to help people navigate through neighborhoods and city blocks. Sort of like a car navigation system, but in a much more adaptable form which can “learn how to walk like a human” by identifying all the visual cues needed to avoid common obstacles such as light posts, curbs, benches and parked cars.

 

From DSC:
So once again we see the pluses and minuses of a given emerging technology. In fact, most technologies can be used for good or for ill. But I’m left with asking the following questions:

  • As citizens, what do we do if we don’t like a direction that’s being taken on a given technology or on a given set of technologies? Or on a particular feature, use, process, or development involved with an emerging technology?

One other reflection here…it will be really interesting to see what happens as some of these emerging technologies are combined…again, for good or for ill. 

The question is:
How can we weigh in?

 

Also relevant/see:

AI Now Report 2018 — from ainowinstitute.org, December 2018

Excerpt:

University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Furthermore, it is long overdue for technology companies to directly address the cultures of exclusion and discrimination in the workplace. The lack of diversity and ongoing tactics of harassment, exclusion, and unequal pay are not only deeply harmful to employees in these companies but also impacts the AI products they release, producing tools that perpetuate bias and discrimination.

The current structure within which AI development and deployment occurs works against meaningfully addressing these pressing issues. Those in a position to profit are incentivized to accelerate the development and application of systems without taking the time to build diverse teams, create safety guardrails, or test for disparate impacts. Those most exposed to harm from these systems commonly lack the financial means and access to accountability mechanisms that would allow for redress or legal appeals. This is why we are arguing for greater funding for public litigation, labor organizing, and community participation as more AI and algorithmic systems shift the balance of power across many institutions and workplaces.

 

Also relevant/see:

 

 

Google and Microsoft warn that AI may do dumb things — from wired.com by Tom Simonite

Excerpt:

Alphabet likes to position itself as a leader in AI research, but it was six months behind rival Microsoft in warning investors about the technology’s ethical risks. The AI disclosure in Google’s latest filing reads like a trimmed down version of much fuller language Microsoft put in its most recent annual SEC report, filed last August:

“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

 

Chinese company leaves Muslim-tracking facial recognition database exposed online — from zdnet.com by Catalin Cimpanu
Researcher finds one of the databases used to track Uyghur Muslim population in Xinjiang.

Excerpt:

One of the facial recognition databases that the Chinese government is using to track the Uyghur Muslim population in the Xinjiang region has been left open on the internet for months, a Dutch security researcher told ZDNet.

The database belongs to a Chinese company named SenseNets, which according to its website provides video-based crowd analysis and facial recognition technology.

The user data wasn’t just benign usernames, but highly detailed and highly sensitive information that someone would usually find on an ID card, Gevers said. The researcher saw user profiles with information such as names, ID card numbers, ID card issue date, ID card expiration date, sex, nationality, home addresses, dates of birth, photos, and employer.

Some of the descriptive names associated with the “trackers” contained terms such as “mosque,” “hotel,” “police station,” “internet cafe,” “restaurant,” and other places where public cameras would normally be found.

 

From DSC:
Readers of this blog will know that I’m generally pro-technology. But especially focusing in on that last article, to me, privacy is key here. Which group of people, from which nation, is next? Will Country A next be tracking Christians? Will Country B be tracking people of a given sexual orientation? Will Country C be tracking people with some other characteristic?

Where does it end? Who gets to decide? What will be the costs of being tracked or being a person with whatever certain characteristic one’s government is tracking? What forums are there for combating technologies or features of technologies that we don’t like or want?

We need forums/channels for raising awareness and voting on these emerging technologies. We need informed legislators, senators, lawyers, citizens…we need new laws here…asap.


Learning and Student Success: Presenting the Results of the 2019 Key Issues Survey — from er.educause.edu by Malcolm Brown

Excerpts:

Here are some results that caught (Malcolm’s) eye, with a few speculations tossed in:

  • The issue of faculty development reclaimed the top spot.
  • Academic transformation, previously a consistent top-three finisher, took a tumble in 2019 down to 10th.
  • After falling to 16th last year, the issue of competency-based education and new methods of learning assessment jumped up to 6th for 2019.
  • The issues of accessibility and universal design for learning (UDL) and of digital and information literacy held more or less steady.
  • Online and blended learning has rebounded significantly.

 

 

 

Why Facebook’s banned “Research” app was so invasive — from wired.com by Louise Matsakis

Excerpts:

Facebook reportedly paid users between the ages of 13 and 35 $20 a month to download the app through beta-testing companies like Applause, BetaBound, and uTest.


Apple typically doesn’t allow app developers to go around the App Store, but its enterprise program is one exception. It’s what allows companies to create custom apps not meant to be downloaded publicly, like an iPad app for signing guests into a corporate office. But Facebook used this program for a consumer research app, which Apple says violates its rules. “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple,” a spokesperson said in a statement. “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” Facebook didn’t respond to a request for comment.

Facebook needed to bypass Apple’s usual policies because its Research app is particularly invasive. First, it requires users to install what is known as a “root certificate.” This lets Facebook look at much of your browsing history and other network data, even if it’s encrypted. The certificate is like a shape-shifting passport—with it, Facebook can pretend to be almost anyone it wants.

To use a nondigital analogy, Facebook not only intercepted every letter participants sent and received, it also had the ability to open and read them. All for $20 a month!

Facebook’s latest privacy scandal is a good reminder to be wary of mobile apps that aren’t available for download in official app stores. It’s easy to overlook how much of your information might be collected, or to accidentally install a malicious version of Fortnite, for instance. VPNs can be great privacy tools, but many free ones sell their users’ data in order to make money. Before downloading anything, especially an app that promises to earn you some extra cash, it’s always worth taking another look at the risks involved.

 

2019 Top 10 IT Issues — from educause.edu

2019 Reveals Focus on “Student Genome”
In 2019, after a decade of preparing, higher education stands on a threshold. A new era of technology is ushering in myriad opportunities to apply data that supports and advances our higher ed mission. This threshold is similar to the one science stood on in the late 20th century: the prospect of employing technology to put genetic information to use meaningfully and ethically. Much in the same way, higher education must first “sequence” the data before we can apply it with any reliability or precision.

Our focus in 2019, then, is to organize, standardize, and safeguard data before applying it to our most pressing priority: student success.

The issues cluster into three themes:

  • Empowered Students: In their drive to improve student outcomes, institutions are increasingly focused on individual students, on their life circumstances, and on their entire academic journey. Leaders are relying on analytics and technology to make progress. Related issues: 2 and 4
  • Trusted Data: This is the work of the Student Genome Project, where the “sequencing” is taking place. Institutions are collecting, securing, integrating, and standardizing data and preparing the institution to use data meaningfully and ethically. Related issues: 1, 3, 5, 6 and 8
  • 21st Century Business Strategies: This is the leadership journey, in which institutions address today’s funding challenges and prepare for tomorrow’s more competitive ecosystem. Technology is now embedded into teaching and learning, research, and business operations and so must be embedded into the institutional strategy and business model. Related issues: 7, 9 and 10


The information below is per Laura Kelley (w/ Page 1 Solutions)


As you know, Apple has shut down Facebook’s ability to distribute internal iOS apps. The shutdown comes following news that Facebook has been using Apple’s program for internal app distribution to track teenage customers for “research.”

Dan Goldstein is the president and owner of Page 1 Solutions, a full-service digital marketing agency. He manages the needs of clients along with the need to ensure protection of their consumers, which has become one of the top concerns from clients over the last year. Goldstein is also a former attorney so he balances the marketing side with the legal side when it comes to protection for both companies and their consumers. He says while this is another blow for Facebook, it speaks volumes for Apple and its concern for consumers,

“Facebook continues to demonstrate that it does not value user privacy. The most disturbing thing about this news is that Facebook knew that its app violated Apple’s terms of service and continued to distribute the app to consumers after it was banned from the App Store. This shows, once again, that Facebook doesn’t value user privacy and goes to great lengths to collect private behavioral data to give it a competitive advantage. The FTC is already investigating Facebook’s privacy policies and practices. As Facebook’s efforts to collect and use private data continue to be exposed, it risks losing market share and may prompt additional governmental investigations and regulation,” Goldstein says.

“One positive that comes out of this story is that Apple seems to be taking a harder line on protecting user privacy than other tech companies. Apple has been making noises about protecting user privacy for several months. This action indicates that it is attempting to follow through on its promises,” Goldstein says.

 

 

Amazon is pushing facial technology that a study says could be biased — from nytimes.com by Natasha Singer
In new tests, Amazon’s system had more difficulty identifying the gender of female and darker-skinned faces than similar services from IBM and Microsoft.

Excerpt:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.
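The subgroup comparison at the heart of such a study can be sketched in a few lines of Python. This is not the MIT Media Lab's actual methodology or data; the records below are hypothetical, purely to illustrate the audit pattern of comparing a classifier's error rate across demographic groups:

```python
# Minimal sketch of a bias audit: compare a classifier's error rate
# across subgroups. All data here is hypothetical, for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns each group's fraction of misclassified records."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records for a gender classifier.
results = [
    ("lighter-skinned", "female", "female"),
    ("lighter-skinned", "male", "male"),
    ("darker-skinned", "female", "male"),    # a misclassification
    ("darker-skinned", "female", "female"),
]
rates = error_rates_by_group(results)
# rates shows a disparity: 0.0 for one group, 0.5 for the other
```

A real audit would of course use a large, carefully labeled benchmark; the point is simply that "bias" here is something measurable, a gap in error rates between groups.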

 

 

A landmark ruling gives new power to sue tech giants for privacy harms — from fastcompany.com by Katharine Schwab

Excerpt:

A unanimous ruling by the Illinois Supreme Court says that companies that improperly gather people’s data can be sued for damages even without proof of concrete injuries, opening the door to legal challenges that Facebook, Google, and other businesses have resisted.

 

 

Apple’s Tim Cook says it’s time for America to get serious about data privacy — from barrons.com by David Marino-Nachison

Excerpt:

Apple CEO Tim Cook, who has said the U.S. should pass a federal data-privacy law, on Thursday said legislation should also give consumers more information about who buys and sells their data.

In a column published by Time, Cook said the Federal Trade Commission should set up a “clearinghouse” that requires data brokers to register and lets consumers “track the transactions that have bundled and sold their data from place to place, and giving users the power to delete their data on demand, freely, easily and…

 

 

Training the workforce of the future: Education in America will need to adapt to prepare students for the next generation of jobs – including ‘data trash engineer’ and ‘head of machine personality design’ — from dailymail.co.uk by Valerie Bauman

Excerpts:

  • Careers that used to safely dodge the high-tech bullet will soon require at least a basic grasp of things like web design, computer programming and robotics – presenting a new challenge for colleges and universities
  • A projected 85 percent of the jobs that today’s college students will have in 2030 haven’t been invented yet
  • The coming high-tech changes are expected to touch a wider variety of career paths than ever before
  • Many experts say American universities aren’t ready for the change because the high-tech skills most workers will need are currently focused just on people specializing in science, technology, engineering and math


 

 

5 influencers predict AI’s impact on business in 2019 — from martechadvisor.com by Christine Crandell

Excerpt:

With Artificial Intelligence (AI) already proving its worth to adopters, it’s not surprising that an increasing number of companies will implement and leverage AI in 2019. Now, it’s no longer a question of whether AI will take off. Instead, it’s a question of which companies will keep up. Here are five predictions from five influencers on the impact AI will have on businesses in 2019, writes Christine Crandell, President, New Business Strategies.

 

 

Should we be worried about computerized facial recognition? — from newyorker.com by David Owen
The technology could revolutionize policing, medicine, even agriculture—but its applications can easily be weaponized.

 

Facial-recognition technology is advancing faster than the people who worry about it have been able to think of ways to manage it. Indeed, in any number of fields the gap between what scientists are up to and what nonscientists understand about it is almost certainly greater now than it has been at any time since the Manhattan Project. 

 

From DSC:
This is why law schools, legislatures, and the federal government need to become much more responsive to emerging technologies. The pace of technological change has changed. But have other important institutions of our society adapted to this new pace of change?

 

 

Andrew Ng sees an eternal springtime for AI — from zdnet.com by Tiernan Ray
Former Google Brain leader and Baidu chief scientist Andrew Ng lays out the steps companies should take to succeed with artificial intelligence, and explains why there’s unlikely to be another “AI winter” like in times past.

 

 

Google Lens now recognizes over 1 billion products — from venturebeat.com by Kyle Wiggers with thanks to Marie Conway for her tweet on this

Excerpt:

Google Lens, Google’s AI-powered analysis tool, can now recognize over 1 billion products from Google’s retail and price comparison portal, Google Shopping. That’s four times the number of objects Lens covered in October 2017, when it made its debut.

Aparna Chennapragada, vice president of Google Lens and augmented reality at Google, revealed the tidbit in a retrospective blog post about Google Lens’ milestones.

 

Amazon Customer Receives 1,700 Audio Files Of A Stranger Who Used Alexa — from npr.org by Sasha Ingber

Excerpt:

When an Amazon customer in Germany contacted the company to review his archived data, he wasn’t expecting to receive recordings of a stranger speaking in the privacy of a home.

The man requested to review his data in August under a European Union data protection law, according to a German trade magazine called c’t. Amazon sent him a download link to tracked searches on the website — and 1,700 audio recordings by Alexa that were generated by another person.

“I was very surprised about that because I don’t use Amazon Alexa, let alone have an Alexa-enabled device,” the customer, who was not named, told the magazine. “So I randomly listened to some of these audio files and could not recognize any of the voices.”

 

 

Why should anyone believe Facebook anymore? — from wired.com by Fred Vogelstein

Excerpt:

Just since the end of September, Facebook announced the biggest security breach in its history, affecting more than 30 million accounts. Meanwhile, investigations in November revealed that, among other things, the company had hired a Washington firm to spread its own brand of misinformation on other platforms, including borderline anti-Semitic stories about financier George Soros. Just two weeks ago, a cache of internal emails dating back to 2012 revealed that at times Facebook thought a lot more about how to make money off users’ data than about how to protect it.

Now, according to a New York Times investigation into Facebook’s data practices published Tuesday, long after Facebook said it had taken steps to protect user data from the kinds of leakages that made Cambridge Analytica possible, the company continued to sustain special, undisclosed data-sharing arrangements with more than 150 companies—some into this year. Unlike with Cambridge Analytica, the Times says, Facebook provided access to its users’ data knowingly and on a greater scale.

 

What has enabled them to deliver these apologies, year after year, was that these sycophantic monologues were always true enough to be believable. The Times’ story calls into question every one of those apologies—especially the ones issued this year.

There’s a simple takeaway from all this, and it’s not a pretty one: Facebook is either a mendacious, arrogant corporation in the mold of a 1980s-style Wall Street firm, or it is a company in much more disarray than it has been letting on. 

It’s hard to process this without finally realizing what it is that’s made us so angry with Silicon Valley, and Facebook in particular, in 2018: We feel lied to, like these companies are playing us, their users, for chumps, and they’re also laughing at us for being so naive.

 

 

Also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt:

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016 when Russian trolls attempted to meddle with the 2016 U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which for years, served as the Silicon Valley poster child of the nouveau American Dream. 


Guide to how artificial intelligence can change the world – Part 3 — from intelligenthq.com by Maria Fonseca and Paula Newton
This is part 3 of a Guide in 4 parts about Artificial Intelligence. The guide covers some of its basic concepts, history and present applications, possible developments in the future, and also its challenges as well as its opportunities.

Excerpt:

Artificial intelligence is considered to be anything that gives machines intelligence which allows them to reason in the way that humans can. Machine learning is a subset of artificial intelligence in which machines are programmed to learn. This is brought about through the development of algorithms that find patterns, trends and insights in the data fed into them, to help with decision making. Deep learning is in turn a subset of machine learning. This is a particularly innovative and advanced area of artificial intelligence which seeks to get machines to both learn and think like people.
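The "finding patterns in data" idea can be made concrete with a tiny sketch. Instead of being given a rule, the algorithm below recovers one (a line) from example data via ordinary least squares; the data points are made up for illustration:

```python
# Minimal illustration of machine learning's core idea: an algorithm
# infers a pattern from example data rather than being given the rule.
# The "training data" below is hypothetical and roughly follows y = 2x + 1.

def fit_line(points):
    """Ordinary least squares fit of y = a*x + b to (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var                 # learned slope
    b = mean_y - a * mean_x       # learned intercept
    return a, b

data = [(1, 3.0), (2, 5.1), (3, 6.9), (4, 9.0)]
slope, intercept = fit_line(data)

def predict(x):
    """Apply the learned pattern to an input the model never saw."""
    return slope * x + intercept
```

Real machine learning systems fit far more complex patterns (deep learning stacks many such learned transformations), but the shape of the process is the same: data in, pattern out, predictions from the pattern.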

 

Also see:

LinkedIn’s 2018 U.S. emerging jobs report — from economicgraph.linkedin.com

Excerpt (emphasis DSC):

Our biggest takeaways from this year’s Emerging Jobs Report:

  • Artificial Intelligence (AI) is here to stay. No, this doesn’t mean robots are coming for your job, but we are likely to see continued growth in fields and functions related to AI. This year, six out of the 15 emerging jobs are related in some way to AI, and our research shows that skills related to AI are starting to infiltrate every industry, not just tech. In fact, AI skills are among the fastest-growing skills on LinkedIn, and globally saw a 190% increase from 2015 to 2017.

 

 
© 2025 | Daniel Christian