A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI — from wired.com by Khari Johnson
Researchers are encouraging those who work in AI to explicitly consider racism, gender, and other structural inequalities.

Excerpt:

FORMS OF AUTOMATION such as artificial intelligence increasingly inform decisions about who gets hired, is arrested, or receives health care. Examples from around the world articulate that the technology can be used to exclude, control, or oppress people and reinforce historic systems of inequality that predate AI.

“Algorithms are animated by data, data comes from people, people make up society, and society is unequal,” the paper reads. “Algorithms thus arc towards existing patterns of power and privilege, marginalization, and disadvantage.”
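
To make that claim concrete, here is a minimal, hypothetical sketch (all data and names invented) of how a model trained on historically skewed decisions reproduces the skew, even though the two groups in the synthetic data are equally skilled:

```python
# Hypothetical illustration: a model trained on biased historical decisions
# learns and reproduces the bias. Synthetic data, invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)      # identically distributed in both groups

# Historical labels: equal skill, but group B was approved less often.
hired = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# The model "arcs toward" the historical pattern: group B's predicted
# approval rate comes out well below group A's, despite equal skill.
rate_a = model.predict(X[group == 0]).mean()
rate_b = model.predict(X[group == 1]).mean()
print(f"predicted approval rate, group A: {rate_a:.2f}")
print(f"predicted approval rate, group B: {rate_b:.2f}")
```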

 

The biggest tech trends of 2022, according to over 40 experts — from fastcompany.com by Mark Sullivan
Startup founders, Big Tech execs, VCs, and tech scholars offer their predictions on how Web3, the metaverse, and other emerging ideas will shape the next year.

We asked startup founders, Big Tech execs, VCs, scholars, and other experts to speculate on the coming year within their field of interest. Altogether, we collected more than 40 predictions about 2022. Together, they offer a smart composite look at the things we’re likely to be talking about by this time next year.

 

From DSC:
As I looked at the article below, I couldn’t help but wonder…what is the role of the American Bar Association (ABA) in this type of situation? How can the ABA help the United States deal with the impact/place of emerging technologies?


Clearview AI will get a US patent for its facial recognition tech — from engadget.com by J. Fingas
Critics are worried the company is patenting invasive tech.

Excerpt:

Clearview AI is about to get formal acknowledgment for its controversial facial recognition technology. Politico reports Clearview has received a US Patent and Trademark Office “notice of allowance” indicating officials will approve a filing for its system, which scans faces across public internet data to find people from government lists and security camera footage. The company just has to pay administrative fees to secure the patent.

In a Politico interview, Clearview founder Hoan Ton-That claimed this was the first facial recognition patent involving “large-scale internet data.” The firm sells its tool to government clients (including law enforcement) hoping to accelerate searches.

As you might imagine, there’s a concern the USPTO is effectively blessing Clearview’s technology and giving the company a chance to grow despite widespread objections to its technology’s very existence. 
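
For readers curious about the mechanics behind “scans faces across public internet data,” here is a generic, hypothetical sketch of embedding-based face search: a probe face is matched against an indexed gallery by cosine similarity. This illustrates the general technique only, not Clearview’s proprietary system:

```python
# Generic sketch of embedding-based face search (not Clearview's actual code).
# Real systems compute embeddings with a face-recognition model; here the
# vectors are random stand-ins so the example runs on its own.
import numpy as np

def cosine_similarity(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each gallery row."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return g @ q

rng = np.random.default_rng(1)
gallery = rng.normal(size=(100_000, 512))   # hypothetical 512-dim embeddings
labels = [f"person_{i}" for i in range(len(gallery))]

# A probe image of a known person, with noise (different photo, angle, ...).
probe = gallery[42] + rng.normal(scale=0.1, size=512)

scores = cosine_similarity(probe, gallery)
best = int(np.argmax(scores))
print(labels[best], f"(score = {scores[best]:.3f})")   # -> person_42
```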


 

From DSC:
From my perspective, the two items below are closely related:

Let’s Teach Computer Science Majors to Be Good Citizens. The Whole World Depends on It. — from edsurge.com by Anne-Marie Núñez, Matthew J. Mayhew, Musbah Shaheen and Laura S. Dahl

Excerpt:

Change may need to start earlier in the workforce development pipeline. Undergraduate education offers a key opportunity for recruiting students from historically underrepresented racial and ethnic, gender, and disability groups into computing. Yet even broadened participation in college computer science courses may not shift the tech workforce and block bias from seeping into tech tools if students aren’t taught that diversity and ethics are essential to their field of study and future careers.

Computer Science Majors Lack Citizenship Preparation
Unfortunately, those lessons seem to be missing from many computer science programs.

…and an excerpt from Why AI can’t really filter out “hate news” — with thanks to Sam DeBrule for this resource (emphasis DSC):

The incomprehensibility and unexplainability of huge algorithms
Michael Egnor: What terrifies me about artificial intelligence — and I don’t think one can overstate this danger — is that artificial intelligence has two properties that make it particularly deadly in human civilization. One is concealment. Even though every single purpose in artificial intelligence is human, it’s concealed. We don’t really understand it. We don’t understand Google’s algorithms.

There may even be a situation where Google doesn’t understand Google’s algorithms. But all of it comes from the people who run Google. So the concealment is very dangerous. We don’t know what these programs are doing to our culture. And it may be that no one knows, but they are doing things.

Note: Roman Yampolskiy has written about the incomprehensibility and unexplainability of AI: “Human beings are finite in our abilities. For example, our short term memory is about 7 units on average. In contrast, an AI can remember billions of items and AI capacity to do so is growing exponentially. While never infinite in a true mathematical sense, machine capabilities can be considered such in comparison with ours. This is true for memory, compute speed, and communication abilities.” So we have built-in bias and incomprehensibility at the same time.
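
One common, partial response to this opacity is post-hoc probing. The sketch below (synthetic data, scikit-learn) uses permutation importance, which reveals which inputs a model leans on without explaining why, which is roughly the gap Egnor and Yampolskiy describe:

```python
# Post-hoc probing sketch: permutation importance says *which* features
# matter to a trained model, but not *why* the model uses them as it does.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2_000, n_features=8,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:+.3f}")
```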

From DSC:
That part about concealment reminds me that our society depends upon the state of the hearts of the tech leaders. We don’t like to admit that, but it’s true. The legal realm is too far behind to stop the Wild West of technological change. It’s trying to catch up, but it’s coming onto the race track with no cars…just pedestrians walking or running as fast as they can…all the while, the technological cars are whizzing by.

The pace has changed significantly and quickly

 

The net effect of all of this is that we are more dependent than we care to admit upon the ethics, morals, and care for fellow humankind (or lack thereof) of the C-suites out there (especially Facebook/Meta Platforms, Google, Microsoft, Amazon, and Apple). Are they producing products and services that aim to help our societies move forward, or are they just trying to make some more bucks? Who — or what — is being served?

The software engineers and software architects are involved here big time as well. “Just because we can doesn’t mean we should.” But that perspective is sometimes in short supply.

 

Over 60,000 Fake Applications Submitted in Student Aid Scheme, California Says — from nytimes.com by Vimal Patel
It was unclear how much money, if any, was disbursed to the suspicious students. The federal Education Department said it was investigating the suspected fraud.

Excerpt:

According to Mr. Perry, fraud of this nature is easier to pull off at community colleges than at four-year institutions, because the two-year institutions don’t have admissions committees vetting applicants. And while colleges have had some fully virtual components for many years, the pandemic — which forced many colleges to operate entirely online — has provided the conditions for such schemes to flourish. “Somebody trying to perpetrate this would think this was a more likely time to try to get away with this,” Mr. Perry said.

He added that the next step for federal investigators should be to determine how widespread this conduct is and whether colleges elsewhere should be on the lookout.

 

Timnit Gebru Says Artificial Intelligence Needs to Slow Down — from wired.com by Max Levy
The AI researcher, who left Google last year, says the incentives around AI research are all wrong.

Excerpt:

ARTIFICIAL INTELLIGENCE RESEARCHERS are facing a problem of accountability: How do you try to ensure decisions are responsible when the decision maker is not a responsible person, but rather an algorithm? Right now, only a handful of people and organizations have the power—and resources—to automate decision-making.

Since leaving Google, Gebru has been developing an independent research institute to show a new model for responsible and ethical AI research. The institute aims to answer similar questions as her Ethical AI team, without fraught incentives of private, federal, or academic research—and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Defense Department figure out how to kill more people more efficiently,” she said.

From DSC:
What does our society need to do to respond to this exponential pace of technological change? And where is the legal realm here?

Speaking of the pace of change…the following quote from The Future Direction And Vision For AI (from marktechpost.com by Imtiaz Adam) speaks to massive changes in this decade as well:

The next generation will feature 5G alongside AI and will lead to a new generation of Tech superstars in addition to some of the existing ones.

In future, the variety, volume and velocity of data is likely to substantially increase as we move to the era of 5G and devices at the Edge of the network. The author argues that our experience of development with AI and the arrival of 3G followed by 4G networks will be dramatically overshadowed by the arrival of AI meets 5G and the IoT, leading to the rise of the AIoT, where the Edge of the network will become key for product and service innovation and business growth.

Also related/see:

 
 
 

Artificial Intelligence: Should You Teach It To Your Employees? — from forbes.com by Tom Taulli

Excerpt:

“If more people are AI literate and can start to participate and contribute to the process, more problems–both big and small–across the organization can be tackled,” said David Sweenor, who is the Senior Director of Product Marketing at Alteryx. “We call this the ‘Democratization of AI and Analytics.’ A team of 100, 1,000, or 5,000 working on different problems in their areas of expertise certainly will have a bigger impact than if left in the hands of a few.”

New Artificial Intelligence Tool Accelerates Discovery of Truly New Materials — from scitechdaily.com
The new artificial intelligence tool has already led to the discovery of four new materials.

Excerpt:

Researchers at the University of Liverpool have created a collaborative artificial intelligence tool that reduces the time and effort required to discover truly new materials.

AI development must be guided by ethics, human wellbeing and responsible innovation — from healthcareitnews.com by Bill Siwicki
An expert in emerging technology from the IEEE Standards Association describes the elements that must be considered as artificial intelligence proliferates across healthcare.

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.
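
As a rough illustration of how “winner-take-most” concentration can be quantified, here is a small sketch using a Herfindahl-Hirschman-style index over regional shares of AI activity. The Bay Area’s roughly one-quarter share comes from the study; every other number below is an invented placeholder:

```python
# Hypothetical concentration sketch. Only the ~25% Bay Area share is from
# the study; the rest are placeholders invented for illustration.
shares = {
    "SF Bay Area":     0.25,  # ~one-quarter of US AI papers/patents/companies
    "New York":        0.08,
    "Boston":          0.06,
    "Seattle":         0.05,
    "Austin":          0.04,
    "everywhere else": 0.52,  # lumping the long tail overstates the index a bit
}
assert abs(sum(shares.values()) - 1.0) < 1e-9

# Herfindahl-Hirschman index: sum of squared shares.
# 1/HHI is roughly the "effective number" of equally sized regions.
hhi = sum(s ** 2 for s in shares.values())
print(f"HHI = {hhi:.3f}; effective number of regions ~ {1 / hhi:.1f}")
```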

Also relevant/see:

 

“Algorithms are opinions embedded in code.” — Cathy O’Neil

 

Many Americans aren’t aware they’re being tracked with facial recognition while shopping  — from techradar.com by Anthony Spadafora
You’re not just on camera, you’re also being tracked

Excerpt:

Despite consumer opposition to facial recognition, the technology is currently being used in retail stores throughout the US, according to new research from Piplsay.

While San Francisco banned the police from using facial recognition back in 2019 and the EU called for a five-year ban on the technology last year, several major retailers in the US, including Lowe’s, Albertsons and Macy’s, have been using it for both fraud and theft detection.

From DSC:
I’m not sure how prevalent this practice is…and that’s precisely the point. We don’t know what all of those cameras are actually doing in our stores, gas stations, supermarkets, etc. I put this in the categories of policy, law schools, legal, government, and others, as the legislative and legal realms need to scramble to catch up to this Wild Wild West.

Along these lines, I was watching a portion of 60 Minutes last night, where they were doing a piece on autonomous trucks (reportedly set to hit the roads without a person sometime later this year). When asked about oversight, there was some…but not much.

Readers of this blog will know that I have often wondered…”How does society weigh in on these things?”

Along these same lines, also see:

  • The NYPD Had a Secret Fund for Surveillance Tools — from wired.com by Sidney Fussell
    Documents reveal that police bought facial-recognition software, vans equipped with x-ray machines, and “stingray” cell site simulators—with no public oversight.
 

Google CEO Still Insists AI Revolution Bigger Than Invention of Fire — from gizmodo.com by Matt Novak
Pichai suggests the internet and electricity are also small potatoes compared to AI.

Excerpt:

The artificial intelligence revolution is poised to be more “profound” than the invention of electricity, the internet, and even fire, according to Google CEO Sundar Pichai, who made the comments to BBC media editor Amol Rajan in a podcast interview that first went live on Sunday.

“The progress in artificial intelligence, we are still in very early stages, but I viewed it as the most profound technology that humanity will ever develop and work on, and we have to make sure we do it in a way that we can harness it to society’s benefit,” Pichai said.

“But I expect it to play a foundational role pretty much across every aspect of our lives. You know, be it health care, be it education, be it how we manufacture things and how we consume information.”

 

The Future of Social Media: Re-Humanisation and Regulation — by Gerd Leonhard

How could social media become ‘human’ again? How can we stop the disinformation, dehumanisation and dataism that has resulted from social media’s algorithmic obsessions? I foresee that the EXTERNALITIES, i.e. the consequences of unmitigated growth of exponential digital technologies, will become just as big as the consequences of climate change. In fact, today, the social media industry already has quite a few parallels to the oil, gas and coal business: while private companies make huge profits from extracting the ‘oil’ (i.e. user data), the external damage is left to society and governments to fix. This needs to change! In this keynote I make some precise suggestions as to how that could happen.

Some snapshots/excerpts:

The future of social media — a video by Gerd Leonhard, summer 2021

From DSC:
Gerd brings up some solid points here. His presentation and perspectives are not only worth checking out, but also worth taking some time to seriously reflect on.

What kind of future do we want?

And for you professors, teachers, instructional designers, trainers, and presenters out there, check out *how* he delivers the content. It’s well done and very engaging.


 

21 jobs of the future: A guide to getting — and staying — employed over the next 10 years — from cognizant.com and the Center for the Future of Work

Excerpt:

WHAT THE NEXT 10 YEARS WILL BRING: NEW JOBS
In this report, we propose 21 new jobs that will emerge over the next 10 years and will become cornerstones of the future of work. In producing this report, we imagined hundreds of jobs that could emerge within the major macroeconomic, political, demographic, societal, cultural, business and technology trends observable today, e.g., growing populations, aging populations, populism, environmentalism, migration, automation, arbitrage, quantum physics, AI, biotechnology, space exploration, cybersecurity, virtual reality.

Among the jobs we considered, some seemed further out on the horizon and are not covered here: carbon farmers, 3-D printing engineers, avatar designers, cryptocurrency arbitrageurs, drone jockeys, human organ developers, teachers of English as a foreign language for robots, robot spa owners, algae farmers, autonomous fleet valets, Snapchat addiction therapists, urban vertical farmers and Hyperloop construction managers. These are jobs that younger generations may do in the further off future.

Chart: the report’s 21 jobs plotted with tech-centricity on the vertical axis and time horizon on the horizontal axis.

Also see:

Here are the top 10 jobs of the future — from bigthink.com by Robert Brown
Say hello to your new colleague, the Workplace Environment Architect.

Excerpt:

6. Algorithm Bias Auditor – “All online, all the time” lifestyles for work and leisure accelerated the competitive advantage derived from algorithms by digital firms everywhere. But from Brussels to Washington, given the increasing statutory scrutiny on data, it’s a near certainty that when it comes to how they’re built, verification through audits will help ensure the future workforce is also the fair workforce.
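
As a concrete, hypothetical illustration of what an algorithm bias auditor might actually compute, here is a minimal sketch of a demographic parity check on synthetic decisions, including the informal “four-fifths rule” used in US adverse-impact analysis:

```python
# Minimal bias-audit sketch on synthetic decisions. Real audits cover many
# more metrics (equalized odds, calibration, ...) and real decision logs.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, 5_000)                      # protected attribute
# Synthetic outcomes: group 1 is approved less often by construction.
approved = rng.random(5_000) < np.where(group == 1, 0.45, 0.60)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rates: {rate_0:.2f} vs {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")

# Informal "four-fifths rule": flag if the lower rate is < 80% of the higher.
ratio = min(rate_0, rate_1) / max(rate_0, rate_1)
print(f"disparate-impact ratio: {ratio:.2f} (flag if below 0.80)")
```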

 

Let’s Teach Computer Science Majors to Be Good Citizens. The Whole World Depends on It. — from edsurge.com by Anne-Marie Núñez, Matthew J. Mayhew, Musbah Shaheen and Laura S. Dahl

Excerpt:

To mitigate the perpetuation of these and related inequities, observers have called for increased diversification of the technology workforce. However, as books like “Brotopia” by Emily Chang and “Race after Technology” by Ruha Benjamin indicate, the culture of tech companies can be misogynistic and racist and therefore unwelcoming to many people. Google’s firing of a well-regarded Black scientist for her research on algorithmic bias in December 2020 suggests that there may be limited capacity within the industry to challenge this culture.

Change may need to start earlier in the workforce development pipeline. Undergraduate education offers a key opportunity for recruiting students from historically underrepresented racial and ethnic, gender, and disability groups into computing. Yet even broadened participation in college computer science courses may not shift the tech workforce and block bias from seeping into tech tools if students aren’t taught that diversity and ethics are essential to their field of study and future careers.

Also mentioned/see:

  • Teaching Responsible Computing Playbook
    The ultimate goal of Teaching Responsible Computing is to educate a new wave of students who bring holistic thinking to the design of technology products. To do this, it is critical for departments to work together across computing, humanistic studies, and more, and collaborate across institutions. This Playbook offers the lessons learned from the process of adapting and enhancing curricula to include responsible computing in a broad set of institutions and help others get started doing the same in their curricula. 
 