Let’s Teach Computer Science Majors to Be Good Citizens. The Whole World Depends on It. — from edsurge.com by Anne-Marie Núñez, Matthew J. Mayhew, Musbah Shaheen and Laura S. Dahl

Excerpt:

To mitigate the perpetuation of these and related inequities, observers have called for increased diversification of the technology workforce. However, as books like “Brotopia” by Emily Chang and “Race after Technology” by Ruha Benjamin indicate, the culture of tech companies can be misogynistic and racist and therefore unwelcoming to many people. Google’s firing of a well-regarded Black scientist for her research on algorithmic bias in December 2020 suggests that there may be limited capacity within the industry to challenge this culture.

Change may need to start earlier in the workforce development pipeline. Undergraduate education offers a key opportunity for recruiting students from historically underrepresented racial and ethnic, gender, and disability groups into computing. Yet even broadened participation in college computer science courses may not shift the tech workforce and block bias from seeping into tech tools if students aren’t taught that diversity and ethics are essential to their field of study and future careers.

Also mentioned/see:

  • Teaching Responsible Computing Playbook
    The ultimate goal of Teaching Responsible Computing is to educate a new wave of students who bring holistic thinking to the design of technology products. To do this, it is critical for departments to work together across computing, humanistic studies, and more, and to collaborate across institutions. This Playbook offers lessons learned from adapting and enhancing curricula to include responsible computing at a broad set of institutions, and it helps others get started doing the same in their own curricula.
 


Artificial intelligence and the future of national security — from news.asu.edu

Excerpt:

Artificial intelligence is a “world-altering” technology that represents “the most powerful tools in generations for expanding knowledge, increasing prosperity and enriching the human experience” and will be a source of enormous power for the companies and countries that harness them, according to the recently released Final Report of the National Security Commission on Artificial Intelligence.

This is not hyperbole or a fantastical version of AI’s potential impact. This is the assessment of a group of leading technologists and national security professionals charged with offering recommendations to Congress on how to ensure American leadership in AI for national security and defense. Concerningly, the group concluded that the U.S. is not currently prepared to defend American interests or compete in the era of AI.

Also see:

EU Set to Ban Surveillance, Start Fines Under New AI Rules — from bloomberg.com by Natalia Drozdiak

Excerpt:

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications.

Also see:

Wrongfully arrested man sues Detroit police over false facial recognition match — from washingtonpost.com by Drew Harwell
The case could fuel criticism of police investigators’ use of a controversial technology that has been shown to perform worse on people of color

Excerpts:

A Michigan man has sued Detroit police after he was wrongfully arrested and falsely identified as a shoplifting suspect by the department’s facial recognition software in one of the first lawsuits of its kind to call into question the controversial technology’s risk of throwing innocent people in jail.

Robert Williams, a 43-year-old father in the Detroit suburb of Farmington Hills, was arrested last year on charges he’d taken watches from a Shinola store after police investigators used a facial recognition search of the store’s surveillance-camera footage that identified him as the thief.

Prosecutors dropped the case less than two weeks later, arguing that officers had relied on insufficient evidence. Police Chief James Craig later apologized for what he called “shoddy” investigative work. Williams, who said he had been driving home from work when the 2018 theft had occurred, was interrogated by detectives and held in custody for 30 hours before his release.

Williams’s attorneys did not make him available for comment Tuesday. But Williams wrote in The Washington Post last year that the episode had left him deeply shaken, in part because his young daughters had watched him get handcuffed in his driveway and put into a police car after returning home from work.

“How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?” he wrote. “As any other black man would be, I had to consider what could happen if I asked too many questions or displayed my anger openly — even though I knew I had done nothing wrong.”
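The false match described above is easier to understand with a toy sketch of how these systems typically work (this is our own illustration, not the software Detroit police used, and the names and numbers are hypothetical): a face is reduced to a numeric embedding, and two faces are declared a "match" when the distance between their embeddings falls under a tuned threshold. A threshold that is too loose, or a model whose embeddings are less reliable for some groups, produces exactly this kind of false positive.

```python
import math

def euclidean(a, b):
    """Distance between two face embeddings (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(probe, candidate, threshold=0.6):
    """Declare a match when embeddings are closer than the threshold."""
    return euclidean(probe, candidate) < threshold

# Hypothetical embeddings for illustration only.
suspect_from_cctv = [0.1, 0.9, 0.3]   # embedding from surveillance footage
innocent_person   = [0.2, 0.8, 0.4]   # a different person's embedding

# The distance (~0.17) falls under the threshold, so the system reports a
# match even though these are different people -- a false positive.
print(is_match(suspect_from_cctv, innocent_person))  # True
```

The point of the sketch: the system never outputs "this is the same person," only "these embeddings are close," and every choice of threshold trades false negatives against false positives like the one in the Williams case.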

Addendum on 4/20/21:

 

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud — from vice.com by Gabriel Geiger; with thanks to Sam DeBrule for this resource
Dutch tax authorities used algorithms to automate an austere and punitive war on low-level fraud—the results were catastrophic.

Excerpt:

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.

On a more positive note, Sam DeBrule (in his Machine Learnings e-newsletter) also notes the following article:

Can artificial intelligence combat wildfires? Sonoma County tests new technology — from latimes.com by Alex Wigglesworth

 

From DSC:
The items below are from Sam DeBrule’s Machine Learnings e-Newsletter.



#Awesome

“Sonoma County is adding artificial intelligence to its wildfire-fighting arsenal. The county has entered into an agreement with the South Korean firm Alchera to outfit its network of fire-spotting cameras with software that detects wildfire activity and then alerts authorities. The technology sifts through past and current images of terrain and searches for certain changes, such as flames burning in darkness, or a smoky haze obscuring a tree-lined hillside, according to Chris Godley, the county’s director of emergency management…The software will use feedback from humans to refine its algorithm and will eventually be able to detect fires on its own — or at least that’s what county officials hope.” – Alex Wigglesworth, Los Angeles Times
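The change-detection idea described in the quote can be sketched in a few lines (our own toy illustration; Alchera's actual system is proprietary and far more sophisticated): compare a new camera frame against a reference frame and raise an alert when enough pixels change significantly, e.g. a bright flame appearing in a dark night scene. All thresholds here are made-up values for demonstration.

```python
def detect_change(reference, current, pixel_threshold=50, ratio_threshold=0.02):
    """Return True if enough pixels changed significantly between two frames.

    Frames are grayscale images given as 2-D lists of 0-255 intensities.
    """
    changed = total = 0
    for ref_row, cur_row in zip(reference, current):
        for ref_px, cur_px in zip(ref_row, cur_row):
            total += 1
            if abs(cur_px - ref_px) > pixel_threshold:
                changed += 1
    return changed / total > ratio_threshold

# Night scene: a mostly dark reference frame, then a bright patch appears.
night = [[10] * 8 for _ in range(8)]
flame = [row[:] for row in night]
for r in range(3):
    for c in range(3):
        flame[r][c] = 220

print(detect_change(night, flame))  # True: 9 of 64 pixels changed (~14%)
```

A real system layers a trained model on top of this kind of signal (to tell smoke from fog, for instance), and, as the article notes, uses human feedback to refine it over time.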

#Not Awesome

Hacked Surveillance Camera Firm Shows Staggering Scale of Facial Recognition — from
A hacked customer list shows that facial recognition company Verkada is deployed in tens of thousands of schools, bars, stores, jails, and other businesses around the country.

Excerpt:

Hackers have broken into Verkada, a popular surveillance and facial recognition camera company, and managed to access live feeds of thousands of cameras across the world, as well as siphon a Verkada customer list. The breach shows the astonishing reach of facial recognition-enabled cameras in ordinary workplaces, bars, parking lots, schools, stores, and more.

The staggering list includes K-12 schools, seemingly private residences marked as “condos,” shopping malls, credit unions, multiple universities across America and Canada, pharmaceutical companies, marketing agencies, pubs and bars, breweries, a Salvation Army center, churches, the Professional Golfers Association, museums, a newspaper’s office, airports, and more.

 

The Chegg situation is worse than you think — from eliterate.us by Michael Feldstein

Excerpts:

Forbes just ran with an article entitled “This $12 Billion Company Is Getting Rich Off Students Cheating Their Way Through Covid.”

Ouch.


[Per Michael] To sum up:

  • Publishers, after selling expensive textbooks to students, sold the answers to the homework questions in those expensive books to Chegg.
  • Chegg sells the answers to the questions to the students, who often use them to cheat.
  • To combat this problem, universities pay for proctoring software, which is apparently more effective at preventing students from going to the bathroom than it is at preventing cheating.
  • To add insult to all of this injury, “to chegg” is now apparently a verb in the English language. We will all have to live with that linguistic violence.

Addendum on 2/9/21:

 

Could 2021 Be The Year Of Civil Justice Reform? — from law360.com by Cara Bayles

Excerpt:

Now, any tenant in Boulder, regardless of means, can get an attorney to represent them in housing court for free. While a handful of other cities like New York and San Francisco offer similar programs, there is no universal right to an attorney in eviction cases.

Justice advocates are hoping that could change. COVID-19, they say, has drawn attention to access to justice issues that have plagued civil proceedings for years. They hope the tragedies of 2020 can fuel reform in 2021.

Six months into the pandemic, millions of Americans had fallen behind on rent. One statistic that civil justice advocates have known for years became apparent — that while 90% of landlords have legal counsel in eviction proceedings, only 10% of tenants do.

 

 

From DSC:
An interesting, more positive use of AI here:

Deepdub uses AI to dub movies in the voice of famous actors — from protocol.com by Janko Roettgers
Fresh out of stealth, the startup is using artificial intelligence to automate the localization process for global streaming.

Excerpt:

Tel Aviv-based startup Deepdub wants to help streaming services accelerate this kind of international rollout by using artificial intelligence for their localization needs. Deepdub, which came out of stealth on Wednesday, has built technology that can translate a voice track to a different language, all while staying true to the voice of the talent. This makes it possible to have someone like Morgan Freeman narrate a movie in French, Italian or Russian without losing what makes Freeman’s voice special and recognizable.

From DSC:
A much more negative use of AI here:


 

 

DC: This is not right! When will there be justice?! We need to help Native American colleges, universities, & communities out!

The Digital Divide for Tribal College Students — COVID, CARES Act, and Critical Next Steps — from diverseeducation.com

Excerpt:

In this episode, staff writer Sara Weissman shares a story that focuses on the digital divide for Native Americans by bringing in the voices of tribal college leaders and their students during the COVID-19 pandemic.

Many don’t know but Native American colleges and universities have long struggled with the worst internet connectivity in the nation while ironically paying the highest rates for service. Hear first-hand how students from Diné College and other institutions are currently affected. Carrie Billie (Big Water Clan), President & CEO of the American Indian Higher Education Consortium (AIHEC) and Dr. Cynthia Lindquist (Star Horse Woman), President of Cankdeska Cikana Community College in North Dakota, break down the data and lay out critical next steps necessary to address the digital divide.


From DSC:
When will there be justice!? Let’s join in and make amends and provide the funding, concrete assistance, products, and services to Native American colleges, universities, and communities. Some potential ideas:

  • For the short term, could there be Loon balloons deployed immediately to provide free and stronger access to the Internet?


  • Could our Federal Government make amends and do the right thing here? (e-rate program, put Internet access in, make policy changes, offer more grants, other?)
  • Could Silicon Valley assist with hardware and software? For example:
    • Can Apple, HP, Microsoft, and others donate hardware and software?
    • Can Zoom, Adobe, Cisco Webex, Microsoft Teams, and others donate whatever these communities need to provide videoconferencing licenses?
  • Could telecom providers provide free internet access?
  • Could MOOCs offer more free courses?
  • Could furniture makers such as Steelcase, Herman Miller, and others donate furniture and help establish connected learning spaces?
  • How might faculty members and staff within higher education contribute?
  • How could churches, synagogues, and such get involved?
  • Could the rest of us locate and donate to charities that aim to provide concrete assistance to Native American schools, colleges, universities, and communities?

We need to do the right thing here. This is another area* where our nation can do much better.

* Here’s another example/area where we can do much better and make amends/changes.

 


Addendum on 12/7/20:

 

Pushback is growing against automated proctoring services. But so is their use — from edsurge.com by Jeffrey R. Young

Excerpt:

Many students have pushed back, arguing that remote proctoring tools result in a serious invasion of privacy and create stress that can hinder academic performance. More than 60,000 students across the U.S. have signed petitions calling on their colleges to stop using automated proctoring tools, meaning that the technology has become arguably the most controversial tool of the pandemic at colleges.

From DSC:
We often have an issue within higher education — including graduate and professional schools — where the student and the professor aren’t on the same team (or at least that’s the perception). To me, professors need to be saying (and living out the message that), “We ARE on your team. We are working to help make you successful in the future that you have chosen for yourself. We’re here to help you…not monitor you.”

It’s like how I feel when I walk into so many public places these days (and even out on the roadways). When I walk into a store, it’s as if the security cameras are whispering to me and to others: “We don’t trust you. Some of you have stolen in the past, so we’re going to carefully watch every single one of you. And we aren’t just going to watch you, we’re going to record you as well.”

The message? We don’t trust you.

This severely hampers the relationships involved.

And I’m sure that cheating is going on. But then, that makes me think that perhaps it’s time to change the way we assess students — and to help them see assessments as opportunities to learn, not to cheat. 

Lower the stakes. Offer tests more frequently. Provide more opportunities to practice recall. And be on their team.

 

From DSC:
As the technologies get more powerful, so do the ramifications/consequences. And here’s the kicker. Just like we’re seeing with trying to deal with systemic racism in our country…like it or not, it all comes back to the state of the hearts and minds out there. I’m not trying to be preachy or to pretend that I’m better than anyone. I’m a sinner. I know that full well. And I view all of us as equals trying to survive and to make our way in this tough-to-live-in world. But the unfortunate truth is that technologies are tools, and how the tools are used depends upon one’s heart.

Proverbs 4:23 New International Version (NIV) — from biblegateway.com

23 Above all else, guard your heart,
    for everything you do flows from it.

 

This startup is using AI to give workers a “productivity score” — from technologyreview.com by Will Douglas Heaven
Enaible is one of a number of new firms that are giving employers tools to help keep tabs on their employees—but critics fear this kind of surveillance undermines trust.

Excerpt:

In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.

Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.”

Machine-learning algorithms also encode hidden bias in the data they are trained on. Such bias is even harder to expose when it’s buried inside an automated system. If these algorithms are used to assess an employee’s performance, it can be hard to appeal an unfair review or dismissal. 
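The claim that algorithms encode the bias hidden in their training data can be made concrete with a deliberately tiny sketch (a hypothetical illustration, not any real product): a "model" that simply learns approval rates from historical decisions will faithfully reproduce whatever skew those decisions contained, while presenting the result as a neutral-looking score.

```python
from collections import defaultdict

# Hypothetical historical performance reviews: (group, past_decision).
# Group "A" was historically approved far more often than group "B".
history = [
    ("A", "approve"), ("A", "approve"), ("A", "approve"), ("A", "reject"),
    ("B", "approve"), ("B", "reject"), ("B", "reject"), ("B", "reject"),
]

def train(records):
    """Learn each group's historical approval rate as its 'score'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, decision in records:
        counts[group][0] += decision == "approve"
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.75, 'B': 0.25} -- the historical skew is now the model
```

Real systems are far more complex, but the mechanism is the same: if past decisions were biased, a model fit to them will score new people by that bias, and an employee disputing an unfair automated review has to argue against a number rather than a person.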

 

 

From DSC:
I can’t help but reflect on how slippery the slope is when we start talking about using drones — especially as sponsored and used by governments, including our government here in the U.S. Consider the following from the Future Today Institute.


Excerpt:

Eyes in the sky
As nationwide racial justice protests continue, some journalists and protestors have noticed a new addition to the armed police officers and National Guard troops: a drone flying a hexagon-shaped route 20,000 feet above the streets in Minneapolis. The drone, flown by U.S. Customs and Border Protection, is called a Predator, and is a piece of military technology used for identifying and targeting terrorists overseas. Lately, it’s become a more common sight domestically.

Last month, a number of New Yorkers witnessed a drone floating above them, barking orders to follow social distancing guidelines. The mysterious drone wasn’t official police equipment, but rather a privately owned device piloted by a man named Xavier Arthur in Queens, who was frustrated that people weren’t following stay-at-home orders. He claimed to represent the “Anti-Covid-19 Volunteer Drone Task Force.”

It’s not an isolated incident. During the outbreak, drones have been used extensively to monitor residents and encourage them to stay indoors, to inspect traffic stops and hospitals, and to spray cities with disinfectants. In Paris and Mumbai, they’re patrolling social distancing violators. In China, a video clip went viral, showing a drone breaking up a mahjong game—residents had defied local orders that they stay indoors. Drones with infrared cameras also allegedly flew overhead and checked for people with fevers.

Advanced drones can pinpoint certain behaviors in crowds from high altitudes, recognize and automatically follow targets, and communicate with each other or to command centers on the ground with remarkable precision and low latency. The pandemic and protests are playing to the strengths of an emerging real-time aerial surveillance ecosystem.

3 Things You Should Know

  1. The Flying Internet of Things is taking off.
  2. New drones can self-destruct.
  3. Shareable drones may drive growth in the industry.
 

NYC classrooms cancel Zoom after trolls make ‘Zoombombing’ a thing — from thenextweb.com by Bryan Clark

From DSC:
I’m a sinner. I know that all too well. But I’m tagging this with ethics, morals, and matters of the heart because I don’t have a tag for “I’m bummed out that a solid technology gets ruined by a few bad apples.” It’s not the technology’s fault, per se. It has everything to do with the human heart…whether we like to admit/recognize it or not.

Also see:

“Zoombombing problems are more likely to occur for online events for which the link is circulated widely, or even posted on a public website.” (source)
 

Why education is a ‘wicked problem’ for learning engineers to solve — from edsurge.com by Rebecca Koenig

Excerpts (emphasis DSC):

So, back to the wicked problem: How do we make education that’s both quality education and at the same time accessible and affordable?

“Now, we are building a new technology that we call Agent Smith. It’s another AI technology— and we’re very excited about it—that builds [a] Jill Watson for you. And Agent Smith can build a Jill Watson for you in less than 10 percent of the hours.”

So one question for online education is, can we build a new set of tools—and I think that’s where AI is going to go, that learning engineering is going to go—where AI is not helping individual humans as much as AI is helping human-human interaction.

Huge ethical issues and something that learning engineering has not yet started focusing on in a serious manner. We are still in a phase of, “Look ma, no hands, I can ride a bike without hands.”

Technology should not be left to technologists.

Learning from the living class room

 

From DSC:
Normally I don’t advertise or “plug” where I work, but as our society is in serious need of increasing Access to Justice (#A2J) & promoting diversity w/in the legal realm, I need to post this video.

 
© 2024 | Daniel Christian