Surveillance in Schools Associated With Negative Student Outcomes — from techlearning.com by Erik Ofgang
Surveillance at schools is meant to keep students safe, but sometimes it can make them feel like suspects instead.

Excerpt:

“We found that schools that rely heavily on metal detectors, random book bag searches, school resource officers, and other methods of surveillance had a negative impact relative to those schools who relied on those technologies least,” says Odis Johnson Jr., the lead author of the study and the Bloomberg Distinguished Professor of Social Policy & STEM Equity at Johns Hopkins.

The researchers also found that Black students are four times more likely to attend a high- versus low-surveillance school, and students who attend high-surveillance schools are more likely to be poor.

 

Americans Need a Bill of Rights for an AI-Powered World — from wired.com by Eric Lander & Alondra Nelson
The White House Office of Science and Technology Policy is developing principles to guard against powerful technologies—with input from the public.

Excerpt (emphasis DSC):

Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.

Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you. 

In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.

Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at ai-equity@ostp.eop.gov

 

 

Justice, Equity, And Fairness: Exploring The Tense Relationship Between Artificial Intelligence And The Law With Joilson Melo — from forbes.com by Annie Brown

Excerpt:

The law and legal practitioners stand to gain a lot from a proper adoption of AI into the legal system. Legal research is one area that AI has already begun to help with. AI can streamline the thousands of results an internet or directory search would otherwise provide, offering a smaller, digestible handful of relevant authorities for legal research. This is already proving helpful, and with more targeted machine learning it will only get better.

The possible benefits go on; automated drafts of documents and contracts, document review, and contract analysis are some of those considered imminent.

Many have even considered the possibilities of AI in helping with more administrative functions like the appointment of officers and staff, administration of staff, and making the citizens aware of their legal rights.

A future without AI seems bleak and laborious for most industries, including the legal industry, and while we must march on, we must be cautious about our strategies for adoption. This point is better put in the words of Joilson Melo: “The possibilities are endless, but the burden of care is very heavy[…]we must act and evolve with [caution].”
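As a toy illustration of the kind of "streamlining" of legal research described above, here is a minimal relevance-ranking routine using TF-IDF weighting. This is only a sketch of the general idea; no commercial legal research tool is this simple, and the sample "authorities" below are invented for the example.

```python
import math
from collections import Counter

def tfidf_rank(query, documents, top_k=3):
    """Rank documents against a query by TF-IDF weighted cosine similarity."""
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    # Document frequency: how many documents contain each term.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    def idf(term):
        # Smoothed inverse document frequency: rare terms weigh more.
        return math.log((1 + n_docs) / (1 + df[term])) + 1
    def vectorize(tokens):
        tf = Counter(tokens)
        return {t: (count / len(tokens)) * idf(t) for t, count in tf.items()}
    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    qvec = vectorize(query.lower().split())
    scored = [(cosine(qvec, vectorize(tokens)), doc)
              for tokens, doc in zip(tokenized, documents)]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

# Invented stand-ins for case summaries / headnotes.
authorities = [
    "negligence duty of care breach causation damages",
    "contract formation offer acceptance consideration",
    "duty of care owed to foreseeable plaintiffs",
    "criminal procedure search and seizure warrant",
]
print(tfidf_rank("duty of care negligence", authorities, top_k=2))
```

The query about negligence surfaces the two tort-related entries and drops the contract and criminal-procedure ones, which is the "smaller, digestible handful" idea in miniature.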

 

In the US, the AI Industry Risks Becoming Winner-Take-Most — from wired.com by Khari Johnson
A new study illustrates just how geographically concentrated AI activity has become.

Excerpt:

A NEW STUDY warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.

“When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that’s where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter,” Mark Muro, policy director at the Brookings Institution and the study’s coauthor, told WIRED.

Also relevant/see:

 

“Algorithms are opinions embedded in code.” — Cathy O’Neil

 

The Fight to Define When AI Is ‘High Risk’ — from wired.com by Khari Johnson
Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.

Excerpt:

The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.

 

From DSC:
Yet another example of the need for the legislative and legal realms to try and catch up here.

The legal realm needs to try and catch up with the exponential pace of technological change

 

Many Americans aren’t aware they’re being tracked with facial recognition while shopping — from techradar.com by Anthony Spadafora
You’re not just on camera, you’re also being tracked

Excerpt:

Despite consumer opposition to facial recognition, the technology is currently being used in retail stores throughout the US according to new research from Piplsay.

While San Francisco banned the police from using facial recognition back in 2019 and the EU called for a five-year ban on the technology last year, several major retailers in the US, including Lowe’s, Albertsons, and Macy’s, have been using it for both fraud and theft detection.

From DSC:
I’m not sure how prevalent this practice is…and that’s precisely the point. We don’t know what all of those cameras are actually doing in our stores, gas stations, supermarkets, etc. I put this in the categories of policy, law schools, legal, government, and others, as the legislative and legal realms need to scramble to catch up to this Wild Wild West.

Along these lines, I was watching a portion of 60 Minutes last night, where they were doing a piece on autonomous trucks (reportedly set to hit the roads without a person sometime later this year). When asked about oversight, there was some…but not much.

Readers of this blog will know that I have often wondered…”How does society weigh in on these things?”

Along these same lines, also see:

  • The NYPD Had a Secret Fund for Surveillance Tools — from wired.com by Sidney Fussell
    Documents reveal that police bought facial-recognition software, vans equipped with x-ray machines, and “stingray” cell site simulators—with no public oversight.
 

Employers Tiptoeing into TikTok Hiring: Beware, Attorneys Say — from news.bloomberglaw.com by Dan Papscun and Paige Smith

Excerpt:

  • The app encourages ‘hyper superficiality’ in hiring
  • Age discrimination also a top-line concern for attorneys

 

 

Google CEO Still Insists AI Revolution Bigger Than Invention of Fire — from gizmodo.com by Matt Novak
Pichai suggests the internet and electricity are also small potatoes compared to AI.

Excerpt:

The artificial intelligence revolution is poised to be more “profound” than the invention of electricity, the internet, and even fire, according to Google CEO Sundar Pichai, who made the comments to BBC media editor Amol Rajan in a podcast interview that first went live on Sunday.

“The progress in artificial intelligence, we are still in very early stages, but I viewed it as the most profound technology that humanity will ever develop and work on, and we have to make sure we do it in a way that we can harness it to society’s benefit,” Pichai said.

“But I expect it to play a foundational role pretty much across every aspect of our lives. You know, be it health care, be it education, be it how we manufacture things and how we consume information.”

 

AI voice actors sound more human than ever — and they’re ready to hire — from technologyreview.com by Karen Hao
A new wave of startups are using deep learning to build synthetic voice actors for digital assistants, video-game characters, and corporate videos.

Excerpt:

The company blog post drips with the enthusiasm of a ’90s US infomercial. WellSaid Labs describes what clients can expect from its “eight new digital voice actors!” Tobin is “energetic and insightful.” Paige is “poised and expressive.” Ava is “polished, self-assured, and professional.”

Each one is based on a real voice actor, whose likeness (with consent) has been preserved using AI. Companies can now license these voices to say whatever they need. They simply feed some text into the voice engine, and out will spool a crisp audio clip of a natural-sounding performance.

But the rise of hyperrealistic fake voices isn’t consequence-free. Human voice actors, in particular, have been left to wonder what this means for their livelihoods.

And below are a couple of somewhat related items:

Amazon’s latest voice interoperability move undermines Google — from protocol.com by Janko Roettgers
With a new toolkit, Amazon is making it easier to build devices that run multiple voice assistants — weakening one of Google’s key arguments against licensing the Google Assistant for such scenarios.

People should be able to pick whatever assistant they prefer for any given task, simply by invoking different words, Rubenson said. “We think it’s critical that customers have choice and flexibility,” he said. “Each will have their own strengths and capabilities.”

Protocol Next Up — from protocol.com by Janko Roettgers
Defining the future of tech and entertainment with Janko Roettgers.

Voice is becoming a true developer ecosystem. Amazon now has more than 900,000 registered Alexa developers, who collectively have built over 130,000 Alexa skills. And those skills are starting to show up in more and more places: “We now have hundreds of physical products” with Alexa built in, Alexa Voice Service & Alexa Skills VP Aaron Rubenson told me this week.

 

Watch a Drone Swarm Fly Through a Fake Forest Without Crashing — from wired.com by Max Levy
Each copter doesn’t just track where the others are. It constantly predicts where they’ll go.
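The prediction idea in the subtitle above can be sketched in a few lines: each drone extrapolates its neighbors' current velocities to estimate the closest approach over a short horizon, rather than reacting only to where the others are right now. This is an assumption-laden simplification (constant-velocity extrapolation in 2D), not the planner the researchers actually built.

```python
# Toy illustration of trajectory prediction for collision avoidance.

def predict(position, velocity, t):
    """Constant-velocity extrapolation of a 2D position t seconds ahead."""
    return (position[0] + velocity[0] * t, position[1] + velocity[1] * t)

def min_predicted_gap(me, neighbor, horizon=2.0, steps=20):
    """Smallest predicted distance between two drones over the horizon."""
    gaps = []
    for i in range(steps + 1):
        t = horizon * i / steps
        mx, my = predict(me["pos"], me["vel"], t)
        nx, ny = predict(neighbor["pos"], neighbor["vel"], t)
        gaps.append(((mx - nx) ** 2 + (my - ny) ** 2) ** 0.5)
    return min(gaps)

# Two drones on a head-on course: about 4 m apart now, dangerously close soon.
a = {"pos": (0.0, 0.0), "vel": (1.0, 0.0)}
b = {"pos": (4.0, 0.2), "vel": (-1.0, 0.0)}
print(round(min_predicted_gap(a, b), 2))  # → 0.2
```

A purely reactive drone would see a comfortable 4-meter gap here; the predictive one sees a 0.2-meter closest approach coming within two seconds and can re-plan before it happens.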

From DSC:
I’m not too crazy about this drone swarm…in fact, the more I think about it, the more alarming and nerve-racking I find it. It doesn’t take much imagination to picture what the militaries of the world are already doing with this kind of thing. And our son is now in the Marines. So forgive me if I’m a bit biased here…but I can’t help but wonder what the role/impact of foot soldiers will be in the next war. I hope we don’t have one.

Anyway, just because we can…

 

The Future of Social Media: Re-Humanisation and Regulation — by Gerd Leonhard

How could social media become ‘human’ again? How can we stop the disinformation, dehumanisation and dataism that has resulted from social media’s algorithmic obsessions? I foresee that the EXTERNALITIES, i.e. the consequences of unmitigated growth of exponential digital technologies, will become just as big as the consequences of climate change. In fact, today, the social media industry already has quite a few parallels to the oil, gas and coal business: while private companies make huge profits from extracting the ‘oil’ (i.e. user data), the external damage is left to society and governments to fix. This needs to change! In this keynote I make some precise suggestions as to how that could happen.

Some snapshots/excerpts:

The future of social media -- a video by Gerd Leonhard in the summer of 2021

From DSC:
Gerd brings up some solid points here. His presentation and perspectives are not only worth checking out, but they’re worth some time for us to seriously reflect on what he’s saying.

What kind of future do we want?

And for you professors, teachers, instructional designers, trainers, and presenters out there, check out *how* he delivers the content. It’s well done and very engaging.


 

From DSC:
Again, as you can see from the items below…there are various pluses and minuses regarding the use of Artificial Intelligence (AI). Some of the items below are neither positive nor negative, but I found them interesting nonetheless.


How Amazon is tackling the A.I. talent crunch — from fortune.com by Jonathan Vanian

Excerpt:

“One way Amazon has adapted to the tight labor market is to require potential new programming hires to take classes in machine learning, said Bratin Saha, a vice president and general manager of machine learning services at Amazon. The company’s executives believe they can teach these developers machine learning basics over a few weeks so that they can work on more cutting-edge projects after they’re hired.”

“These are not formal college courses, and Saha said the recruits aren’t graded like they would be in school. Instead, the courses are intended to give new developers a foundation in machine learning and statistics so they can understand the theoretical underpinnings.”

Machine Learning Can Predict Rapid Kidney Function Decline — from sicklecellanemianews.com by Steve Bryson, PhD; with thanks to Sam DeBrule for this resource

Excerpt:

Machine learning tools can identify sickle cell disease (SCD) patients at high risk of progressive kidney disease as early as six months in advance, a study shows. The study, “Using machine learning to predict rapid decline of kidney function in sickle cell anemia,” was published in the journal eJHaem.

NYPD’s Sprawling Facial Recognition System Now Has More Than 15,000 Cameras — from vice.com by Todd Feathers; with thanks to Sam DeBrule for this resource
The massive camera network is concentrated in predominantly Black and brown neighborhoods, according to a new crowdsourced report.

Excerpt:

The New York City Police Department has built a sprawling facial recognition network that may include more than 15,000 surveillance cameras in Manhattan, Brooklyn, and the Bronx, according to a massive crowdsourced investigation by Amnesty International.

“This sprawling network of cameras can be used by police for invasive facial recognition and risk turning New York into an Orwellian surveillance city,” Matt Mahmoudi, an artificial intelligence and human rights researcher at Amnesty, wrote in the group’s report. “You are never anonymous. Whether you’re attending a protest, walking to a particular neighbourhood, or even just grocery shopping—your face can be tracked by facial recognition technology using imagery from thousands of camera points across New York.”

Related to that article is this one:

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras — from wired.com by Sidney Fussell
Video from the cameras is often used in facial-recognition searches. A report finds they are most common in neighborhoods with large nonwhite populations.

Excerpt:

A NEW VIDEO from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city’s residents, putting them at risk for misidentification. The group says it is the first to map the locations of that many cameras in the city.

Don’t End Up on This Artificial Intelligence Hall of Shame — from wired.com by Tom Simonite
A list of incidents that caused, or nearly caused, harm aims to prompt developers to think more carefully about the tech they create.

Excerpt:

The AI Incident Database is hosted by Partnership on AI, a nonprofit founded by large tech companies to research the downsides of the technology. The roll of dishonor was started by Sean McGregor, who works as a machine learning engineer at voice processor startup Syntiant. He says it’s needed because AI allows machines to intervene more directly in people’s lives, but the culture of software engineering does not encourage safety.

 

Microsoft President Warns of Orwell’s 1984 ‘Coming to Pass’ in 2024 — from interestingengineering.com by Chris Young
Microsoft’s Brad Smith warned we may be caught up in a losing race with artificial intelligence.

Excerpt (emphasis DSC):

The surveillance-state dystopia portrayed in George Orwell’s 1984 could “come to pass in 2024” if governments don’t do enough to protect the public against artificial intelligence (AI), Microsoft president Brad Smith warned in an interview for the BBC’s investigative documentary series Panorama.

During the interview, Smith warned of China’s increasing AI prowess and the fact that we may be caught up in a losing race with the technology itself.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith stated.

From DSC:
This is a major heads-up to all those in the legal/legislative realm — especially the American Bar Association (ABA) and the bar associations across the country! The ABA needs to realize they have to up their game and get with the incredibly fast pace of the twenty-first century. If that doesn’t occur, we and future generations will pay the price. Two thoughts come to mind regarding the ABA and the law schools out there:

Step 1: Allow 100% online-based JD programs all the time, from here on out.

Step 2: Encourage massive new program development within all law schools to help future lawyers, judges, legislative reps, & others build up more expertise in emerging technologies & their ramifications.

Google’s plan to make search more sentient — from vox.com by Rebecca Heilweil
Google announces new search features every year, but this time feels different.

Excerpt:

At the keynote speech of its I/O developer conference on Tuesday, Google revealed a suite of ways the company is moving forward with artificial intelligence. These advancements show Google increasingly trying to build AI-powered tools that seem more sentient and that are better at perceiving how humans actually communicate and think. They seem powerful, too.

Two of the biggest AI announcements from Google involve natural language processing and search.

Google also revealed a number of AI-powered improvements to its Maps platform that are designed to yield more helpful results and directions.

Google’s plans to bring AI to education make its dominance in classrooms more alarming — from fastcompany.com by Ben Williamson
The tech giant has expressed an ambition to transform education with artificial intelligence, raising fresh ethical questions.

Struggling to Get a Job? Artificial Intelligence Could Be the Reason Why — from newsweek.com by Lydia Veljanovski; with thanks to Sam DeBrule for the resource

Excerpt:

Except that isn’t always the case. In many instances, instead of your application being tossed aside by an HR professional, it is actually artificial intelligence that is the barrier to entry. While this isn’t a problem in itself—AI can reduce workload by rapidly filtering applicants—the issue is that within these systems lies the possibility of bias.

It is illegal in the U.S. for employers to discriminate against a job applicant because of their race, color, sex, religion, disability, national origin, age (40 or older) or genetic information. However, these AI hiring tools are often inadvertently doing just that, and there are no federal laws in the U.S. to stop this from happening.
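One concrete way auditors test a screening tool for the kind of disparate impact described above is the "four-fifths rule" from the EEOC's Uniform Guidelines: a selection rate for any group below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal sketch, using invented audit counts:

```python
def selection_rates(outcomes):
    """Selection rate per group from {group: (selected, applicants)} counts."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Invented counts: (candidates advanced by the screener, total applicants).
audit = {"group_a": (60, 100), "group_b": (30, 100)}
print(four_fifths_check(audit))  # → {'group_a': False, 'group_b': True}
```

Here group_b advances at half the rate of group_a (30% vs. 60%), well below the four-fifths threshold, so an auditor would flag the tool for closer scrutiny. The rule is a screening heuristic, not a legal verdict, but it is the sort of check AI hiring vendors can run and currently are not required to.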

These Indian edtech companies are shaping the future of AI & robotics — from analyticsinsight.net by Apoorva Komarraju, May 25, 2021

Excerpt:

As edtech companies have taken a lead by digitizing education for the modern era, they have taken the stance to set up Atal Tinkering Labs in schools along with other services necessary for the budding ‘kidpreneurs’. With the availability of these services, students can experience 21st-century technologies like IoT, 3D printing, AI, and Robotics.

Researchers develop machine-learning model that accurately predicts diabetes, study says — from ctvnews.ca by Christy Somos

Excerpt:

TORONTO — Canadian researchers have developed a machine-learning model that accurately predicts diabetes in a population using routinely collected health data, a new study says.

The study, published in the JAMA Network Open journal, tested new machine-learning technology on routinely collected health data that examined the entire population of Ontario. The study was run by the ICES not-for-profit data research institute.

Using linked administrative health data from Ontario from 2006 to 2016, researchers created a validated algorithm by training the model on information taken from nearly 1.7 million patients.
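The underlying approach, training a classifier on routinely collected tabular features to output a risk score, can be sketched in a few lines. This is a minimal logistic-regression illustration on invented data, not the model, features, or data the ICES researchers used.

```python
import math
import random

def train_logistic(rows, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression risk model with plain stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted risk in (0, 1)
            err = p - y                           # gradient of the log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict_risk(w, b, x):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: two features standing in for routinely collected
# measurements (say, age and a lab value), scaled to [0, 1].
random.seed(0)
rows, labels = [], []
for _ in range(500):
    age, lab = random.random(), random.random()
    rows.append([age, lab])
    labels.append(1 if age + lab > 1.0 else 0)    # synthetic "high risk" rule

w, b = train_logistic(rows, labels)
high = predict_risk(w, b, [0.9, 0.9])
low = predict_risk(w, b, [0.1, 0.1])
print(high > 0.5, low < 0.5)  # → True True
```

The real study's contribution lies in the scale (1.7 million linked patient records) and the external validation, not in the classifier itself; the point of the sketch is just that "training the model" means fitting weights that map features to a calibrated risk probability.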

Project Guideline: Enabling Those with Low Vision to Run Independently — from ai.googleblog.com by Xuan Yang; with thanks to Sam DeBrule for the resource

Excerpt:

For the 285 million people around the world living with blindness or low vision, exercising independently can be challenging. Earlier this year, we announced Project Guideline, an early-stage research project, developed in partnership with Guiding Eyes for the Blind, that uses machine learning to guide runners through a variety of environments that have been marked with a painted line. Using only a phone running Guideline technology and a pair of headphones, Guiding Eyes for the Blind CEO Thomas Panek was able to run independently for the first time in decades and complete an unassisted 5K in New York City’s Central Park.

Deepfake Maps Could Really Mess With Your Sense of the World — from wired.com by Will Knight
Researchers applied AI techniques to make portions of Seattle look more like Beijing. Such imagery could mislead governments or spread misinformation online.

In a paper published last month, researchers altered satellite images to show buildings in Seattle where there are none.

 
© 2024 | Daniel Christian