Stanford’s 2023 AI Index Report — from aiindex.stanford.edu
Also relevant/see:
FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance
Excerpt:
The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.
From DSC:
This doesn’t surprise me. But it’s yet another example of the opacity surrounding technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.
You are not a parrot — from nymag.com by Elizabeth Weil and Emily M. Bender
Excerpts:
A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”
…
Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.
Police are rolling out new tech without knowing their effects on people — from The Algorithm by Melissa Heikkilä
Excerpt:
I got lucky—my encounter was with a drone in virtual reality as part of an experiment by a team from University College London and the London School of Economics. They’re studying how people react when meeting police drones, and whether they come away feeling more or less trusting of the police.
It seems obvious that encounters with police drones might not be pleasant. But police departments are adopting these sorts of technologies without even trying to find out.
“Nobody is even asking the question: Is this technology going to do more harm than good?” says Aziz Huq, a law professor at the University of Chicago, who is not involved in the research.
The Justice Gap: The Unmet Civil Legal Needs of Low-income Americans — from the Legal Services Corporation
Legal Services Corporation’s 2022 Justice Gap Report provides a comprehensive look at the differences between the civil legal needs of low-income Americans and the resources available to meet those needs. LSC’s study found that low-income Americans do not get the help they need for 92% of their civil legal problems, even though 74% of low-income households face at least one civil legal issue in a single year.
The consequences that result from a lack of appropriate counsel can be life-altering – low-income Americans facing civil legal problems can lose their homes, children and healthcare, among other things. Help can be hard to access, so LSC is working to bridge this “justice gap” by providing pro bono civil legal aid for those in need. Find out more about LSC’s work to ensure equal justice for all by tuning in to the rest of the Justice Gap video series.
For more information on the Justice Gap, visit https://justicegap.lsc.gov/.
Also relevant/see:
“Unleash all this creativity”: Google AI’s breathtaking potential — from axios.com by Jennifer Kingson
Excerpt:
Google’s research arm on Wednesday showed off a whiz-bang assortment of artificial intelligence (AI) projects it’s incubating, aimed at everything from mitigating climate change to helping novelists craft prose.
Why it matters: AI has breathtaking potential to improve and enrich our lives — and comes with hugely worrisome risks of misuse, intrusion and malfeasance, if not developed and deployed responsibly.
Driving the news: The dozen-or-so AI projects that Google Research unfurled at a Manhattan media event are in various stages of development, with goals ranging from societal improvement (such as better health diagnoses) to pure creativity and fun (text-to-image generation that can help you build a 3D image of a skirt-clad monster made of marzipan).
…
The “1,000 Languages Initiative”: Google is building an AI model that will work with the world’s 1,000 most-spoken languages.
- AI “can have immense social benefits” and “unleash all this creativity,” said Marian Croak, head of Google Research’s center of expertise on responsible AI.
- “But because it has such a broad impact on people, the risk involved can also be very huge. And if we don’t get that right … it can be very destructive.”
…
And as Axios’ Scott Rosenberg has written, society is only just beginning to grapple with the legal and ethical questions raised by AI’s new capacity to generate text and images.
“The Lord detests dishonest scales, but accurate weights find favor with him.” — Proverbs 11:1
From DSC:
I thought about this verse the other day as I opened up a brand-new box of cereal. The box made it look like I was getting a lot of cereal, like a decent value for the price. But when I opened it up, it was about half full (I realize some of this occurs as the contents settle, but come on!). In fairness, the amount of cereal is printed on the box, but the manufacturer likely kept the size of the box the same while decreasing the amount they put within it. They kept the price the same, but changed the quantity sold.
This “shrinkflation” seems to be happening more these days, as companies don’t want to change their prices, so they change the amounts of their products that you get instead.
I hope that we can all build and offer solid products and services — while putting some serious quality into those things. Let’s make those things and offer those services as if we were making them for ourselves and/or our families. Let’s use “accurate weights.” And while we’re trying to do the right things, let’s aim to be in caring relationships with others.
The 5 Biggest Artificial Intelligence (AI) Trends In 2023 — from forbes.com by Bernard Marr
Excerpt:
Today, the technology most commonly used to achieve AI is machine learning – advanced software algorithms designed to carry out one specific task, such as answering questions, translating languages or navigating a journey – and become increasingly good at it as they are exposed to more and more data.
Worldwide, spending by governments and business on AI technology will top $500 billion in 2023, according to IDC research. But how will it be used, and what impact will it have? Here, I outline what I believe will be the most important trends around the use of AI in business and society over the next 12 months.
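The excerpt’s definition of machine learning — software that carries out one specific task and gets better at it as it sees more data — can be made concrete with a toy sketch. Everything below (the target function, the least-squares fit, the sample sizes) is a hypothetical illustration of that idea, not anything from the Forbes article:

```python
import random

random.seed(0)

# Hypothetical task: learn the rule y = 2x + 1 from noisy observations.
# The "learning algorithm" here is one-variable least squares, refit each
# time the training set grows.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx  # (slope, intercept)

def avg_error(model, xs, ys):
    slope, intercept = model
    return sum(abs(slope * x + intercept - y) for x, y in zip(xs, ys)) / len(xs)

# Noise-free held-out points to measure how well the learned rule generalizes.
test_xs = [0.0, 5.0, 10.0]
test_ys = [2 * x + 1 for x in test_xs]

xs, ys, errors = [], [], []
for n in (10, 100, 1000):
    while len(xs) < n:
        x = random.uniform(0, 10)
        xs.append(x)
        ys.append(2 * x + 1 + random.gauss(0, 1))  # noisy training sample
    errors.append(avg_error(fit_line(xs, ys), test_xs, test_ys))

print(errors)  # the error typically shrinks as the training set grows
```

The point of the sketch is the last line: the same narrow algorithm, exposed to more data, tends to approximate its one task more accurately, which is the dynamic the excerpt describes.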
Also relevant/see:
Futures Literacy: shaping your present by reimagining futures — from futurist.com by Nikolas Badminton and Loes Damhof
We must end ‘productivity paranoia’ on working from home says Microsoft — from inavateonthenet.net
Excerpt:
As part of a survey on hybrid working patterns of more than 20,000 people in 11 countries, Microsoft has called for an end to ‘productivity paranoia’ with 85% of business leaders still saying they find it difficult to have confidence in staff productivity when remote working.
…
“Closing the feedback loop is key to retaining talent. Employees who feel their companies use employee feedback to drive change are more satisfied (90% vs. 69%) and engaged (89% vs. 73%) compared to those who believe their companies don’t drive change. And the employees who don’t think their companies drive change based on feedback? They’re more than twice as likely to consider leaving in the next year (16% vs. 7%) compared to those who do. And it’s not a one-way street. To build trust and participation in feedback systems, leaders should regularly share what they’re hearing, how they’re responding, and why.”
From DSC:
It seems to me that trust and motivation are highly involved here. Trust in one’s employees to do their jobs. And employees who aren’t producing and have low motivation levels should consider changing jobs/industries to find something that’s much more intrinsically motivating to them. Find a cause/organization that’s worth working for.