You are not a parrot — from nymag.com by Elizabeth Weil

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact is important because it shows that jobs are not going to vanish; they will change, and most of the jobs of 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models, making it possible to skew the decisions the AI makes in ways that are hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
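
To make the risk concrete, here is a minimal sketch of "label flipping," one of the simplest forms of data poisoning. This is a toy illustration, not the attacks the article describes: the synthetic dataset, logistic-regression model, and 5% poison rate are all assumed for demonstration.

```python
# A minimal, hypothetical sketch of label-flipping data poisoning.
# Dataset, model, and poison rate are illustrative choices only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker silently flips the labels on a small fraction of training rows.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.05 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print(f"clean: {clean.score(X_test, y_test):.3f}  "
      f"poisoned: {poisoned.score(X_test, y_test):.3f}")
```

Crude label flipping mostly just degrades overall accuracy; the more worrying attacks are subtler, steering the model's behavior on specific inputs while leaving aggregate accuracy nearly untouched, which is what makes them hard to track.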

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools depends on what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or do we develop new technologies simply because we can, without giving a rat’s tail about the ramifications?

 

ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation — from wsj.com by Berber Jin and Miles Kruppa
Tender offer at that valuation would make OpenAI one of the most valuable U.S. startups

Here’s how Microsoft could use ChatGPT — from The Algorithm by Melissa Heikkilä

Excerpt (emphasis DSC):

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.
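
As a purely illustrative sketch of what "answering emails" with a ChatGPT-style model could look like, here is a call to OpenAI's public chat API. Nothing here reflects Microsoft's actual integration plans; the model name, prompt, and draft_reply helper are assumptions.

```python
# Illustrative only: a ChatGPT-style email draft via OpenAI's public chat
# API (openai package < 1.0 style). Assumes an OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_reply(incoming_email: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Draft a brief, polite email reply."},
            {"role": "user", "content": incoming_email},
        ],
    )
    return response.choices[0].message["content"]

print(draft_reply("Hi - can we move Thursday's budget review to 3pm? - Sam"))
```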

And speaking of Microsoft and AI, also see:

I have maintained for several years, including in a book, ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak, but it is interesting to ask who the winners are likely to be.

Donald Clark

The Expanding Dark Forest and Generative AI — from maggieappleton.com by Maggie Appleton
Proving you’re a human on a web flooded with generative AI content

Assumed audience:

People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.

That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.

 

DeepMind CEO Demis Hassabis Urges Caution on AI — from time.com by Billy Perrigo

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.

Demis Hassabis 

Excerpt (emphasis DSC):

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.

AI-assisted plagiarism? ChatGPT bot says it has an answer for that — from theguardian.com by Alex Hern
Silicon Valley firm insists its new text generator, which writes human-sounding essays, can overcome fears over cheating

Excerpt:

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.
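
OpenAI has not published how such a watermark would work. One proposal from the research literature, a "green list" token bias in the spirit of Kirchenbauer et al. (2023), can be sketched in a few lines; the toy vocabulary and the random stand-in for a language model below are assumptions, not OpenAI's method.

```python
# Toy sketch of a *proposed* LLM watermark ("green list" bias), not
# OpenAI's actual scheme. The vocabulary and random "model" are stand-ins.
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed a RNG from the previous token so generator and detector derive
    # the same vocabulary split without sharing the model itself.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n: int, watermark: bool) -> list:
    # Stand-in "language model": picks tokens uniformly; a watermarked
    # generator restricts its choices to the current green list.
    rng = random.Random(42)
    tokens = [rng.choice(VOCAB)]
    for _ in range(n):
        pool = sorted(green_list(tokens[-1])) if watermark else VOCAB
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list) -> float:
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

print(green_fraction(generate(500, watermark=True)))   # ~1.0: watermarked
print(green_fraction(generate(500, watermark=False)))  # ~0.5: unmarked
```

Because an unmarked writer lands in any pseudorandom green list only about half the time, a detector needs just the hashing scheme, not the model, to flag text whose green fraction is improbably high.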

Schools Shouldn’t Ban Access to ChatGPT — from time.com by Joanne Lipman and Rebecca Distler

Excerpt (emphasis DSC):

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz
Are fears about generative AI warranted?

And see Joshua Kim’s A Friendly Attempt to Balance Steve Mintz’s Piece on Higher Ed Hard Truths, also at insidehighered.com, comparing the health care and higher ed systems.

 



What Leaders Should Know About Emerging Technologies — from forbes.com by Benjamin Laker

Excerpt (emphasis DSC):

The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”

I asked ChatGPT to write my cover letters. 2 hiring managers said they would have given me an interview but the letters lacked personality. — from businessinsider.com by Beatrice Nolan

Key points:

  • An updated version of the AI chatbot ChatGPT was recently released to the public.
  • I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
  • The managers said they would’ve given me a call but that the letters lacked personality.




 

From DSC:
Check out the items below. As with most technologies, there will likely be pluses and minuses regarding the use of AI in digital video, communications, arts, and music.




 

AI bot ChatGPT stuns academics with essay-writing skills and usability — from theguardian.com by Alex Hern
Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Excerpt:

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

 




AI and the future of undergraduate writing — from chronicle.com by Beth McMurtrie

Excerpts:

Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?

And what does it mean for professors if the answer to those questions is “yes”?

Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.

“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”

 

 

 

ChatGPT Could Be AI’s iPhone Moment — from bloomberg.com by Vlad Savov; with thanks to Dany DeGrave for his Tweet on this

Excerpt:

The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.

 


 

Police are rolling out new tech without knowing their effects on people — from The Algorithm by Melissa Heikkilä

Excerpt:

I got lucky—my encounter was with a drone in virtual reality as part of an experiment by a team from University College London and the London School of Economics. They’re studying how people react when meeting police drones, and whether they come away feeling more or less trusting of the police.

It seems obvious that encounters with police drones might not be pleasant. But police departments are adopting these sorts of technologies without even trying to find out.

“Nobody is even asking the question: Is this technology going to do more harm than good?” says Aziz Huq, a law professor at the University of Chicago, who is not involved in the research.

 

How AI will change Education: Part I | Transcend Newsletter #59 — from transcend.substack.com by Alberto Arenaza; with thanks to GSV’s Big 10 for this resource

Excerpt:

You’ve likely been reading for the last few minutes my arguments for why AI is going to change education. You may agree with some points, disagree with others…

Only, those were not my words.

An AI has written every single word in this essay up until here.

The only thing I wrote myself was the first sentence: Artificial Intelligence is going to revolutionize education. The images, too; everything was generated by AI.

 

7 Technologies that are Changing Healthcare — from digitalsalutem.com by João Bocas

In this article we are going to talk about the seven technologies that are changing healthcare:

  1. Artificial Intelligence
  2. Blockchain
  3. Virtual Reality
  4. Robots
  5. Mapping technologies
  6. Big Data
  7. Neurotechnology

This startup 3D prints tiny homes from recyclable plastics — from interestingengineering.com by Nergis Firtina; with thanks to Laura Goodrich for this resource

A 3D printed house by Azure

Satellite Billboards Are a Dystopian Future We Don’t Need — from gizmodo.com by George Dvorsky; with thanks to Laura Goodrich for this resource
Brightly lit ads in orbit are technologically and economically viable, say Russian scientists. But can we not?

Artist’s conception of a cubesat ad showing the Olympic rings. Image: Shamil Biktimirov/Skoltech

South Korea to Provide Blockchain-based Digital Identities to Citizens by 2024 — from blockchain.news by Annie Li; with thanks to Laura Goodrich for this resource

Excerpt:

South Korea plans to provide its citizens with blockchain-encrypted digital identities on their smartphones in 2024 to facilitate its economic development, Bloomberg reported Monday.

The South Korean government stated that with the expansion of the digital economy, the ID embedded in the smartphone is an indispensable emerging technology to support the development of data.

From DSC:
Interesting to see blockchain show up both in the first item above on healthcare and in this item out of South Korea on digital identities.

The Bruce Willis Deepfake Is Everyone’s Problem — from wired.com by Will Bedingfield; with thanks to Stephen Downes for this resource
There’s a fight brewing over how Hollywood stars can protect their identities. But it’s not just actors who should be paying attention.

Excerpts:

Yet the question of “who owns Bruce Willis,” as Levy put it, isn’t only a concern for the Hollywood star and his representatives. It concerns actors unions across the world, fighting against contracts that exploit their members’ naivety about AI. And, for some experts, it’s a question that implicates everyone, portending a wilder, dystopian future—one in which identities are bought, sold, and seized.

“This is relevant not just to AI contracts [for synthetic performances], but any contract involving rights to one’s likeness and voice,” says Danielle S. Van Lier, assistant general counsel, intellectual property and contracts at SAG-AFTRA. “We have been seeing contracts that now include ‘simulation rights’ to performers’ images, voices, and performances. These contract terms are buried deep in the boilerplate of performance agreements in traditional media.”




 

Threats uncovered: QR code exploits offer personal and business risks — from technative.io by Len Noe

Excerpts:

Cyber attackers have quickly caught on to QR codes as a social vulnerability, and attacks using them as the vector are on the rise.

It’s clear we intuitively trust QR codes, even though this trust is poorly founded. To get a clearer picture of exactly how QR codes could present a threat, I did some digging. Through research, I discovered a variety of ways QR codes can be used maliciously, to steal not only personal information but provide a solid base of information from which to attack an organisation.
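
On the defensive side, one basic mitigation is to validate whatever a QR code decodes to before opening it. The sketch below is a hypothetical example; the allowlist, helper name, and https-only rule are illustrative assumptions, not recommendations from the article.

```python
# A hedged sketch of a basic defense: validate the URL a QR code decodes
# to before opening it. Allowlist and helper name are hypothetical.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"example.com", "pay.example.com"}  # illustrative allowlist

def is_safe_qr_payload(payload: str) -> bool:
    url = urlparse(payload)
    if url.scheme != "https":              # rejects http:, javascript:, tel:, ...
        return False
    return url.hostname in TRUSTED_HOSTS   # rejects look-alike domains

print(is_safe_qr_payload("https://pay.example.com/invoice/123"))  # True
print(is_safe_qr_payload("http://examp1e.com/login"))             # False
```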

 

This Uncensored AI Art Tool Can Generate Fantasies—and Nightmares — from wired.com by Will Knight
Open source project Stable Diffusion allows anyone to conjure images with algorithms, but some fear it will be used to create unethical horrors.

Excerpt:

Image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding and removing random noise to an image.
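
That "adding and removing random noise" loop can be made concrete. Below is a minimal sketch of the forward (noising) step from the DDPM formulation that models like Stable Diffusion build on; the linear schedule and 1,000 steps are common illustrative defaults, not this tool's actual settings.

```python
# Minimal sketch of the forward "noising" step used to train diffusion
# models (DDPM formulation). Schedule values are illustrative defaults.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # per-step noise schedule
alpha_bars = np.cumprod(1.0 - betas)      # cumulative signal retention

def add_noise(x0: np.ndarray, t: int, rng=np.random.default_rng(0)):
    """Return a noised version of image x0 at timestep t, plus the noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

# Training teaches a network to predict `eps` from (xt, t, text embedding);
# generation then runs the chain in reverse, denoising pure noise into an
# image that matches the prompt.
x0 = np.zeros((64, 64, 3))                # stand-in "image"
xt, eps = add_noise(x0, t=500)
```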

Also relevant/see:

There’s a text-to-image AI art app for Mac now—and it will change everything — from fastcompany.com by Jesus Diaz
Diffusion Bee harnesses the power of the open source text-to-image AI Stable Diffusion, turning it into a one-click Mac App. Brace yourself for a new creativity Big Bang.



 

Radar Trends to Watch: September 2022 Developments in AI, Privacy, Biology, and More — from oreilly.com by Mike Loukides

Excerpt:

It’s hardly news to talk about the AI developments of the last month. DALL-E is increasingly popular, and being used in production. Google has built a robot that incorporates a large language model so that it can respond to verbal requests. And we’ve seen a plausible argument that natural language models can be made to reflect human values, without raising the question of consciousness or sentience.

For the first time in a long time we’re talking about the Internet of Things. We’ve got a lot of robots, and Chicago is attempting to make a “smart city” that doesn’t facilitate surveillance. We’re also seeing a lot in biology. Can we make a real neural network from cultured neurons? The big question for biologists is how long it will take for any of their research to make it out of the lab.

 

You just hired a deepfake. Get ready for the rise of imposter employees. — from protocol.com by Mike Elgan
New technology — plus the pandemic remote work trend — is helping fraudsters use someone else’s identity to get a job.

Excerpt:

Companies have been increasingly complaining to the FBI about prospective employees using real-time deepfake video and deepfake audio for remote interviews, along with personally identifiable information (PII), to land jobs at American companies.

One place they’re likely getting the PII is through posting fake job openings, which enables them to harvest job candidate information, resumes and more, according to the FBI.

The main drivers appear to be money, espionage, access to company systems and unearned career advancement.

 

Forget the Jetsons. Transportation of the future will look more like ‘Westworld’ — from fastcompany.com
Futuristic public transportation projects are already in the works.

Excerpt:

THE NEXT GENERATION
The way we commute has already started to change. With next generation transportation projects, public transportation is becoming more efficient by employing self-driving buses and trains and installing automatic card-ticketing systems.

From DSC:
But we need to be careful here. As we’ve seen before, not everything is so rosy with emerging technologies. See this next item, for example:

Cruise’s Robot Car Outages Are Jamming Up San Francisco — from wired.com by Aarian Marshall
In a series of incidents, the GM subsidiary lost contact with its autonomous vehicles, leaving them frozen in traffic and trapping human drivers.

“A letter sent anonymously by a Cruise employee to the California Public Utilities Commission that month alleged that the company loses contact with its driverless vehicles ‘with regularity,’ blocking traffic and potentially hindering emergency vehicles.”

 

The Metaverse in 2040 — from pewresearch.org by Janna Anderson and Lee Rainie
Hype? Hope? Hell? Maybe all three. Experts are split about the likely evolution of a truly immersive ‘metaverse.’ They expect that augmented- and mixed-reality enhancements will become more useful in people’s daily lives. Many worry that current online problems may be magnified if Web3 development is led by those who built today’s dominant web platforms.

 

The metaverse will, at its core, be a collection of new and extended technologies. It is easy to imagine that both the best and the worst aspects of our online lives will be extended by being able to tap into a more-complete immersive experience, by being inside a digital space instead of looking at one from the outside.

Laurence Lannom, vice president at the Corporation for National Research Initiatives

“Virtual, augmented and mixed reality are the gateway to phenomenal applications in medicine, education, manufacturing, retail, workforce training and more, and it is the gateway to deeply social and immersive interactions – the metaverse.”

Elizabeth Hyman, CEO for the XR Association

 


 

The table of contents for Pew Research’s “The Metaverse in 2040” set of articles at pewresearch.org (June 30, 2022).

 


 
 