Generative A.I. + Law – Background, Applications and Use Cases Including GPT-4 Passes the Bar Exam – Speaker Deck — from speakerdeck.com by Professor Daniel Martin Katz

 

 

 


Also relevant/see:

AI-Powered Virtual Legal Assistants Transform Client Services — from abovethelaw.com by Olga V. Mack
They can respond more succinctly than ever to answer client questions, triage incoming requests, provide details, and trigger automated workflows that ensure lawyers handle legal issues efficiently and effectively.

Artificial Intelligence in Law: How AI Can Reshape the Legal Industry — from jdsupra.com

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this evolving technology can provide. The person in this position is responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announced the updated GH200 Grace Hopper Superchip and NVIDIA AI Workbench, along with updates to NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • Nvidia announced a new chip designed to run artificial intelligence models on Tuesday.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
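To make the flow in that excerpt concrete, here is a minimal, hypothetical sketch of the parse-retrieve-summarize loop: a customer's plain-language question is matched against a tiny knowledge base by keyword overlap, and the best entry is served back. A real gen AI system would use an LLM with embeddings for both retrieval and summarization; the knowledge-base entries and function names here are invented for illustration.

```python
# Toy illustration (not any vendor's actual pipeline): (1) parse a
# natural-language query, (2) retrieve the most relevant knowledge-base
# entry, (3) return it as the agent-facing answer. Keyword overlap stands
# in for the embedding search an LLM-based system would use.
import re
from collections import Counter

KNOWLEDGE_BASE = {
    "refund_policy": "Refunds are issued within 14 days of purchase with a receipt.",
    "shipping_times": "Standard shipping takes 5-7 business days; express takes 2.",
    "password_reset": "Use the 'Forgot password' link on the login page to reset.",
}

def tokenize(text: str) -> Counter:
    """Lowercase a string and split it into a bag of word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def answer_query(query: str) -> str:
    """Return the knowledge-base entry whose words best overlap the query."""
    q_tokens = tokenize(query)
    def overlap(entry: str) -> int:
        return sum((tokenize(entry) & q_tokens).values())
    best_key = max(KNOWLEDGE_BASE, key=lambda k: overlap(KNOWLEDGE_BASE[k]))
    return KNOWLEDGE_BASE[best_key]

print(answer_query("How long does standard shipping take?"))
```

The point of the sketch is the shape of the workflow, not the matching method: the agent never searches manually, because the system maps the question to the answer directly.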

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top few percent of users.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 


Gen-AI Movie Trailer For Sci Fi Epic “Genesis” — from forbes.com by Charlie Fink

The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.



Google’s water use is soaring. AI is only going to make it worse. — from businessinsider.com by Hugh Langley

Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.

The internet giant said it consumed 5.6 billion gallons of water in 2022, the equivalent of 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase on the amount Google reported the year prior.


We think prompt engineering (learning to converse with an AI) is overrated. — from the Neuron

We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.

From DSC:
Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, and develop.
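As one hypothetical sketch of what "preloading context" could look like in practice: the application assembles the system message (role, user profile, relevant documents) behind the scenes, so the user only ever types a plain question. The message format below mirrors the chat-completion style common to LLM APIs; the profile fields and helper name are invented, and the actual model call is left out.

```python
# Minimal sketch of context preloading: no prompt engineering by the user.
# The app builds the system prompt from stored context; the user's raw
# question is passed through untouched.
def build_messages(user_question: str, user_profile: dict, documents: list[str]) -> list[dict]:
    """Assemble a chat payload with context preloaded around a plain question."""
    context = "\n".join(f"- {doc}" for doc in documents)
    system_prompt = (
        f"You are a helpful assistant for {user_profile['name']}, "
        f"whose skill level is {user_profile['level']}.\n"
        f"Relevant documents:\n{context}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},  # the user types only this
    ]

messages = build_messages(
    "How do I share my spreadsheet?",
    {"name": "Dana", "level": "beginner"},
    ["Sharing: File > Share > Copy link", "Permissions: viewer, commenter, editor"],
)
```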


Radar Trends to Watch: August 2023 — from oreilly.com by Mike Loukides
Developments in Programming, Web, Security, and More

Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?


Try out Google ‘TextFX’ and its 10 creative AI tools for rappers, writers — from 9to5google.com by Abner Li; via Barsee – AI Valley 

Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.

Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PaLM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.

 

Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.


SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs


Per a recent Brainyacts posting:

The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.

The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.

The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.


How ChatGPT Code Interpreter (And Four Other AI Initiatives) Might Revolutionize Education — from edtechinsiders.substack.com by Phuong Do, Alex Sarlin, and Sarah Morin
And more on Meta’s Llama, education LLMs, the Supreme Court affirmative action ruling, and Byju’s continued unraveling

Let’s put it all together for emphasis. With Code Interpreter by ChatGPT, you can:

  1. Upload any file
  2. Tell ChatGPT what you want to do with it
  3. Receive your instructions translated into Python
  4. Execute the code
  5. Transform the output back into readable language (or visuals, charts, graphs, tables, etc.)
  6. Provide the results (and the underlying Python code)
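The steps above can be sketched as follows. This is illustrative only: the sort of Python Code Interpreter might generate behind the scenes for steps 3–5, given an uploaded sales CSV and the instruction "summarize revenue by region." The column names and data are hypothetical, and the CSV is built in memory so the sketch runs on its own.

```python
# Steps 3-4: the user's plain-English request becomes Python, which is
# then executed against the uploaded file.
import csv
import io
from collections import defaultdict

# Stands in for the file the user uploaded in step 1 (hypothetical data).
uploaded_csv = io.StringIO(
    "region,revenue\nEast,1200\nWest,900\nEast,300\nNorth,450\n"
)

totals: dict[str, float] = defaultdict(float)
for row in csv.DictReader(uploaded_csv):
    totals[row["region"]] += float(row["revenue"])

# Step 5: transform the raw output back into readable language.
for region, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{region}: ${total:,.0f}")
```

In step 6, Code Interpreter would show the user both this summary and the generated code itself.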


AI Tools and Links — from Wally Boston

It’s become difficult to track AI tools as quickly as they are released, so I’ve decided to create a running list of tools as I find out about them. The list is in alphabetical order, even though I’ve seen others use classification systems. Although it’s not good form in blogging land to update posts, I’ll change the date every time I update this list. Please feel free to send me your comments about any of these, as well as AI tools you use that I don’t have on the list. I’ll post your comments next to a tool when appropriate. Thanks.


Meet Claude — A helpful new AI assistant — from wondertools.substack.com by Jeremy Caplan
How to make the most of ChatGPT’s new alternative

Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.

Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.

Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.


The Next Frontier For Large Language Models Is Biology — from forbes.com by Rob Toews

Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.

In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.



Seven AI companies agree to safeguards in the US — from bbc.com by Shiona McCallum; via Tom Barrett

Seven leading companies in artificial intelligence have committed to managing risks posed by the tech, the White House has said.

This will include testing the security of AI, and making the results of those tests public.

Representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI joined US President Joe Biden to make the announcement.

 
 

YouTube tests AI-generated quizzes on educational videos — from techcrunch.com by Lauren Forristal


YouTube is experimenting with AI-generated quizzes on its mobile app for iOS and Android devices, which are designed to help viewers learn more about a subject featured in an educational video. The feature will also help the video-sharing platform get a better understanding of how well each video covers a certain topic.


Incorporating AI in Teaching: Practical Examples for Busy Instructors — from danielstanford.substack.com by Daniel Stanford; with thanks to Derek Bruff on LinkedIn for the resource

Since January 2023, I’ve talked with hundreds of instructors at dozens of institutions about how they might incorporate AI into their teaching. Through these conversations, I’ve noticed a few common issues:

  • Faculty and staff are overwhelmed and burned out. Even those on the cutting edge often feel they’re behind the curve.
  • It’s hard to know where to begin.
  • It can be difficult to find practical examples of AI use that are applicable across a variety of disciplines.

To help address these challenges, I’ve been working on a list of AI-infused learning activities that encourage experimentation in (relatively) small, manageable ways.


September 2023: The Secret Intelligent Beings on Campus — from stefanbauschard.substack.com by Stefan Bauschard
Many of your students this fall will be enhanced by artificial intelligence, even if they don’t look like actual cyborgs. Do you want all of them to be enhanced, or just the highest SES students?


How to report better on artificial intelligence — from cjr.org (Columbia Journalism Review) by Syash Kapoor, Hilke Schellmann, and Ari Sen

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We—an investigative reporter, a data journalist, and a computer scientist—have firsthand experience investigating AI. We’ve seen the tremendous potential these tools can have—but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.


From DSC:
Something I created via Adobe Firefly (Beta version)

 


The 5 reasons L&D is going to embrace ChatGPT — from chieflearningoffice.com by Josh Bersin

Does this mean it will do away with the L&D job? Not at all — these tools give you superhuman powers to find content faster, put it in front of employees in a more useful way and more creatively craft character simulations, assessments, learning in the flow of work and more.

And it’s about time. We really haven’t had a massive innovation in L&D since the early days of the learning experience platform market, so we may be entering the most exciting era in a long time.

Let me give you the five most significant use cases I see. And more will come.


AI and Tech with Scenarios: ID Links 7/11/23 — from christytuckerlearning.com by Christy Tucker

As I read online, I bookmark resources I find interesting and useful. I share these links periodically here on my blog. This post includes links on using tech with scenarios: AI, xAPI, and VR. I’ll also share some other AI tools and links on usability, resume tips for teachers, visual language, and a scenario sample.



It’s only a matter of time before A.I. chatbots are teaching in primary schools — from cnbc.com by Mikaela Cohen

Key Points

  • Microsoft co-founder Bill Gates says generative AI chatbots can teach kids to read in 18 months rather than years.
  • Artificial intelligence is beginning to prove that it can accelerate the impact teachers have on students and help solve a stubborn teacher shortage.
  • Chatbots backed by large language models can help students, from primary education to certification programs, self-guide through voluminous materials and tailor their education to specific learning styles [preferences].

The Rise of AI: New Rules for Super T Professionals and Next Steps for EdLeaders — from gettingsmart.com by Tom Vander Ark

Key Points

  • The rise of artificial intelligence, especially generative AI, boosts productivity in content creation: text, code, images and, increasingly, video.
  • Here are six preliminary conclusions about the nature of work and learning.

The Future Of Education: Embracing AI For Student Success — from forbes.com by Dr. Michael Horowitz

Unfortunately, too often attention is focused on the problems of AI—that it allows students to cheat and can undermine the value of what teachers bring to the learning equation. This viewpoint ignores the immense possibilities that AI can bring to education and across every industry.

The fact is that students have already embraced this new technology, which is neither a new story nor a surprising one in education. Leaders should accept this and understand that people, not robots, must ultimately create the path forward. It is only by deploying resources, training and policies at every level of our institutions that we can begin to realize the vast potential of what AI can offer.


AI Tools in Education: Doing Less While Learning More — from campustechnology.com by Mary Grush
A Q&A with Mark Frydenberg


Why Students & Teachers Should Get Excited about ChatGPT — from ivypanda.com with thanks to Ruth Kinloch for this resource


Excerpt re: Uses of ChatGPT for Teachers

  • Diverse assignments.
  • Individualized approach.
  • Interesting classes.
  • Debates.
  • Critical thinking.
  • Grammar and vocabulary.
  • Homework review.

SAIL: State of Research: AI & Education — from buttondown.email by George Siemens
Information re: current AI and Learning Labs, education updates, and technology


Why ethical AI requires a future-ready and inclusive education system — from weforum.org


A specter is haunting higher education — from aiandacademia.substack.com by Bryan Alexander
Fall semester after the generative AI revolution

In this post I’d like to explore that apocalyptic model. For reasons of space, I’ll leave off analyzing student cheating motivations or questioning the entire edifice of grade-based assessment. I’ll save potential solutions for another post.

Let’s dive into the practical aspects of teaching to see why Mollick and Bogost foresee such a dire semester ahead.


Items re: Code Interpreter

Code Interpreter continues OpenAI’s long tradition of giving terrible names to things, because it might be most useful for those who do not code at all. It essentially allows the most advanced AI available, GPT-4, to upload and download information, and to write and execute programs for you in a persistent workspace. That allows the AI to do all sorts of things it couldn’t do before, and be useful in ways that were impossible with ChatGPT.

.


Legal items


MISC items


 

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]




Also relevant/see:

 

Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides

Excerpt:

Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.

One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.

 

Work Shift: How AI Might Upend Pay — from bloomberg.com by Jo Constantz

Excerpt:

This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.

“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”

Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.


US Supreme Court rejects computer scientist’s lawsuit over AI-generated inventions — from reuters.com by Blake Brittain

Excerpt:

WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.

The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.


Deep learning pioneer Geoffrey Hinton has quit Google — from technologyreview.com by Will Douglas Heaven
Hinton will be speaking at EmTech Digital on Wednesday.

Excerpt:

Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.

According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.

***


What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.

Excerpt:


It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.

From DSC:
Is this type of thing going to provide a learning assistant/agent as well?


A chatbot that asks questions could help you spot when it makes no sense — from technologyreview.com by Melissa Heikkilä
Engaging our critical thinking is one way to stop getting fooled by lying AI.

Excerpt:

AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.

One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.


Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images — from stability.ai



New AI Powered Denoise in PhotoShop — from jeadigitalmedia.org

In the most recent update, Adobe is now using AI for Denoise, Enhance, and Super Resolution (2x the file size of the original photo). Click here to read Adobe’s post, and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.


 

 

In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.


Also relevant/see:


 
 

Justice Through Code — from centerforjustice.columbia.edu; via Matt Tower
Unlocking Potential for the 80+ Million Americans with a Conviction History.

Excerpt:

A world where every person, regardless of past convictions or incarceration can access life-sustaining and meaningful careers.

We are working to make this vision a reality through our technical and professional career development accelerators.

Our Mission: We educate and nurture talent with conviction histories to create a more just and diverse workforce. We increase workplace equity through partnerships that educate and prepare teams to create supportive pathways to careers that end the cycle of poverty that contributes to incarceration and recidivism.

JTC is jointly offered by Columbia University’s Center for Justice, and the Tamer Center for Social Enterprise at the Columbia Business School.

 

From DSC:
After seeing this…

…I wondered:

  • Could GPT-4 create the “Choir Practice” app mentioned below?
    (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
  • Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts?

This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”

Choir Practice -- an app for people who want to rehearse at home

And I want to say that when I went back to look at this posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated/hyped an idea up on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days.  🙂

 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to have been no verification of signatures. Yann LeCun from Meta denied signing the letter and completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 
© 2024 | Daniel Christian