Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence — from bbc.com by James Clayton

Excerpt:

The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.

Also related to that item, see:
Why artificial intelligence developers say regulation is needed to keep AI in check — from pbs.org

Excerpt:

Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.



Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.

From DSC:
I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.


Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.

Excerpt (emphasis DSC):

Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?



Also relevant/see:


That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.



What Higher Ed Gets Wrong About AI Chatbots — From the Student Perspective — from edsurge.com by Mary Jo Madda (Columnist)

 

ChatGPT scams are the new crypto scams, Meta warns — from engadget.com by Karissa Bell
Meta plans to roll out new “Work Accounts” for businesses to guard against hacks.

Excerpt:

As the buzz around ChatGPT and other generative AI increases, so has scammers’ interest in the tech. In a new report published by Meta, the company says it’s seen a sharp uptick in malware disguised as ChatGPT and similar AI software.

In a statement, the company said that since March of 2023 alone, its researchers have discovered “ten malware families using ChatGPT and other similar themes to compromise accounts across the internet” and that it’s blocked more than 1,000 malicious links from its platform. According to Meta, the scams often involve mobile apps or browser extensions posing as ChatGPT tools. And while in some cases the tools do offer some ChatGPT functionality, their real purpose is to steal their users’ account credentials.

AI Is Reshaping the Battlefield and the Future of Warfare — from bloomberg.com by Jackie Davalos and Nate Lanxon
In this episode of AI IRL, Jackie Davalos and Nate Lanxon talk about one of the most dangerous applications of artificial intelligence: modern warfare.

Excerpt:

Artificial intelligence has triggered an arms race with the potential to transform modern-day warfare. Countries are vying to develop cutting-edge technology at record speed, sparking concerns about whether we understand its power before it’s deployed.

From DSC:
I wish that humankind — especially those of us in the United States — would devote less money to warfare and more funding to education.

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]




Also relevant/see:

 

In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.


Also relevant/see:


 

35 Ways Real People Are Using A.I. Right Now — from nytimes.com by Francesca Paris and Larry Buchanan

From DSC:
It was interesting to see how people are using AI these days. The article mentions everything from planning gluten-free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers, and educators in general may find Elicit, Scholarcy, and Scite to be useful tools. I put a question into Elicit and the results look interesting. I like the interface, which lets me quickly re-sort things.

Snapshot of a query result from a tool called Elicit


 

There Is No A.I. — from newyorker.com by Jaron Lanier
There are ways of controlling the new technology—but first we have to stop mythologizing it.

Excerpts:

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

 


 

Resource per Steve Nouri on LinkedIn


 
 

The mounting human and environmental costs of generative AI — from arstechnica.com by Sasha Luccioni
Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor in.

Abstract image of a person wearing a gas mask, reaching out to a floating brain

Excerpt:

Over the past few months, the field of artificial intelligence has seen rapid growth, with wave after wave of new models like Dall-E and GPT-4 emerging one after another. Every week brings the promise of new and exciting models, products, and tools. It’s easy to get swept up in the waves of hype, but these shiny capabilities come at a real cost to society and the planet.

Downsides include the environmental toll of mining rare minerals, the human costs of the labor-intensive process of data annotation, and the escalating financial investment required to train AI models as they incorporate more parameters.

Let’s look at the innovations that have fueled recent generations of these models—and raised their associated costs.


Also relevant/see:

ChatGPT is Thirsty: a mini exec briefing — from BrainyActs #040 | ChatGPT’s Thirst Problem
Between the growing number of data centers + the exponential growth in consumer-grade Generative AI tools, water is becoming a scarce resource.

 
 


Also relevant/see:


Learning Designers will have to adapt or die. 10 ways to UPSKILL to AI…. — from donaldclarkplanb.blogspot.com by Donald Clark



From Ethan Mollick on LinkedIn:

Take a look at this simulated negotiation, with grading and feedback. Prompt: “I want to do deliberate practice about how to conduct negotiations. You will be my negotiation teacher. You will simulate a detailed scenario in which I have to engage in a negotiation. You will fill the role of one party, I will fill the role of the other. You will ask for my response in each step of the scenario and wait until you receive it. After getting my response, you will give me details of what the other party does and says. You will grade my response and give me detailed feedback about what to do better using the science of negotiation. You will give me a harder scenario if I do well, and an easier one if I fail.”

Samples from Bing Creative mode and ChatGPT-4 (3.5, the free version, does not work as well)
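
If you want to script this exercise rather than paste the prompt into the chat window, here is a minimal sketch using OpenAI's Python library as it stood in 2023 (the ChatCompletion endpoint). The model name, the API-key placeholder, and the sample reply are illustrative assumptions, not part of Mollick's post.

```python
# A minimal sketch of driving Mollick's negotiation-practice prompt through
# the OpenAI Python library (2023-era ChatCompletion endpoint). The model
# name and the sample user reply below are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "I want to do deliberate practice about how to conduct negotiations. "
    "You will be my negotiation teacher. You will simulate a detailed scenario "
    "in which I have to engage in a negotiation. You will fill the role of one "
    "party, I will fill the role of the other. You will ask for my response in "
    "each step of the scenario and wait until you receive it. After getting my "
    "response, you will give me details of what the other party does and says. "
    "You will grade my response and give me detailed feedback about what to do "
    "better using the science of negotiation. You will give me a harder "
    "scenario if I do well, and an easier one if I fail."
)

messages = [{"role": "user", "content": prompt}]

# First call: the model sets the scene and opens the negotiation.
reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# On each turn, append the model's message and your move, so the full
# history travels with every request; that is what lets it grade you
# in the context of the whole scenario.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "I open by anchoring high."})  # example move
```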


I’m having a blast with ChatGPT – it’s testing ME! — by Mark Mrohs
Using ChatGPT as an agent for asynchronous active learning

I have been experimenting with possible ways to incorporate interactions with ChatGPT into instruction. And I’m blown away. I want to show you some of what I’ve come up with.

 

 


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to AI Teachers and AI Learners as a tetrad.


 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
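
To make the “auto-complete on steroids” idea concrete, here is a toy sketch (an illustration of my own, not from the article): a bigram model that picks each next word according to how often it followed the previous word in a tiny training text. Real LLMs condition on far longer contexts with neural networks, but the one-word-at-a-time generation loop has the same shape.

```python
# A toy "auto-complete on steroids": next-word prediction reduced to a
# bigram model. Real LLMs use neural networks over long contexts, but the
# generation loop (predict, sample, append, repeat) has the same shape.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which words follow each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation; stop
        # Sample the next word in proportion to how plausibly it follows.
        options, weights = zip(*candidates.items())
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

The model never “understands” a sentence; it only tracks which words tend to follow which. Scaled up by many orders of magnitude, that is the mechanism Marcus is warning us not to anthropomorphize.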


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 

Working To Incorporate Legal Technology Into Your Practice Isn’t Just A Great Business Move – It’s Required — from abovethelaw.com by Chris Williams

Excerpt:

According to Model Rule 1.1 of the ABA Model Rules of Professional Conduct: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

In 2012, the ABA House of Delegates voted to amend Comment 8 to Model Rule 1.1 to include explicit guidance on lawyers’ use of technology.

If Model Rule 1.1 isn’t enough of a motivator to dip your feet in legal tech, maybe paying off that mortgage is. As an extra bit of motivation, it may benefit you to pin the ABA House of Delegates’ call to action on your motivation board.

Also relevant/see:

While courts still use fax machines, law firms are using AI to tailor arguments for judges — from cbc.ca by Robyn Schleihauf

Excerpt (emphasis DSC):

What is different with AI is the scale by which this knowledge is aggregated. While a lawyer who has been before a judge three or four times may have formed some opinions about them, these opinions are based on anecdotal evidence. AI can read the judge’s entire history of decision-making and spit out an argument based on what it finds. 

The common law has always used precedents, but what is being used here is different — it’s figuring out how a judge likes an argument to be framed, what language they like using, and feeding it back to them.

And because the legal system builds on itself — with judges using prior cases to determine how a decision should be made in the case before them — these AI-assisted arguments from lawyers could have the effect of further entrenching a judge’s biases in the case law, as the judge’s words are repeated verbatim in more and more decisions. This is particularly true if judges are unaware of their own biases.

Cutting through the noise: The impact of GPT/large language models (and what it means for legal tech vendors) — from legaltechnology.com by Caroline Hill

Excerpts:

Given that we have spent time over the past few years telling people not to overestimate the capability of AI, is this the real deal?

“Yeah, I think it’s the real thing because if you look at why legal technologies have not had the adoption rate historically, language has always been the problem,” Katz said. “Language has been hard for machines historically to work with, and language is core to law. Every road leads to a document, essentially.”

Katz says: “There are two types of things here. They would call general models GPT one through four, and then there’s domain models, so a legal-specific large language model.

“What we’re going to see is that a large-ish, albeit not the largest, model that’s heavily domain-tailored is going to beat a general model, in the same way that a really smart person can’t beat a super specialist. That’s where the value creation and the next generation of legal technology is going to live.”

Fresh Voices in Legal Tech with Kristen Sonday — from legaltalknetwork.com by Dennis Kennedy and Tom Mighell with Kristen Sonday

In a brand new interview series, Dennis and Tom welcome Kristen Sonday to hear her perspectives on the latest developments in the legal tech world.

 

FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance.

Excerpt:

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.

From DSC:
This doesn’t surprise me. But it’s yet another example of opaqueness involving technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.

 

You are not a parrot — from nymag.com by Elizabeth Weil

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried about what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 