What Can A.I. Art Teach Us About the Real Thing? — from newyorker.com by Adam Gopnik; with thanks to Mrs. Julie Bender for this resource
The range and ease of pictorial invention offered by A.I. image generation are startling.

Excerpts:

The dall-e 2 system, by setting images free from neat, argumentative intentions, reducing them to responses to “prompts,” reminds us that pictures exist in a different world of meaning from prose.

And the power of images lies less in their arguments than in their ambiguities. That’s why the images that dall-e 2 makes are far more interesting than the texts that A.I. chatbots make. To be persuasive, a text demands a point; in contrast, looking at pictures, we can be fascinated by atmospheres and uncertainties.

One of the things that thinking machines have traditionally done is sharpen our thoughts about our own thinking.

And, so, “A Havanese at six pm on an East Coast beach in the style of a Winslow Homer watercolor”:

Art work by DALL-E 2 / Courtesy OpenAI

It is, as simple appreciation used to say, almost like being there, almost like her being there. Our means in art are mixed, but our motives are nearly always memorial. We want to keep time from passing and our loves alive. The mechanical collision of kinds first startles our eyes and then softens our hearts. It’s the secret system of art.

 

FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance

Excerpt:

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.

From DSC:
This doesn’t surprise me. But it’s yet another example of opaqueness involving technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.

 

‘ChatGPT Already Outperforms a lot of Junior Lawyers’: An Interview With Richard Susskind — from law.com by Laura Beveridge
For the last 20 years, the U.K. author and academic has been predicting that technology will revolutionise the legal industry. With the buzz around generative AI, will his hypothesis now be proven true?

Excerpts:

For this generation of lawyers, their mission and legacy ought to be to build the systems that replace our old ways of working, he said. Moreover, Susskind identified new work for lawyers, such as legal process analyst or legal data scientist, emerging from technological advancement.

“These are the people who will be building the systems that will be solving people’s legal problems in the future.

“The question I ask is: imagine when the underpinning large language model is GPT 8.5.”

Blue J Legal co-founder Benjamin Alarie on how AI is powering a new generation of legal tech — from canadianlawyermag.com by Tim Wilbur

Excerpts:

We founded Blue J with the idea that we should be able to bring absolute clarity to the law everywhere and on demand. The name that we give to this idea is the legal singularity. I have a book with assistant professor Abdi Aidid called The Legal Singularity coming out soon on this idea.

The book paints the picture of where we think the law will go in the next several decades. Our intuition was not widely shared when we started the book and Blue J.

Since last November, though, many lawyers and journalists have been able to play with ChatGPT and other large language models. They suddenly understand what we have been excited about for the last eight years.

Neat Trick/Tip to Add To Your Bag! — from iltanet.org by Brian Balistreri

Excerpt:

If you need instant transcription of an audio file, Word Online now allows you to upload a file, and it will transcribe it, mark speaker changes, and provide time marks. You can use video files; just make sure they are small, or Office will kick you out.

Generative AI Is Coming For the Lawyers — from wired.com by Chris Stokel-Walker
Large law firms are using a tool made by OpenAI to research and write legal documents. What could go wrong?

Excerpts:

The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.

“I think it is the beginning of a paradigm shift,” says Wakeling. “I think this technology is very suitable for the legal industry.”

The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.

“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”

But the problems with current generations of generative AI have already started to show.

 

You are not a parrot — from nymag.com by Elizabeth Weil and Emily M. Bender

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

Introducing Q-Chat, the world’s first AI tutor built with OpenAI’s ChatGPT — from quizlet.com by Lex Bayer

Excerpt:

Modeled on research demonstrating that the most effective form of learning is one-on-one tutoring, Q-Chat offers students the experience of interacting with a personal AI tutor in an effective and conversational way. Whether they’re learning French vocabulary or Roman History, Q-Chat engages students with adaptive questions based on relevant study materials delivered through a fun chat experience. Pulling from Quizlet’s massive educational content library and using the question-based Socratic method to promote active learning, Q-Chat has the ability to test a student’s knowledge of educational content, ask in-depth questions to get at underlying concepts, test reading comprehension, help students learn a language and encourage students on healthy learning habits.

Quizlet's Q-Chat -- choose a study prompt to be quizzed on the material, to deepen your understanding or to learn through a story.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
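The mechanics are easy to see in miniature. The toy sketch below (hypothetical numbers, not from the article) trains a nearest-centroid classifier twice: once on clean data, and once after an attacker injects a few mislabeled copies of a target point, which shifts a class centroid enough to flip the prediction.

```python
# Toy illustration of data poisoning: an attacker who injects a few
# mislabeled training points can change what a simple classifier learns.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label) pairs; learn one centroid per label
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training set: two well-separated clusters.
clean = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((0.1, 0.3), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]

# Attacker secretly injects three copies of a target point, wrongly labeled.
target = (2.6, 2.6)
poisoned = clean + [(target, "A")] * 3

print(predict(train(clean), target))     # clean model says "B"
print(predict(train(poisoned), target))  # poisoned model says "A"
```

The poisoned model's decisions look plausible everywhere else, which is exactly what makes this class of attack hard to track.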

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools varies upon what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or we will develop this new technology because we can develop this new technology. Who gives a rat’s tail about the ramifications of it?

 

Meet CoCounsel — “the world’s first AI legal assistant” — from casetext.com

Excerpt:

As we shared in our official press release, we’ve been collaborating with OpenAI to build CoCounsel on their latest, most advanced large language model. It was a natural fit between our two teams. OpenAI, the world leader in generative AI, selected Casetext to create a product powered by its technology that was suitable for professional use by lawyers. Our experience leading legal tech since 2013 and applying large language models to the law for over five years made us an ideal choice.


From DSC:
I look forward to seeing more vendors and products getting into the legaltech space — ones that use AI and other technologies to make significant progress on the access to justice issues that we have here in the United States.

 


Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
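At the quoted $0.006-per-minute rate, back-of-the-envelope costs are easy to work out (the helper name below is illustrative, not part of OpenAI's API):

```python
# Rough cost estimate for Whisper API transcription at the article's
# quoted rate of $0.006 per minute of audio.

WHISPER_RATE_PER_MIN = 0.006  # USD per minute, as quoted

def whisper_cost(duration_seconds: float) -> float:
    """Estimated USD cost to transcribe audio of the given length."""
    return (duration_seconds / 60) * WHISPER_RATE_PER_MIN

print(f"${whisper_cost(3600):.2f}")  # a one-hour recording: 60 min * $0.006 = $0.36
```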

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

 So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times” — from
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
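Huang's comparison can be checked with simple arithmetic: a million-fold gain over a decade versus Moore's Law's 100x implies very different compound annual speedups (a sketch, not from the article):

```python
# Implied compound annual speedup factors over a 10-year span.
decade_gain_ai = 1_000_000   # "a million times faster" (Huang)
decade_gain_moore = 100      # "100x in a decade" (Moore's Law at its best)

annual_ai = decade_gain_ai ** (1 / 10)        # ~3.98x per year
annual_moore = decade_gain_moore ** (1 / 10)  # ~1.58x per year

print(f"AI stack: ~{annual_ai:.2f}x/year; Moore's Law: ~{annual_moore:.2f}x/year")
```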

From DSC:
NVIDIA is the inventor of the Graphics Processing Unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, notebooks, PCs, and more. They are a dominant supplier of artificial intelligence hardware and software.


 


9 ways ChatGPT will help CIOs — from enterprisersproject.com by Katie Sanders
What are the potential benefits of this popular tool? Experts share how it can help CIOs be more efficient and bring competitive differentiation to their organizations.

Excerpt:

Don’t assume this new technology will replace your job. As Mark Lambert, a senior consultant at netlogx, says, “CIOs shouldn’t view ChatGPT as a replacement for humans but as a new and exciting tool that their IT teams can utilize. From troubleshooting IT issues to creating content for the company’s knowledge base, artificial intelligence can help teams operate more efficiently and effectively.”



Would you let ChatGPT control your smart home? — from theverge.com by

While the promise of an inherently competent, eminently intuitive voice assistant — a flawless butler for your home — is very appealing, I fear the reality could be more Space Odyssey than Downton Abbey. But let’s see if I’m proven wrong.


How ChatGPT Is Being Used To Enhance VR Training — from vrscout.com by Kyle Melnick

Excerpt:

The company claims that its VR training program can be used to prepare users for a wide variety of challenging scenarios, whether you’re a recent college graduate preparing for a difficult job interview or a manager simulating a particularly tough performance review. Users can customize their experiences depending on their role and receive real-time feedback based on their interactions with the AI.


From DSC:
Below are some example topics/articles involving healthcare and AI. 


Role of AI in Healthcare — from doctorsexplain.media
The role of Artificial Intelligence (AI) in healthcare is becoming increasingly important as technology advances. AI has the potential to revolutionize the healthcare industry, from diagnosis and treatment to patient care and management. AI can help healthcare providers make more accurate diagnoses, reduce costs, and improve patient outcomes.

60% of patients uncomfortable with AI in healthcare settings, survey finds — from healthcaredive.com by Hailey Mensik

Dive Brief:

  • About six in 10 U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting, according to a survey from the Pew Research Center.
  • Some 38% of respondents said using AI in healthcare settings would lead to better health outcomes while 33% said it would make them worse, and 27% said it wouldn’t make much of a difference, the survey found.
  • Ultimately, men, younger people and those with higher education levels were the most open to their providers using AI.

The Rise of the Superclinician – How Voice AI Can Improve the Employee Experience in Healthcare — from medcitynews.com by Tomer Garzberg
Voice AI is the new frontier in healthcare. With its constantly evolving landscape, the healthcare […]

Excerpt:

Voice AI can generate up to 30% higher clinician productivity, by automating these healthcare use cases

  • Updating records
  • Provider duress
  • Platform orchestration
  • Shift management
  • Client data handoff
  • Home healthcare
  • Maintenance
  • Equipment ordering
  • Meal preferences
  • Case data queries
  • Patient schedules
  • Symptom logging
  • Treatment room setup
  • Patient condition education
  • Patient support recommendations
  • Medication advice
  • Incident management
  • … and many more

ChatGPT is poised to upend medical information. For better and worse. — from usatoday.com by Karen Weintraub

Excerpt:

But – and it’s a big “but” – the information these digital assistants provide might be more inaccurate and misleading than basic internet searches.

“I see no potential for it in medicine,” said Emily Bender, a linguistics professor at the University of Washington. By their very design, these large-language technologies are inappropriate sources of medical information, she said.

Others argue that large language models could supplement, though not replace, primary care.

“A human in the loop is still very much needed,” said Katie Link, a machine learning engineer at Hugging Face, a company that develops collaborative machine learning tools.

Link, who specializes in health care and biomedicine, thinks chatbots will be useful in medicine someday, but the technology isn’t ready yet.

 

Planning for AGI and beyond — from OpenAI.org by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 

Donald Clark’s recent thoughts regarding how ChatGPT is and will impact the Learning & Development world — from linkedin.com by Donald Clark

Excerpts:

Fascinating chat with three people heading up L&D in a major international company. AI has led them to completely re-evaluate their strategy. Key concepts were performance, process and data. What I liked was their focus on that oft-quoted issue of aligning L&D with the business goals – unlike most, they really meant it.

The technology that puts that in the hands of learners has arrived. Performance support will be a teacher or trainer at your fingertips.

We also talked about prompting, the need to see it as ‘CHAT’gpt, an iterative process, where you need to understand how to speak to the tech. It’s a bit like speaking to an alien from space, as it has no comprehension or consciousness but it is still competent and smart. We have put together 100 prompt tips for learning professionals and taking it out on the road soon. All good in the hood.

Also from Donald Clark, see:

OpenAI releases massive wave of innovation — from donaldclarkplanb.blogspot.com

Excerpt:

With LLMs, OpenAI’s ChatGPT, based on GPT 3.5, started a race where:

  • AI is integrated into mainstream tools like Teams
  • Larger LLMs are being built
  • LLMs are changing ‘search’
  • LLMs are being used on a global scale in real businesses
  • Real businesses are being built on the back of LLMs
  • LLMs as part of ensembles of other tools are being researched to solve accuracy, updatability & provenance issues
  • Open, transparent LLMs (Bloom) are being built
 

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing — from chronicle.com by Ben Chrisinger (behind a paywall)
And there’s little agreement on the rules that should govern it.

Excerpt:

While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.

 

7 ways to think and act strategically in your organisation about AI in learning — from donaldclarkplanb.blogspot.com by Donald Clark

Excerpt:

Above all, you need to see it strategically. There is no imperative to use this tech but there is an imperative to consider its use. Sure, it’s OK to say no but you should have a reason for saying no, as this is the technology of the age. I’ve been saying this in three books, lots of articles and a ton of keynotes for 7 years and it is now happening. This is the new internet, only smarter.

 

Podcast Special: Using Generative AI in Education — from drphilippahardman.substack.com by Dr. Philippa Hardman
An exploration of the risks and benefits of Generative AI in education, in conversation with Mike Palmer

Excerpt:

Among other things, we discussed:

  • The immediate challenges that Generative AI presents for learning designers, educators and students.
  • The benefits & opportunities that Generative AI might offer the world of education, both in terms of productivity and pedagogy.
  • How, by bringing together the world of AI and the world of learning science, we might revolutionise the way we design and deliver learning experiences.

Speaking of podcasts, this article lists some podcasts to check out for those working in — or interested in — higher education.




Also relevant/see:

Are librarians the next prompt engineers? — from linkedin.com by Laura Solomon

Excerpt:

  • Without the right prompt, AI fails to provide what someone might be looking for. This probably is a surprise to no one, especially librarians. If you remember the days before Google, you know exactly how this tended to play out. Google became dominant in large part due to its inherent ability to accept natural language queries.
  • A small industry is now popping up to provide people with the correct, detailed prompts to get what they want when interacting with AI. The people doing this work are referred to as “prompt engineers.”
  • Prompt engineers aren’t just people who write queries to be directed to an AI. They also tend to have a great deal of technical expertise and a deep understanding of how artificial intelligences and natural language can intersect.
  • Prompt engineers don’t work for free.

The above item links to The Most Important Job Skill of This Century — from theatlantic.com by Charlie Warzel
Your work future could depend on how well you can talk to AI. 


Also relevant/see:

My class required AI. Here’s what I’ve learned so far. — from oneusefulthing.substack.com by Ethan Mollick
(Spoiler alert: it has been very successful, but there are some lessons to be learned)

Excerpt:

I fully embraced AI for my classes this semester, requiring students to use AI tools in a number of ways. This policy attracted a lot of interest, and I thought it worthwhile to reflect on how it is going so far. The short answer is: great! But I have learned some early lessons that I think are worth passing on.

AI is everywhere already
Even if I didn’t embrace AI, it is also clear that AI is now everywhere in classes. For example, students used it to help them come up with ideas for class projects, even before I taught them how to do that. As a result, the projects this semester are much better than previous pre-AI classes. This has led to greater project success rates and more engaged teams. On the downside, I find students also raise their hands to ask questions less. I suspect this might be because, as one of them told me, they can later ask ChatGPT to explain things they didn’t get without needing to speak in front of the class. The world of teaching is now more complicated in ways that are exciting, as well as a bit unnerving.

 

ChatGPT: 30 incredible ways to use the AI-powered chatbot — from interestingengineering.com by Christopher McFadden
You’ve heard of ChatGPT, but do you know how to use it? Or what to use it for? If not, then here are some ideas to get you started.

Excerpts:

  • It’s great at writing CVs and resumes
  • It can also read and improve the existing CV or resume
  • It can help you prepare for a job interview
  • ChatGPT can even do some translation work for you
  • Have it draft you an exam

Chatbots’ Time Has Come. Why Now? — from every.to by Nathan Baschez
Narratives have network effects

Excerpt:

There are obvious questions like “Are the AI’s algorithms good enough?” (probably not yet) and “What will happen to Google?” (nobody knows), but I’d like to take a step back and ask some more fundamental questions: why chat? And why now?

Most people don’t realize that the AI model powering ChatGPT is not all that new. It’s a tweaked version of a foundation model, GPT-3, that launched in June 2020. Many people have built chatbots using it before now. OpenAI even has a guide in its documentation showing exactly how you can use its APIs to make one.
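The pattern those pre-ChatGPT bots followed is simple: keep a running transcript and feed the whole thing back to a completion model on every turn, so the model "remembers" the conversation. A minimal sketch (the model call is stubbed out below; a real bot would send the assembled prompt to a completions API):

```python
# Minimal chat loop built on a completion-style model. The model itself
# is a stand-in stub here; in practice the prompt would go to an LLM API.

def stub_complete(prompt: str) -> str:
    # Placeholder for a real model call; it just echoes the latest user line.
    last_user_line = prompt.rstrip().splitlines()[-2]  # line before "Bot:"
    return f"You said: {last_user_line.removeprefix('User: ')}"

def chat_turn(history: list, user_message: str) -> str:
    history.append(f"User: {user_message}")
    # Rebuild the full transcript as the prompt so the model sees context.
    prompt = "\n".join(history) + "\nBot:"
    reply = stub_complete(prompt)
    history.append(f"Bot: {reply}")
    return reply

history = []
print(chat_turn(history, "Hello"))        # -> You said: Hello
print(chat_turn(history, "How are you?")) # -> You said: How are you?
assert len(history) == 4  # two user turns and two bot turns retained
```

The product insight ChatGPT added was largely in packaging this loop well, which is the article's point: the underlying model had been available for years.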

So what happened? The simple narrative is that AI got exponentially more powerful recently, so now a lot of people want to use it. That’s true if you zoom out. But if you zoom in, you start to see that something much more complex and interesting is happening.

This leads me to a surprising hypothesis: perhaps the ChatGPT moment never would have happened without DALL-E 2 and Stable Diffusion happening earlier in the year!


The Most Important Job Skill of This Century — from theatlantic.com by Charlie Warzel
Your work future could depend on how well you can talk to AI.

Excerpt:

Like writing and coding before it, prompt engineering is an emergent form of thinking. It lies somewhere between conversation and query, between programming and prose. It is the one part of this fast-changing, uncertain future that feels distinctly human.


The ChatGPT AI hype cycle is peaking, but even tech skeptics don’t expect a bust — from cnbc.com by Eric Rosenbaum

Key Points:

  • OpenAI’s ChatGPT, with new funding from Microsoft, has grown to over one million users faster than many of the dominant tech companies, apps and platforms of the past decade.
  • Unlike the metaverse concept, which had a hype cycle based on an idea still nebulous to many, generative AI as tech’s next big thing is being built on top of decades of existing machine learning already embedded in business processes.
  • We asked top technology officers, specifically reaching out to many at non-tech sector companies, to break down the potential and pitfalls of AI adoption.

ChatGPT and the college curriculum — from youtube.com by Bryan Alexander with Maria Anderson


AI in EDU: Know the Risks — from linkedin.com by Angela Maiers


 


 
© 2025 | Daniel Christian