ChatGPT as a teaching tool, not a cheating tool — from timeshighereducation.com by Jennifer Rose
How to use ChatGPT as a tool to spur students’ inner feedback and thus aid their learning and skills development

Excerpt:

Use ChatGPT to spur students’ inner feedback
One way that ChatGPT answers can be used in class is by asking students to compare what they have written with a ChatGPT answer. This draws on David Nicol’s work on making inner feedback explicit and using comparative judgement. His work demonstrates that in writing down answers to comparative questions students can produce high-quality feedback for themselves which is instant and actionable. Applying this to a ChatGPT answer, the following questions could be used:

  • Which is better, the ChatGPT response or yours? Why?
  • What two points can you learn from the ChatGPT response that will help you improve your work?
  • What can you add from your answer to improve the ChatGPT answer?
  • How could the assignment question that was set be improved to allow students to demonstrate higher-order skills such as critical thinking?
  • How can you use what you have learned to stay ahead of AI and produce higher-quality work than ChatGPT?
 

…which links to openai.com/research/gpt-4


Also relevant/see:

See the recording from the GPT-4 Developer Demo

About GPT-4
GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and advanced reasoning capabilities.

You can learn more through:

  • Overview page of GPT-4 and what early customers have built on top of the model.
  • Blog post with details on the model’s capabilities and limitations, including eval results.

From DSC:
I do hope that the people building all of this are taking enough time to ask, “What might humans do with these emerging technologies — both positively AND negatively?” And then put some guard rails around things.


Also relevant/see:

 

Exploring generative AI and the implications for universities — from universityworldnews.com

Excerpt:

This is part of a weekly University World News special report series on ‘AI and higher education’. The focus is on how universities are engaging with ChatGPT and other generative artificial intelligence tools. The articles from academics and our journalists around the world are exploring developments and university work in AI that have implications for higher education institutions and systems, students and staff, and teaching, learning and research.

AI and higher education -- a report from University World News

 

The Librarian: Can we prompt ChatGPT to generate reliable references? — from drphilippahardman.substack.com by Dr. Philippa Hardman

Lessons Learned

  • Always assume that ChatGPT is wrong until you prove otherwise.
  • Validate everything (and require your students to validate everything too).
  • Google Scholar is a great tool for validating ChatGPT outputs rapidly.
  • The prompt works better when you provide a subject area, e.g. visual anthropology, and then a sub-topic, e.g. film making (see the sketch after this list).
  • Ignore ChatGPT’s links – validate by searching for titles & authors, not URLs.
  • Use intentional repetition, e.g. of Google Scholar, to focus ChatGPT’s attention.
  • Be aware: ChatGPT’s training data ends in 2021. You need to fill in the blanks since then.
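
To make that workflow concrete, here is a minimal sketch of the prompt-then-validate loop, assuming the openai Python package and an OPENAI_API_KEY environment variable; the subject area, sub-topic, and pasted title are hypothetical placeholders:

```python
from urllib.parse import quote_plus
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Subject area + sub-topic, with intentional repetition of "Google Scholar"
# to focus the model's attention, per the lessons above.
prompt = (
    "Act as a librarian. Using only sources indexed in Google Scholar, "
    "list five peer-reviewed references on visual anthropology, "
    "sub-topic: film making. For each, give author, year, title, and "
    "journal. Do not include URLs."
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)

# Assume the output is wrong until proven otherwise: validate each item by
# searching Google Scholar for its title and author, never the model's links.
title = "Principles of Visual Anthropology"  # hypothetical: paste each title in turn
print("https://scholar.google.com/scholar?q=" + quote_plus(title))
```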
 

ChatGPT is Everywhere — from chronicle.com by Beth McMurtrie
Love it or hate it, academics can’t ignore the already pervasive technology.

Excerpt:

Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

“Artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.”

Sarah Eaton, associate professor of education at the University of Calgary



Artificial intelligence and academic integrity, post-plagiarism — from universityworldnews.com by Sarah Elaine Eaton; with thanks to Robert Gibson out on LinkedIn for the resource

Excerpt:

The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.

Used in this way, the technology can help students learn. The text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.

When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, and not the technology, who are to blame when students choose to have someone – or something – do their homework for them.

There is a difference between using technology to help students learn or to help them cheat. The same technology can be used for both purposes.

From DSC:
These couple of sentences…

In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.

…reminded me of what’s been happening within the filmmaking world for years (e.g., in Star Wars, Jurassic Park, and many others). It’s often hard to tell what’s real and what’s been generated by a computer.
 

What Can A.I. Art Teach Us About the Real Thing? — from newyorker.com by Adam Gopnik; with thanks to Mrs. Julie Bender for this resource
The range and ease of pictorial invention offered by A.I. image generation are startling.

Excerpts:

The DALL-E 2 system, by setting images free from neat, argumentative intentions, reducing them to responses to “prompts,” reminds us that pictures exist in a different world of meaning from prose.

And the power of images lies less in their arguments than in their ambiguities. That’s why the images that DALL-E 2 makes are far more interesting than the texts that A.I. chatbots make. To be persuasive, a text demands a point; in contrast, looking at pictures, we can be fascinated by atmospheres and uncertainties.

One of the things that thinking machines have traditionally done is sharpen our thoughts about our own thinking.

And, so, “A Havanese at six pm on an East Coast beach in the style of a Winslow Homer watercolor”:

A Havanese at six pm on an East Coast beach in the style of a Winslow Homer watercolor
Art work by DALL-E 2 / Courtesy OpenAI

It is, as simple appreciation used to say, almost like being there, almost like her being there. Our means in art are mixed, but our motives are nearly always memorial. We want to keep time from passing and our loves alive. The mechanical collision of kinds first startles our eyes and then softens our hearts. It’s the secret system of art.

 

FBI, Pentagon helped research facial recognition for street cameras, drones — from washingtonpost.com by Drew Harwell
Internal documents released in response to a lawsuit show the government was deeply involved in pushing for face-scanning technology that could be used for mass surveillance

Excerpt:

The FBI and the Defense Department were actively involved in research and development of facial recognition software that they hoped could be used to identify people from video footage captured by street cameras and flying drones, according to thousands of pages of internal documents that provide new details about the government’s ambitions to build out a powerful tool for advanced surveillance.

From DSC:
This doesn’t surprise me. But it’s yet another example of opaqueness involving technology. And who knows to what levels our Department of Defense has taken things with AI, drones, and robotics.

 

‘ChatGPT Already Outperforms a lot of Junior Lawyers’: An Interview With Richard Susskind — from law.com by Laura Beveridge
For the last 20 years, the U.K. author and academic has been predicting that technology will revolutionise the legal industry. With the buzz around generative AI, will his hypothesis now be proven true?

Excerpts:

For this generation of lawyers, their mission and legacy ought to be to build the systems that replace our old ways of working, he said. Moreover, Susskind identified new work for lawyers, such as legal process analyst or legal data scientist, emerging from technological advancement.

“These are the people who will be building the systems that will be solving people’s legal problems in the future.

“The question I ask is: imagine when the underpinning large language model is GPT 8.5.”

Blue J Legal co-founder Benjamin Alarie on how AI is powering a new generation of legal tech — from canadianlawyermag.com by Tim Wilbur

Excerpts:

We founded Blue J with the idea that we should be able to bring absolute clarity to the law everywhere and on demand. The name that we give to this idea is the legal singularity. I have a book with assistant professor Abdi Aidid called The Legal Singularity coming out soon on this idea.

The book paints the picture of where we think the law will go in the next several decades. Our intuition was not widely shared when we started the book and Blue J.

Since last November, though, many lawyers and journalists have been able to play with ChatGPT and other large language models. They suddenly understand what we have been excited about for the last eight years.

Neat Trick/Tip to Add To Your Bag! — from iltanet.org by Brian Balistreri

Excerpt:

If you need instant transcription of an audio file, Word Online now allows you to upload a file, and it will transcribe it, mark speaker changes, and provide time marks. You can use video files too; just make sure they are small, or Office will kick you out.

Generative AI Is Coming For the Lawyers — from wired.com by Chris Stokel-Walker
Large law firms are using a tool made by OpenAI to research and write legal documents. What could go wrong?

Excerpts:

The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.

“I think it is the beginning of a paradigm shift,” says Wakeling. “I think this technology is very suitable for the legal industry.”

The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.

“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”
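
As a toy illustration of the rule-based scaffolding Edwards describes (not her example; the clause and field names are hypothetical), a standardized template keeps the output predictable because only named fields vary:

```python
from string import Template

# A standardized precedent: only the named fields vary, so the result
# is far more predictable than free-text generation.
nda_clause = Template(
    "This Non-Disclosure Agreement is made on $date between "
    "$disclosing_party and $receiving_party and remains in force "
    "for $term_years years from the date of signature."
)

print(nda_clause.substitute(
    date="1 March 2023",
    disclosing_party="Acme Ltd",
    receiving_party="Example LLP",
    term_years=3,
))
```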

But the problems with current generations of generative AI have already started to show.

 

You are not a parrot — from nymag.com by Elizabeth Weil

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

Introducing Q-Chat, the world’s first AI tutor built with OpenAI’s ChatGPT — from quizlet.com by Lex Bayer

Excerpt:

Modeled on research demonstrating that the most effective form of learning is one-on-one tutoring, Q-Chat offers students the experience of interacting with a personal AI tutor in an effective and conversational way. Whether they’re learning French vocabulary or Roman History, Q-Chat engages students with adaptive questions based on relevant study materials delivered through a fun chat experience. Pulling from Quizlet’s massive educational content library and using the question-based Socratic method to promote active learning, Q-Chat has the ability to test a student’s knowledge of educational content, ask in-depth questions to get at underlying concepts, test reading comprehension, help students learn a language and encourage students on healthy learning habits.
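
Quizlet has not published Q-Chat’s internals, but a minimal sketch of how a Socratic tutor could be wired up on top of the ChatGPT API might look like this (the system prompt and study material are hypothetical; this is not Quizlet’s actual implementation):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical study material; Q-Chat would pull this from Quizlet's library.
material = "The Roman Republic ended in 27 BC, when Octavian became Augustus."

messages = [
    {
        "role": "system",
        "content": (
            "You are a Socratic tutor. Never state the answer outright. "
            "Ask one adaptive question at a time about the study material, "
            "probe the underlying concepts, check reading comprehension, "
            "and encourage the student. Study material: " + material
        ),
    },
    {"role": "user", "content": "Quiz me on this material."},
]

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```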

Quizlet's Q-Chat -- choose a study prompt to be quizzed on the material, to deepen your understanding or to learn through a story.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
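
As a toy illustration of the idea (not the specific attacks the article covers), secretly flipping even 10% of a training set’s labels measurably degrades a simple classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Attack: secretly flip 10% of the training labels.
rng = np.random.default_rng(0)
y_bad = y_tr.copy()
idx = rng.choice(len(y_bad), size=len(y_bad) // 10, replace=False)
y_bad[idx] = 1 - y_bad[idx]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_bad).score(X_te, y_te)
print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")
```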

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools varies upon what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or we will develop this new technology because we can develop this new technology. Who gives a rat’s tail about the ramifications of it?

 

Meet CoCounsel — “the world’s first AI legal assistant” — from casetext.com

Excerpt:

As we shared in our official press release, we’ve been collaborating with OpenAI to build CoCounsel on their latest, most advanced large language model. It was a natural fit between our two teams. OpenAI, the world leader in generative AI, selected Casetext to create a product powered by its technology that was suitable for professional use by lawyers. Our experience leading legal tech since 2013 and applying large language models to the law for over five years made us an ideal choice.

Meet CoCounsel -- the world's first AI legal assistant -- from casetext

From DSC:
I look forward to seeing more vendors and products getting into the legaltech space — ones that use AI and other technologies to make significant progress on the access to justice issues that we have here in the United States.

 


Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
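
A minimal transcription call might look like this, assuming the openai Python package and an OPENAI_API_KEY environment variable; the file name is a hypothetical placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "lecture.mp3" is a hypothetical local file; M4A, MP4, WAV, WEBM, etc.
# are also accepted, and billing is per minute of audio.
with open("lecture.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

print(transcript.text)
```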

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

 So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times” — from
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
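
Those two figures are easy to sanity-check with plain arithmetic, since a fixed gain over a decade implies a compound annual rate:

```python
# Compound annual rates implied by "100x in a decade" (Moore's Law at
# its best) versus "1,000,000x in a decade" (Huang's claim).
moore = 100 ** (1 / 10)         # ~1.58x per year
nvidia = 1_000_000 ** (1 / 10)  # ~3.98x per year
print(f"Moore's Law: ~{moore:.2f}x/year; Nvidia's claim: ~{nvidia:.2f}x/year")
```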

From DSC:
NVIDIA is the inventor of the graphics processing unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, notebooks, PCs, and more. The company is also a dominant supplier of artificial intelligence hardware and software.


 


9 ways ChatGPT will help CIOs — from enterprisersproject.com by Katie Sanders
What are the potential benefits of this popular tool? Experts share how it can help CIOs be more efficient and bring competitive differentiation to their organizations.

Excerpt:

Don’t assume this new technology will replace your job. As Mark Lambert, a senior consultant at netlogx, says, “CIOs shouldn’t view ChatGPT as a replacement for humans but as a new and exciting tool that their IT teams can utilize. From troubleshooting IT issues to creating content for the company’s knowledge base, artificial intelligence can help teams operate more efficiently and effectively.”



Would you let ChatGPT control your smart home? — from theverge.com by

While the promise of an inherently competent, eminently intuitive voice assistant — a flawless butler for your home — is very appealing, I fear the reality could be more Space Odyssey than Downton Abbey. But let’s see if I’m proven wrong.


How ChatGPT Is Being Used To Enhance VR Training — from vrscout.com by Kyle Melnick

Excerpt:

The company claims that its VR training program can be used to prepare users for a wide variety of challenging scenarios, whether you’re a recent college graduate preparing for a difficult job interview or a manager simulating a particularly tough performance review. Users can customize their experiences depending on their role and receive real-time feedback based on their interactions with the AI.


From DSC:
Below are some example topics/articles involving healthcare and AI. 


Role of AI in Healthcare — from doctorsexplain.media
The role of Artificial Intelligence (AI) in healthcare is becoming increasingly important as technology advances. AI has the potential to revolutionize the healthcare industry, from diagnosis and treatment to patient care and management. AI can help healthcare providers make more accurate diagnoses, reduce costs, and improve patient outcomes.

60% of patients uncomfortable with AI in healthcare settings, survey finds — from healthcaredive.com by Hailey Mensik

Dive Brief:

  • About six in 10 U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting, according to a survey from the Pew Research Center.
  • Some 38% of respondents said using AI in healthcare settings would lead to better health outcomes while 33% said it would make them worse, and 27% said it wouldn’t make much of a difference, the survey found.
  • Ultimately, men, younger people and those with higher education levels were the most open to their providers using AI.

The Rise of the Superclinician – How Voice AI Can Improve the Employee Experience in Healthcare — from medcitynews.com by Tomer Garzberg
Voice AI is the new frontier in healthcare. With its constantly evolving landscape, the healthcare […]

Excerpt:

Voice AI can generate up to 30% higher clinician productivity by automating healthcare use cases such as these:

  • Updating records
  • Provider duress
  • Platform orchestration
  • Shift management
  • Client data handoff
  • Home healthcare
  • Maintenance
  • Equipment ordering
  • Meal preferences
  • Case data queries
  • Patient schedules
  • Symptom logging
  • Treatment room setup
  • Patient condition education
  • Patient support recommendations
  • Medication advice
  • Incident management
  • … and many more

ChatGPT is poised to upend medical information. For better and worse. — from usatoday.com by Karen Weintraub

Excerpt:

But – and it’s a big “but” – the information these digital assistants provide might be more inaccurate and misleading than basic internet searches.

“I see no potential for it in medicine,” said Emily Bender, a linguistics professor at the University of Washington. By their very design, these large-language technologies are inappropriate sources of medical information, she said.

Others argue that large language models could supplement, though not replace, primary care.

“A human in the loop is still very much needed,” said Katie Link, a machine learning engineer at Hugging Face, a company that develops collaborative machine learning tools.

Link, who specializes in health care and biomedicine, thinks chatbots will be useful in medicine someday, but the technology isn’t ready yet.

 

Planning for AGI and beyond — from OpenAI.org by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 