Teaching: What You Can Learn From Students About ChatGPT — from chronicle.com by Beth McMurtrie

Excerpts (emphasis DSC):

Like a lot of you, I have been wondering how students are reacting to the rapid launch of generative AI tools. And I wanted to point you to creative ways in which professors and teaching experts have helped involve them in research and policymaking.

At Kalamazoo College, Autumn Hostetter, a psychology professor, and six of her students surveyed faculty members and students to determine whether they could detect an AI-written essay, and what they thought of the ethics of using various AI tools in writing. You can read their research paper here.

Next, participants were asked about a range of scenarios, such as using Grammarly, using AI to make an outline for a paper, using AI to write a section of a paper, looking up a concept on Google and copying it directly into a paper, and using AI to write an entire paper. As expected, commonly used tools like Grammarly were considered the most ethical, while writing a paper entirely with AI was considered the least. But researchers found variation in how people approached the in-between scenarios. Perhaps most interesting: Students and faculty members shared very similar views on each scenario.

 


Also relevant/see:

This Was Written By a Human: A Real Educator’s Thoughts on Teaching in the Age of ChatGPT — from er.educause.edu by Jered Borup
The well-founded concerns surrounding ChatGPT shouldn’t distract us from considering how it might be useful.


 

 

From DSC:
After seeing this…

…I wondered:

  • Could GPT-4 create the “Choir Practice” app mentioned below?
    (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
  • Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts? (See the rough sketch below.)
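
If the score already exists in a digital format such as MusicXML, that second idea is largely a scripting exercise, and exactly the kind of code GPT-4 could plausibly draft on request. The snippet below is only a rough sketch, not a tested implementation: it assumes the open-source music21 library and a hypothetical file named anthem.musicxml, and it sidesteps the much harder problem of reading scanned sheet music.

```python
# Rough sketch: split a digital score into per-part files that singers
# could download and practice with. Assumes the music21 library and a
# hypothetical MusicXML file named "anthem.musicxml".
from music21 import converter

score = converter.parse("anthem.musicxml")  # load the full score

for part in score.parts:
    name = (part.partName or "part").replace(" ", "_").lower()
    # Write each voice part out as its own MIDI file (soprano.mid, alto.mid, ...)
    part.write("midi", fp=f"{name}.mid")
    print(f"Exported {name}.mid")
```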

This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”

Choir Practice -- an app for people who want to rehearse at home

And I want to say that when I went back to look at this posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated/hyped an idea up on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days.  🙂

 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to be no verification of who actually signed it. Yann LeCun of Meta denied signing the letter and said he completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 
 

Evolving Zoom IQ, our smart companion, with new features and a collaboration with OpenAI — from blog.zoom.us

Excerpt:

Today we’re announcing that we’re evolving the capabilities of Zoom IQ to become a smart companion that empowers collaboration and unlocks people’s potential by summarizing chat threads, organizing ideas, drafting content for chats, emails, and whiteboard sessions, creating meeting agendas, and more.

 


Also from Julie Sobowale, see:

  • Law’s AI revolution is here — from nationalmagazine.ca
    At least this much we know. Firms need to develop a strategy around language models.

Also re: legaltech, see:

  • Pioneers and Pathfinders: Richard Susskind — from seyfarth.com by J. Stephen Poor
    In our conversation, Richard discusses the ways we should all be thinking about legal innovation, the challenges of training lawyers for the future, and the qualifications of those likely to develop breakthrough technologies in law, as well as his own journey and how he became interested in AI as an undergraduate student.

Also re: legaltech, see:

There is an elephant in the room that is rarely discussed. Who owns the IP of AI-generated content?

 

ChatGPT and AI Applications for In-house Lawyers — from docket.acc.com by Spiwe L. Jefferson

Excerpt:

The explosive emergence of ChatGPT as a consumer tool has catapulted Artificial Intelligence (AI) and its subfield, natural language processing (NLP), to the technology stage. As AI and NLP continue to evolve, the use of AI-powered tools, such as the Generative Pre-trained Transformer (GPT), in the legal industry has become increasingly prevalent. Many lawyers are experimenting with AI and grappling with its applications to streamline practices and improve efficiency.

While GPT and other AI-powered tools can potentially revolutionize aspects of the legal profession, it is important to consider the current limitations and potential pitfalls in implementing these technologies.

017 | Post-Event Learnings w/ AI Prompts | Brainyacts #17 — from thebrainyacts.beehiiv.com

Excerpt:

Earlier this week some of you were at Legalweek in New York. Others of you joined Terri Mottershead and the Centre for Legal Innovation to talk about what consultants think about ChatGPT/Generative AI.

Far too many of us attend these events and never take the time to invest in ourselves and our organizations by capturing our learnings and insights.

In fact, I will go a step further.

If your organization paid for you to go, there is an obligation to transfer your personal experience into one that benefits the organization. Sort of a return on investment for the $ and time away from the office.

 


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to Human Teachers, Human Learners, AI Teachers, and AI Learners as a tetrad.


 

Law has a magic wand now — from jordanfurlong.substack.com by Jordan Furlong
Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.

Excerpts:

ChatGPT4 can also do things that only lawyers (used to be able to) do. It can look up and summarize a court decision, analyze and apply sections of copyright law, and generate a statement of claim for breach of contract.

What happens when you introduce a magic wand into the legal market? On the buyer side, you reduce by a staggering degree the volume of tasks that you need to pay lawyers (whether in-house or outside counsel) to perform. It won’t happen overnight: Developing, testing, revising, approving, and installing these sorts of systems in corporations will take time. But once that’s done, the beauty of LLMs like ChatGPT4 is that they are not expert systems. Anyone can use them. Anyone will.

But I can’t shake the feeling that someday, we’ll divide the history of legal services into “Before GPT4” and “After GPT4.” I think it’s that big.


From DSC:
Jordan mentions: “Some people think Large Language Models will transform the practice of law. I think it’s bigger than that.”

I agree with Jordan. It most assuredly IS bigger than that. AI will profoundly impact many industries/disciplines. The legal sector is but one of them. Education is another. People’s expectations are now changing — and the “ramification wheels” are now in motion.

I take the position that many others have taken as well (at least as of this point in time): that AI will supplement humans’ capabilities and activities. But those who know how to use AI-driven apps will outcompete those who don’t.

 

We Can’t Keep ChatGPT Out of the Classroom, so Let’s Address the ‘Why’ Behind Our Fears — from edsurge.com by Alice Domínguez

Excerpt:

ChatGPT offers us an opportunity to address our fears, release our fixation on preventing cheating and focus our attention on more worthy priorities: providing students with compelling reasons to write, inviting them to wrestle with important questions and crafting a piece of writing that cannot be mistaken for a robot’s work.

 

The Ultimate Compilation of 101 GPT-4 L&D Prompts — from innovationlounge.notion.site

Examples:

  1. Design a personalized learning plan for a new employee joining a software development team, including onboarding, skill assessment, and ongoing development.
  2. Describe a strategy for implementing a mentorship program within an organization to improve employee development and retention.
  3. Suggest a framework for evaluating the effectiveness of a company’s existing learning and development programs.
  4. Propose a gamification strategy for engaging employees in a company-wide training program on diversity and inclusion.
  5. Identify potential barriers to effective remote learning and provide recommendations for overcoming them in a virtual training environment.
  6. Develop a plan for a “lunch-and-learn” series to encourage cross-functional collaboration and skill-sharing among employees.
  7. Recommend a variety of cost-effective resources (books, online courses, webinars, etc.) for training employees in project management skills.
  8. Design a three-month training program for a sales team to improve their negotiation and closing skills, including specific activities and milestones.
  9. Outline a strategy for integrating microlearning techniques into an organization’s existing training approach to boost knowledge retention and engagement.
  10. Suggest an approach for identifying and addressing skills gaps within a team or department, including assessment tools and targeted training resources.
 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
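
From DSC:
To make Marcus’s “auto-complete on steroids” description concrete, here is a minimal sketch of next-word prediction, using the small, openly available GPT-2 model via Hugging Face’s transformers library (my choice of model and library is an assumption; the article names neither). It simply asks the model which words it rates as most likely to come next, which is the core operation a chatbot repeats one token at a time.

```python
# A minimal sketch of the "predict the next word" loop Marcus describes,
# using the small open GPT-2 model from Hugging Face's transformers library.
# (The model and library are my assumptions; the article names neither.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The students turned in their"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits    # shape: (batch, sequence_length, vocab_size)

next_token_logits = logits[0, -1]      # scores for the word that would come next
top = torch.topk(next_token_logits, k=5)

# Print the five continuations the model considers most plausible
for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
    print(repr(tokenizer.decode(token_id)), round(score, 2))
```

There is no separate “understanding” step anywhere in that loop, which is Marcus’s point.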


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 

How AI will revolutionize the practice of law — from brookings.edu by John Villasenor

Artificial intelligence (AI) is poised to fundamentally reshape the practice of law. 

Excerpt:

BROADENING ACCESS TO LEGAL SERVICES
AI also has the potential to dramatically broaden access to legal services, which are prohibitively expensive for many individuals and small businesses. As the Center for American Progress has written, “[p]romoting equal, meaningful access to legal representation in the U.S. justice system is critical to ending poverty, combating discrimination, and creating opportunity.”

AI will make it much less costly to initiate and pursue litigation. For instance, it is now possible with one click to automatically generate a 1000-word lawsuit against robocallers. More generally, drafting a well-written complaint will require more than a single click, but in some scenarios, not much more. These changes will make it much easier for law firms to expand services to lower-income clients.
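
From DSC:
To give a feel for what “not much more than a single click” might look like in practice, here is a hedged sketch of asking GPT-4 for a first-pass robocall complaint through the OpenAI API. The facts, prompt, and model name are illustrative assumptions on my part (the article doesn’t describe any particular implementation), the exact SDK interface varies by version, and any such draft would of course still need review by a licensed attorney.

```python
# Hedged sketch: ask GPT-4 to draft a first-pass complaint from a few
# structured facts. The facts, prompt, and model name are illustrative;
# any output would still need review by a licensed attorney.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

facts = {
    "plaintiff": "Jane Doe",
    "defendant": "Acme Robocalls LLC",
    "claim": "unsolicited automated calls in violation of the TCPA",
    "call_count": 27,
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You draft plain-language civil complaints for review by a licensed attorney."},
        {"role": "user",
         "content": f"Draft a roughly 1,000-word complaint using these facts: {facts}"},
    ],
)

print(response.choices[0].message.content)
```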

 

Begun, the AI lawsuits have — from bloomberg.com by Brad Stone

Excerpt:

A few intellectual property lawsuits have already been lobbed at the makers of these AI services. Getty sued Stability AI for building its image database by scraping millions of online images, many protected by copyright. In a similar suit, a San Francisco-based class action firm representing three artists sued Stability AI, DeviantArt and Midjourney for training their model with billions of copyrighted images. News publishers have also criticized OpenAI for using their articles to train its AI tools.

But the complaints focus on the creation and training of databases that feed these new AI engines — the inputs. It remains to be seen how rights holders will interpret the output — Yoda’s ramblings, for example — which have been coopted by Character.AI and its ilk without permission from Walt Disney Co.-owned Lucasfilm and other rights holders.

 

 
© 2025 | Daniel Christian