In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.


Also relevant/see:


 

You Are Not a Parrot — from nymag.com by Elizabeth Weil, a profile of linguist Emily M. Bender

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish; they will just change, and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
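To make the mechanism concrete, here is a minimal, self-contained sketch (scikit-learn on synthetic data, purely for illustration) of how flipping even a modest fraction of training labels degrades a model's accuracy on clean test data:

```python
# Toy illustration of label-flipping data poisoning (synthetic data, not a real attack).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a fraction of the training set, then measure accuracy on clean test data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"{int(frac * 100)}% of training labels flipped -> test accuracy {accuracy_with_poisoning(frac):.3f}")
```

Real-world poisoning attacks are subtler than wholesale label flipping (the article's point is that even small, targeted tampering is cheap), but the same principle applies: the model faithfully learns whatever its training data says.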

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools depends on what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or do we develop a new technology simply because we can? Who gives a rat’s tail about the ramifications?

 

ChatGPT for Spanish Classrooms — from rdene915.com by Nicole Biscotti, M. Ed.

Excerpt:

ChatGPT is just what the busy Spanish teacher necesita – no wasted time searching for the perfect “lectura” (text). Effective language instruction is coupled with learning about culture and now I’m able to generate texts in seconds AND I can even center them around a Latin American country, cultural point of interest, holiday, grammatical structure, etc.  Differentiation and personalized learning, those lofty teaching ideals that can feel a bit heavy when you mean well but have 35 kids in your room, have become that much easier to attain with ChatGPT.  It’s possible to generate texts about diverse aspects of culture in seconds and make adjustments for interests, length, rigor, etc. (Kuo & Lai, 2006) (Salaberry, 1999; Rost, 2002).

CURATING YOUR CLASSROOM WITH 9 MUST-HAVE TOOLS FOR RESOURCE COLLECTION – EASY EDTECH PODCAST 202 — from classtechtips.com by Monica Burns

Description:

How do you share resources with students? In this episode, we’ll focus on what happens after you find the very best resources to share with students. You’ll also hear about nine digital tools to help educators build a resource collection for students. So whether you have ten great resources on endangered species to share with your fourth graders or a dozen tutorial videos to share with your eleventh graders, this episode is for you!

50+ Useful AI Writing Tools to Know (2023) — from hongkiat.com

Excerpt:

AI writing tools generate content based on the keywords or prompt provided by users. You can then improve upon the output and make it suitable according to your own requirements.

There are different types of AI writing tools and in this post we are featuring some of the best ones. From content generators and editors to translators and typing assistants, there’s a whole gamut of AI-powered writing tools in the list. Take a look and see if one (or more) catches your interest.

How to Use Minecraft as a Teaching Tool — from intelligenthq.com

Excerpt:

Kids today have grown up with Minecraft, so it’s easy to get them enthusiastic about lessons using it. They can build anything they like, and use Minecraft skins to make the characters they create uniquely their own, getting them especially enthusiastic and involved in their lessons.

Teachers who learn how to use Minecraft as a teaching tool have found that it noticeably improves problem solving, creativity, and the ability to work together. It teaches both 21st century skills and timeless lessons.


On a somewhat related note, also see:


 

Introducing: ChatGPT Edu-Mega-Prompts — from drphilippahardman.substack.com by Dr. Philippa Hardman; with thanks to Ray Schroeder out on LinkedIn for this resource
How to combine the power of AI + learning science to improve your efficiency & effectiveness as an educator

From DSC:
Before relaying some excerpts, I want to say that I get the gist of what Dr. Hardman is saying re: quizzes, but I’m surprised to hear she has so many pedagogical concerns with them. I, too, would like to see quizzes used as an instrument of learning and for practicing recall — and not just for assessment. But I would give quizzes a bigger thumbs-up than she did. I think she was also trying to say that quizzes don’t always identify misconceptions or inaccurate foundational information.

Excerpts:

The Bad News: Most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content + quiz).

The Good News: Armed with prompts which are carefully crafted to ask the right thing in the right way, educators can use AI like GPT3 to improve the effectiveness of their instructional practices.

As is always the case, ChatGPT is your assistant. If you’re not happy with the result, you can edit and refine it using your expertise, either alone or through further conversation with ChatGPT.

For example, once the first response is generated, you can ask ChatGPT to make the activity more or less complex, to change the scenario and/or suggest more or different resources – the options are endless.
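For educators who would rather script this kind of iterative refinement than work in the chat window, a rough sketch using OpenAI's Python client (v1.x style) might look like the following; the model name, prompts, and refinement request are illustrative assumptions, not Dr. Hardman's own mega-prompts:

```python
# Sketch: send an instructional-design prompt, then refine the result in the same conversation.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "You are an experienced instructional designer."},
    {"role": "user", "content": (
        "Design a 30-minute active-learning activity on photosynthesis for 9th graders. "
        "Include a real-world scenario, three guiding questions, and two suggested resources."
    )},
]

first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(first.choices[0].message.content)

# Keep the conversation going and ask for an adjustment, just as you would in the chat window.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Make the activity less complex and center the scenario on houseplants."})

revised = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(revised.choices[0].message.content)
```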

Philippa recommended checking out Rob Lennon’s streams of content. Here’s an example from his Twitter account:


Also relevant/see:


3 Trends That May Unlock AI’s Potential for L&D in 2023 — from learningguild.com by Juan Naranjo

Excerpts:

AI-assisted design and development work
This is the trend most likely to have a dramatic evolution this year.

Solutions like large language models, speech generators, content generators, image generators, translation tools, transcription tools, and video generators, among many others, will transform the way IDs create the learning experiences our organizations use. Two examples are:

1. IDs will be doing more curation and less creation:

  • Many IDs will start pulling raw material from content generators (built using natural language processing platforms like OpenAI’s GPT-3, Microsoft’s LUIS, IBM’s Watson, Google’s BERT, etc.) to obtain ideas and drafts that they can then clean up and add to the assets they are assembling. As technology advances, the output from these platforms will be more suitable to become final drafts, and the curation and clean-up tasks will be faster and easier.
  • Then, the designer can leverage a solution like DALL-E 2 (or a product developed from it) to obtain visuals that can then be modified, if desired, with programs like Illustrator or Photoshop (see, for example, DALL-E’s “Cubist interpretation of AI and brain science”). A rough sketch of this draft-then-curate workflow appears after this list.

2. IDs will spend less, and in some cases no time at all, creating learning pathways

AI engines contained in LXPs and other platforms will select the right courses for employees and guide these learners from their current level of knowledge and skill to their goal state with substantially less human intervention.
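As promised above, here is a rough sketch of the draft-then-curate workflow from point 1, using OpenAI's Python client (v1.x style); the prompts and model names are illustrative assumptions, and everything generated would still need an instructional designer's review:

```python
# Sketch: pull a rough text draft plus a candidate visual that an ID can then curate and polish.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# 1. A rough text draft to clean up and fold into the learning asset.
draft = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a 150-word plain-language explainer of compound interest for new employees."}],
)
print(draft.choices[0].message.content)

# 2. A candidate visual to refine in Illustrator or Photoshop if needed.
image = client.images.generate(
    model="dall-e-2",
    prompt="Cubist interpretation of AI and brain science",
    n=1,
    size="512x512",
)
print(image.data[0].url)  # URL of the generated image
```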

 


The Creator of ChatGPT Thinks AI Should Be Regulated — from time.com by John Simons

Excerpts:

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?

This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.


Whispers of A.I.’s Modular Future — from newyorker.com by James Somers; via Sam DeBrule

Excerpts:

Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.

Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them.

Ever since I’ve had tape to type up—lectures to transcribe, interviews to write down—I’ve dreamed of a program that would do it for me. The transcription process took so long, requiring so many small rewindings, that my hands and back would cramp. As a journalist, knowing what awaited me probably warped my reporting: instead of meeting someone in person with a tape recorder, it often seemed easier just to talk on the phone, typing up the good parts in the moment.

From DSC:
Journalism majors — and even seasoned journalists — should keep an eye on this type of application, as it will save them a significant amount of time and/or money.
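For anyone who wants to try this locally, a minimal sketch using the open-source openai-whisper package might look like the following (the audio filename is a placeholder; larger models are slower but generally more accurate):

```python
# Sketch: transcribe a recording locally with OpenAI's open-source Whisper model.
# Requires: pip install openai-whisper (plus ffmpeg available on the system).
import whisper

model = whisper.load_model("base")          # options include "tiny", "base", "small", "medium", "large"
result = model.transcribe("interview.mp3")  # placeholder filename

print(result["text"])                       # the full transcript as one string
for segment in result["segments"]:          # timestamped segments, handy for long tape
    print(f'[{segment["start"]:7.1f}s] {segment["text"]}')
```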

Microsoft Teams Premium: Cut costs and add AI-powered productivity — from microsoft.com by Nicole Herskowitz

Excerpt:

Built on the familiar, all-in-one collaborative experience of Microsoft Teams, Teams Premium brings the latest technologies, including Large Language Models powered by OpenAI’s GPT-3.5, to make meetings more intelligent, personalized, and protected—whether it’s one-on-one, large meetings, virtual appointments, or webinars.


 


 

Meet MathGPT: a Chatbot Tutor Built Specific to a Math Textbook — from thejournal.com by Kristal Kuykendall

Excerpt:

Micro-tutoring platform PhotoStudy has unveiled a new chatbot built on OpenAI’s ChatGPT APIs that can teach a complete elementary algebra textbook with “extremely high accuracy,” the company said.

“Textbook publishers and teachers can now transform their textbooks and teaching with a ChatGPT-like assistant that can teach all the material in a textbook, assess student progress, provide personalized help in weaker areas, generate quizzes with support for text, images, audio, and ultimately a student customized avatar for video interaction,” PhotoStudy said in its news release.

Some sample questions the MathGPT tool can answer:

    • “I don’t know how to solve a linear equation…”
    • “I have no idea what’s going on in class but we are doing Chapter 2. Can we start at the top?”
    • “Can you help me understand how to solve this mixture of coins problem?”
    • “I need to practice for my midterm tomorrow, through Chapter 6. Help.”
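PhotoStudy has not published its implementation, but the general pattern it describes (a chat model constrained by a system prompt to the scope of a particular textbook, with the running conversation kept as context) can be sketched roughly as follows; the model name, chapter notes, and prompts are all illustrative assumptions:

```python
# Rough sketch of a textbook-scoped tutoring loop; NOT PhotoStudy's actual MathGPT implementation.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

CHAPTER_2_NOTES = "Linear equations in one variable: ax + b = c, solved by isolating x..."  # placeholder excerpt

messages = [{
    "role": "system",
    "content": (
        "You are a patient elementary-algebra tutor. Teach only from the chapter notes below, "
        "ask the student one question at a time, and check understanding before moving on.\n\n"
        + CHAPTER_2_NOTES
    ),
}]

while True:
    student = input("Student: ")
    if student.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": student})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Tutor:", answer)
```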
 

 

Contracts Company Ironclad Taps Into GPT-3 For Instant Document Redlining Based On A Company’s Playbook — from lawnext.com by Robert Ambrogi

Excerpt:

The contract lifecycle management company Ironclad has tapped into the power of OpenAI’s GPT-3 to introduce AI Assist, a beta feature that instantly redlines contracts based on a company’s playbook of approved clauses and language.

The redlines, made using GPT-3’s generative artificial intelligence, appear as tracked changes in Microsoft Word, where a user can then scan the recommended changes and either accept or reject them.
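Ironclad has not shared the internals of AI Assist, but the underlying pattern (give a generative model the company's preferred playbook language plus the incoming clause and ask for a revision a lawyer can accept or reject) can be sketched roughly like this; the clauses, prompt, and model name are illustrative assumptions:

```python
# Rough sketch of playbook-based clause revision; NOT Ironclad's actual AI Assist implementation.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

playbook_clause = (
    "Limitation of liability: neither party's aggregate liability shall exceed the fees paid "
    "in the twelve (12) months preceding the claim."
)
incoming_clause = (
    "Customer's liability is unlimited; Vendor's aggregate liability shall not exceed $1,000."
)

prompt = (
    "You are reviewing a contract clause against our approved playbook language.\n\n"
    f"Playbook (preferred) language:\n{playbook_clause}\n\n"
    f"Clause received from the counterparty:\n{incoming_clause}\n\n"
    "Rewrite the received clause to match the playbook position, then briefly list each change "
    "so a lawyer can accept or reject it."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```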




 

Educator considerations for ChatGPT — from platform.openai.com; with thanks to Anna Mills for this resource

Excerpt:

Streamlined and personalized teaching
Some examples of how we’ve seen educators exploring how to teach and learn with tools like ChatGPT:

  • Drafting and brainstorming for lesson plans and other activities
  • Help with design of quiz questions or other exercises
  • Experimenting with custom tutoring tools
  • Customizing materials for different preferences (simplifying language, adjusting to different reading levels, creating tailored activities for different interests)
  • Providing grammatical or structural feedback on portions of writing
  • Use in upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)
  • Critique AI generated text

While several of the above draw on ChatGPT’s potential to be explored as a tool for personalization, there are risks associated with such personalization as well, including student privacy, biased treatment, and development of unhealthy habits. Before students use tools that offer these services without direct supervision, they and their educators should understand the limitations of the tools outlined below.

Also relevant/see:

Excerpt (emphasis DSC):
David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items.  The point is that LLMs can quickly generate rough drafts that are mostly accurate drafts, that humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do  instructional designers need in their role as editors and users of LLMs?

This resonated with me. Instructional Designer positions are starting to require AI and ML chops. I’m introducing my grad students to AI and ChatGPT this semester. I have an assignment based on it.

(This ain’t your father’s instructional design…)

Robert Gibson
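On the practical side, here is a rough sketch of what asking an LLM for such a first draft might look like, using OpenAI's Python client (v1.x style) and Wiley's microeconomics scenario; the prompt, model name, and JSON format are illustrative assumptions, and the output is only a starting point for the human editor:

```python
# Sketch: request draft formative-assessment items that an instructional designer then reviews and edits.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
import json
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft three multiple-choice formative assessment items for an introductory microeconomics "
    "lesson on price elasticity of demand. Return JSON: a list of objects with keys "
    "'question', 'options' (four strings), and 'answer'."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

raw = response.choices[0].message.content
try:
    items = json.loads(raw)  # the model usually, but not always, returns clean JSON
except json.JSONDecodeError:
    items = []
    print("Model did not return valid JSON; review the raw draft instead:\n", raw)

for item in items:  # the human editor reviews, augments, and polishes from here
    print(item["question"], item["options"], item["answer"])
```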


 

The practical guide to using AI to do stuff — from oneusefulthing.substack.com by Ethan Mollick; with thanks to Sam DeBrule for this resource. Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania where he studies entrepreneurship & innovation, as well as how we can better learn and teach.
A resource for students in my classes (and other interested people).

Excerpts:

My classes now require AI (and if I didn’t require AI use, it wouldn’t matter, everyone is using AI anyway). But how can students use AI well? Here is a basic tutorial and guide I am providing my classes. It covers some of the many ways to use AI to be more productive, creative, and successful, using the technology available in early 2023, as well as some of the risks.

Come up with ideas 
Open Source Option: Nothing very good
Best free (for now) option: ChatGPT (registration may require a phone number)
Best option if ChatGPT is down: OpenAI Playground


Also relevant/see:

ChatGPT for Educators — a free 17-lesson course

 



On a relevant note:

Gen Z says school is not equipping them with the skills they need to survive in a digital world — from fastcompany.com by Shalene Gupta; with thanks to Robert Gibson for this resource
According to a study from Dell Technologies, Gen Z-ers in 15 different countries feel their government could do better.

Excerpt:

They see an education and skills gap: Forty-four percent said that school only taught them very basic computing skills, while 37% said that school education (for children under age 16) didn’t prepare them with the technology skills they needed for their planned careers. Forty percent consider learning new digital skills essential to future career options.

It’s clear that Gen Z see technology as pivotal for their future prosperity. It is now up to us—leading technology providers, governments, and the public sector—to work together and set them up for success by improving the quality and access to digital learning. Forty-four percent of Gen Z feel educators and businesses should work together to bridge the digital skills gap, and with the speed at which technology continues to evolve, this will require constant collaboration.

Aongus Hegarty, president of international markets at Dell Technologies


 

Revolutionising Criminal Law with AI — from seotraininglondon.org by Danny Richman
This case study outlines how I helped a law firm use Artificial Intelligence (AI) to streamline new client enquiries, resulting in significant savings of time and money.

Excerpt:

However, this process took up a lot of time and resources, meaning that highly qualified, well-paid individuals had to dedicate their time and energy to processing email enquiries instead of working on client cases.

That’s why I developed an app for Stuart Miller built on OpenAI’s GPT-3 technology. This app receives the content of the client’s email and makes the same determination as the human team of lawyers. It then immediately alerts the relevant lawyer to any enquiries flagged as high-priority, high-value cases. The entire process is automated, requiring no human interaction.

From DSC:
Hmmm…something to keep on the radar.
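Richman's actual app isn't public, but the core triage step he describes (classify the enquiry, then immediately alert a lawyer for high-priority cases) could be sketched along these lines; the categories, prompt, and example email are illustrative assumptions:

```python
# Rough sketch of GPT-based enquiry triage; NOT the actual app built for the law firm.
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def triage_enquiry(email_body: str) -> str:
    """Return a one-word priority label for an incoming enquiry email."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "You triage enquiries for a criminal defence firm. "
                "Reply with exactly one word: HIGH, MEDIUM, or LOW."
            )},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content.strip().upper()

enquiry = "I have been charged with a serious offence and my first hearing is on Monday."  # example
label = triage_enquiry(enquiry)
if label.startswith("HIGH"):
    print("Alerting the relevant lawyer immediately...")  # e.g., send an email or chat notification
else:
    print(f"Queued as {label} priority.")
```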


Also relevant/see:

Here’s Why Lawyers Are Paying Attention to ChatGPT — from legallydisrupted.com by Zach Abramowitz
AI Will Continue to Be a Talking Point Throughout the Year

Excerpts:

Ready to get disrupted? Me neither, but let’s take the plunge.

ChatGPT is all anyone in legal wants to talk about right now, and for good reason.

Smash cut to yesterday, and this webinar focusing on ChatGPT is sold out and the sheer number of questions from the audience (which ranged from law students to in-house counsel and law firm partners) was more than 10x a normal webinar.

The point is that I’m not in a bubble this time. Everyone in legal is paying attention to ChatGPT, not just the legaltech nerds. This @#$% is going mainstream.


 

ChatGPT can’t be credited as an author, says world’s largest academic publisher — from theverge.com by James Vincent; with thanks to Robert Gibson on LinkedIn for the resource
But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed.

Excerpt:

Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.


On somewhat-related notes:

Uplevel your prompt craft in ChatGPT with the CREATE framework — from edte.ch by Tom Barrett

Excerpt:

The acronym “CREATE” is a helpful guide for crafting high-quality prompts for AI tools. Each letter represents an important step in the process.

The first four letters, CREA, are all part of prompt writing, while the final two, TE, form a cycle of reviewing and editing your prompts.

Let’s look at each in more detail, with some examples from ChatGPT to help.

BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content — from wsj.com by Alexandra Bruell (behind paywall)
CEO Jonah Peretti intends for artificial intelligence to play a larger role in the company this year


 

Some example components of a learning ecosystem [Christian]

A learning ecosystem is composed of people, tools, technologies, content, processes, culture, strategies, and any other resource that helps one learn. Learning ecosystems can be at an individual level as well as at an organizational level.

Some example components:

  • Subject Matter Experts (SMEs) such as faculty, staff, teachers, trainers, parents, coaches, directors, and others
  • Fellow employees
  • L&D/Training professionals
  • Managers
  • Instructional Designers
  • Librarians
  • Consultants
  • Types of learning
    • Active learning
    • Adult learning
    • PreK-12 education
    • Training/corporate learning
    • Vocational learning
    • Experiential learning
    • Competency-based learning
    • Self-directed learning (i.e., heutagogy)
    • Mobile learning
    • Online learning
    • Face-to-face-based learning
    • Hybrid/blended learning
    • Hyflex-based learning
    • Game-based learning
    • XR-based learning (AR, MR, and VR)
    • Informal learning
    • Formal learning
    • Lifelong learning
    • Microlearning
    • Personalized/customized learning
    • Play-based learning
  • Cloud-based learning apps
  • Coaching & mentoring
  • Peer feedback
  • Job aids/performance tools and other on-demand content
  • Websites
  • Conferences
  • Professional development
  • Professional organizations
  • Social networking
  • Social media – Twitter, LinkedIn, Facebook/Meta, other
  • Communities of practice
  • Artificial Intelligence (AI) — including ChatGPT, learning agents, learner profiles, 
  • LMS/CMS/Learning Experience Platforms
  • Tutorials
  • Videos — including on YouTube, Vimeo, other
  • Job-aids
  • E-learning-based resources
  • Books, digital textbooks, journals, and manuals
  • Enterprise social networks/tools
  • RSS feeds and blogging
  • Podcasts/vodcasts
  • Videoconferencing/audio-conferencing/virtual meetings
  • Capturing and sharing content
  • Tagging/rating/curating content
  • Decision support tools
  • Getting feedback
  • Webinars
  • In-person workshops
  • Discussion boards/forums
  • Chat/IM
  • VOIP
  • Online-based resources (periodicals, journals, magazines, newspapers, and others)
  • Learning spaces
  • Learning hubs
  • Learning preferences
  • Learning theories
  • Microschools
  • MOOCs
  • Open courseware
  • Portals
  • Wikis
  • Wikipedia
  • Slideshare
  • TED talks
  • …and many more components.

These people, tools, technologies, etc. are constantly morphing — as well as coming and going in and out of our lives.

 

 

Top edtech trends in 2023 and the ASU example — from news.asu.edu

Excerpt:

In spite of our tendency to break things down into tidy time frames, like a new year or academic semester, change constantly turns over the status quo. Especially in the world of technology, where disruptive innovation may evolve rapidly from the fringe to the mainstream.

“At ASU’s Enterprise Technology, we work in spaces where technology is not just revolutionizing higher education, but the world at large,” said Lev Gonick, chief information officer at Arizona State University. “We strive to be proactive, not reactive, to new paradigms changing the ways in which we work, learn and thrive.”

As referenced by the above article:

Thus, the top higher education technology trends to watch out for in 2023 include Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), Digital Twins, the Metaverse (including digital avatars and NFT art for use in the Metaverse and other Web3-based virtual environments), Internet of Things (IoT), Blockchain, Cloud, Gamification, and Chatbots. These technologies will support the expansion of the Digital Transformation of higher education going forward.


 

 

9 ways ChatGPT saves me hours of work every day, and why you’ll never outcompete those who use AI effectively. — from linkedin.com by Santiago Valdarrama

Excerpts:

A list for those who write code:

  1. Explaining code…
  2. Improve existing code…
  3. Rewriting code using the correct style…
  4. Rewriting code using idiomatic constructs…
  5. Simplifying code…
  6. Writing test cases…
  7. Exploring alternatives…
  8. Writing documentation…
  9. Tracking down bugs…
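As one concrete illustration of item 6 (writing test cases), here is a minimal sketch that sends a small function to the API and asks for pytest tests; the function, prompt, and model name are illustrative choices, not part of Valdarrama's post:

```python
# Sketch: ask the model to draft pytest test cases for an existing function (item 6 above).
# Assumes the openai Python package (v1.x style) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

code_under_test = '''
def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug."""
    return "-".join(word.lower() for word in title.split() if word.isalnum())
'''

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write pytest test cases for this function, including edge cases:\n\n" + code_under_test,
    }],
)
print(response.choices[0].message.content)  # review the generated tests before adding them to the suite
```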

Also relevant/see:

A Chat With Dead Legends & 3 Forecasts: The Return of Socratic Method, Assertive Falsehoods, & What’s Investable? — from implications.com by Scott Belsky
A rare “Cambrian Moment” is upon us, and the implications are both mind-blowing and equally concerning; let’s explore the impact of a few forecasts in particular.

Excerpts:

Three forecasts are becoming clear…

  • Education will be reimagined by AI tools.
  • AI-powered results will be both highly confident and often wrong; this dangerous combo of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.
  • The defensibility of these AI capabilities as stand-alone companies will rely on data moats, privacy preferences for consumers and enterprises, developer ecosystems, and GTM advantages. (still brewing, but let’s discuss)

As I suggested in Edition 1, ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?

  • The return of the Socratic method, at scale and on-demand…
  • The art and science of prompt engineering…
  • The bar for teaching will rise, as traditional research for paper-writing and memorization become antiquated ways of building knowledge.

 
© 2024 | Daniel Christian