Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly

Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.

It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.

The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.

From that large paper:

Key questions for the ethical and societal analysis of advanced AI assistants include:

  1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
  2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
  3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
  4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
  5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
  6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
  7. What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
  8. How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
  9. Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
 

Addressing equity and ethics in artificial intelligence — from apa.org by Zara Abrams
Algorithms and humans both contribute to bias in AI, but AI may also hold the power to correct or reverse inequities among humans

“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”


NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications — from natlawreview.com by James G. Gatto

The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) with recommendations on the legal, social, and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. The detailed Report reviews AI-based software, generative AI technology, and other machine learning tools that may enhance the profession but that also pose risks, both for individual attorneys grappling with new, unfamiliar technology and for courts concerned about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This is perhaps the most comprehensive report to date by a state bar association, and it is likely to stimulate much discussion.

For those of you who want the “CliffsNotes” version of this report, here is a table that summarizes, by topic, the various rules mentioned and a concise summary of the associated guidance.

The Report includes four primary recommendations:


 

 

 

The University Student’s Guide To Ethical AI Use  — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource

This comprehensive guide offers:

  • Up-to-date statistics on the current state of AI in universities and how institutions and students are currently using artificial intelligence
  • An overview of popular AI tools used in universities and their limitations as study tools
  • Tips on how to use AI ethically and how students can maximize its capabilities
  • Existing punishments and penalties for cheating with AI
  • A checklist of questions to ask yourself before, during, and after an assignment to ensure ethical use

Some of the key facts you might find interesting are:

  • The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
  • 68% of students say using AI has impacted their academic performance positively.
  • Educators using AI tools say the technology helps speed up their grading process by as much as 75%.
 

AI-related tools and tips dominate ’60 in 60′ Techshow session — from abajournal.com by Danielle Braff

Four days of seminars, lectures and demonstrations at the 39th annual ABA Techshow boiled down to Saturday morning’s grand finale, where panelists rounded up their favorite tech tips and apps. The underlying theme: artificial intelligence.

“It’s an amazing tool, but it’s kind of scary, so watch out,” said Cynthia Thomas, the Techshow co-chair and owner of PLMC & Associates, talking about the new tool from OpenAI, Sora, which takes text and turns it into video.

Other panelists during the traditional Techshow closer, “60 sites, 60 tips and gadgets and gizmos,” highlighted a wide range of AI-enabled or AI-augmented tools that help users perform a variety of tasks, such as quickly sifting through user reviews for products, generating content, or keeping up to date on the latest AI tools. For those looking for non-AI tips and tools, they also suggested several devices, websites, tips, and apps that have helped them with their practice and with life in general.


ABA Techshow 2024: Ethics in the Age of Legal Technology — from bnnbreaking.com by Rafia Tasleem

ABA Techshow 2024 stressed the importance of ethics in legal technology adoption. Ethics lawyer Stuart I. Teicher warned of the potential data breaches and urged attorneys to be proactive in understanding and supervising new tools. Education and oversight are key to maintaining data protection and integrity.


Startup Alley Competition Proves It Continues To Be All About AI — from abovethelaw.com by Joe Patrice

It might be more accurate to call TECHSHOW an industry showcase, because with each passing year it seems that more and more of the show involves other tech companies looking to scoop up enterprising new companies. That tone is set by the conference’s opening event: the annual Startup Alley pitch competition.

This year, 15 companies presented. If you were taking a shot every time someone mentioned “AI” then my condolences because you are now dead. If you included “machine learning” or “large language model” then you’ve died, come back as a zombie, and been killed again.


Here Are the Winners of ABA Techshow’s 8th Annual Startup Alley Pitch Competition — from lawnext.com by Bob Ambrogi

Here were the companies that won the top three spots:

  1. AltFee, a product that helps law firms replace the billable hour with fixed-fee pricing.
  2. Skribe.ai, an alternative to traditional court reporting that promises “a better way to take testimony.”
  3. Paxton AI, an AI legal assistant.

Class action firms ask US federal courts to encourage virtual testimony — from reuters.com by Nate Raymond

Summary:

  • Lawyers at Hagens Berman are leading charge to change rules
  • Proposal asks judiciary to ‘effectuate a long overdue modernization’ of rules

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the reach and influence of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

OpenAI announces leadership transition — from openai.com
Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

Excerpt (emphasis DSC):

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.


As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

From DSC:
I’m not here to pass judgment, but all of us on planet Earth should be at least concerned with this disturbing news.

AI is one of the most powerful sets of emerging technologies on the planet right now. OpenAI is arguably the most powerful vendor/innovator/influencer/leader in that space. And Sam Altman was the face of OpenAI — and arguably of AI itself. So this is a big deal.

What concerns me is what is NOT being relayed in this posting:

  • What was being hidden from OpenAI’s Board?
  • What else doesn’t the public know? 
  • Why is Greg Brockman stepping down as Chairman of the Board?

To whom much is given, much is expected.


Also related/see:

OpenAI CEO Sam Altman ousted, shocking AI world — from washingtonpost.com by Gerrit De Vynck and Nitasha Tiku
The artificial intelligence company’s directors said he was not ‘consistently candid in his communications with the board’

Altman’s sudden departure sent shock waves through the technology industry and the halls of government, where he had become a familiar presence in debates over the regulation of AI. His rise and apparent fall from tech’s top rung is one of the fastest in Silicon Valley history. In less than a year, he went from being Bay Area famous as a failed start-up founder who reinvented himself as a popular investor in small companies to becoming one of the most influential business leaders in the world. Journalists, politicians, tech investors and Fortune 500 CEOs alike had been clamoring for his attention.

OpenAI’s Board Pushes Out Sam Altman, Its High-Profile C.E.O. — from nytimes.com by Cade Metz

Sam Altman, the high-profile chief executive of OpenAI, who became the face of the tech industry’s artificial intelligence boom, was pushed out of the company by its board of directors, OpenAI said in a blog post on Friday afternoon.


From DSC:
Updates — I just saw these items

Sam Altman fired as CEO of OpenAI — from theverge.com by Jay Peters
In a sudden move, Altman is leaving after the company’s board determined that he ‘was not consistently candid in his communications.’ President and co-founder Greg Brockman has also quit.



 

Why Kindness at Work Pays Off — from hbr.org by Andrew Swinand; via Roberto Ferraro

Summary:
Whether you’re just entering the workforce, starting a new job, or transitioning into people management, kindness can be a valuable attribute that speaks volumes about your character, commitment, and long-term value. Here are a few simple routines you can integrate into your everyday work life that will spread kindness and help create a culture of kindness at your organization.

  • Practice radical self-care. The best way to be a valuable, thoughtful team member is to be disciplined about your own wellness — your physical, emotional, and mental well-being.
  • Do your job. Start with the basics by showing up on time and doing your job to the best of your ability. This is where your self-care practice comes into play — you can’t do your best work without taking care of yourself first.
  • Reach out to others with intention. Make plans to meet virtually or, even better, in person with your colleagues. Ask about their pets, their recent move, or their family. Most importantly, practice active listening.
  • Recognize and acknowledge people. Authentic, thoughtful interactions show that you’re thinking about the other person and reflecting on their unique attributes and value, which can cement social connections.
  • Be conscientious with your feedback. Being kind means offering feedback for the betterment of the person receiving it and the overall success of your company.

“When anxiety is high and morale is low, kindness isn’t a luxury — it’s a necessity. With mass layoffs, economic uncertainty, and geopolitical tensions, kindness is needed now more than ever, especially at work.”

 

41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids — from washingtonpost.com by Cristiano Lima and Naomi Nix
The action marks the most sprawling state challenge to date over social media’s impact on children’s mental health

Forty-one states and the District of Columbia are suing Meta, alleging that the tech giant harms children by building addictive features into Instagram and Facebook. Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health.

 

How can a young person stay on the path of purity?
    By living according to your word.
10 I seek you with all my heart;
    do not let me stray from your commands.
11 I have hidden your word in my heart
    that I might not sin against you.
12 Praise be to you, Lord;
    teach me your decrees.

Proverbs 19:20-21

20 Listen to advice and accept discipline,
and at the end you will be counted among the wise.
21 Many are the plans in a person’s heart,
but it is the Lord’s purpose that prevails.

Jeremiah 18:7-10

If at any time I announce that a nation or kingdom is to be uprooted, torn down and destroyed, and if that nation I warned repents of its evil, then I will relent and not inflict on it the disaster I had planned. And if at another time I announce that a nation or kingdom is to be built up and planted, 10 and if it does evil in my sight and does not obey me, then I will reconsider the good I had intended to do for it.

You have laid down precepts
    that are to be fully obeyed.
Oh, that my ways were steadfast
    in obeying your decrees!
Then I would not be put to shame
    when I consider all your commands.

 

Comparing Online and AI-Assisted Learning: A Student’s View — from educationnext.org by Daphne Goldstein
An 8th grader reviews traditional Khan Academy and its AI-powered tutor, Khanmigo

Hi everyone, I’m Daphne, a 13-year-old going into 8th grade.

I’m writing to compare “regular” Khan Academy (no AI) to Khanmigo (powered by GPT-4), using three of my own made-up criteria.

They are: efficiency, effectiveness, and enjoyability. Efficiency is how fast I am able to cover a math topic and get basic understanding. Effectiveness is my quality of understanding—the difference between basic and advanced understanding. And the final one—most important to kids and maybe least important to adults who make kids learn math—is enjoyability.


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


First Impressions with GPT-4V(ision) — from blog.roboflow.com by James Gallagher; via Donald Clark on LinkedIn

On September 25th, 2023, OpenAI announced the rollout of two new features that extend how people can interact with its recent and most advanced model, GPT-4: the ability to ask questions about images and to use speech as an input to a query.

This functionality marks GPT-4’s move into being a multimodal model. This means that the model can accept multiple “modalities” of input – text and images – and return results based on those inputs. Bing Chat, developed by Microsoft in partnership with OpenAI, and Google’s Bard model both support images as input, too. Read our comparison post to see how Bard and Bing perform with image inputs.

In this guide, we are going to share our first impressions with the GPT-4V image input feature.
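The multimodal request format described above can be sketched in code. The following is a minimal illustration, assuming the OpenAI chat-completions message convention in which `content` is a list of typed parts; the model name, question, and image URL are illustrative placeholders, not taken from the article:

```python
# Sketch: assembling a multimodal chat request that pairs a text question
# with an image, following the chat-completions convention where "content"
# is a list of typed parts. Model name and URL here are illustrative.

def build_vision_request(question: str, image_url: str,
                         model: str = "gpt-4-vision-preview") -> dict:
    """Build a request payload combining a text part and an image part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "What objects are visible in this photo?",
    "https://example.com/photo.jpg",
)
print(payload["messages"][0]["content"][1]["type"])  # image_url
```

The point of the two-part `content` list is that text and image travel in a single user turn, which is what makes the model “multimodal” from the caller’s perspective.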


 

Don’t Be Fooled: How You Can Master Media Literacy in the Digital Age — from youtube.com by Professor Sue Ellen Christian

During this special keynote presentation, Western Michigan University (WMU) professor Sue Ellen Christian speaks about the importance of media literacy for all ages and how we can help educate our friends and families about media literacy principles. Hosted by the Grand Rapids Public Library and GRTV, a program of the Grand Rapids Community Media Center. Special thanks to the Grand Rapids Public Library Foundation for their support of this program.

Excerpts:

Media Literacy is the ability to access, analyze, evaluate, and create media in a variety of forms. Center for Media Literacy

5 things to do when confronted with concerns about content.


Also relevant/see:

Kalamazoo Valley Museum’s newest exhibit teaches community about media literacy — from mlive.com by Gabi Broekema

 

2 Chronicles 15:2

He went out to meet Asa and said to him, “Listen to me, Asa and all Judah and Benjamin. The LORD is with you when you are with him. If you seek him, he will be found by you, but if you forsake him, he will forsake you.

1 Corinthians 15:3-8

3 For what I received I passed on to you as of first importance[a]: that Christ died for our sins according to the Scriptures, 4 that he was buried, that he was raised on the third day according to the Scriptures, 5 and that he appeared to Cephas, and then to the Twelve. 6 After that, he appeared to more than five hundred of the brothers and sisters at the same time, most of whom are still living, though some have fallen asleep. 7 Then he appeared to James, then to all the apostles, 8 and last of all he appeared to me also, as to one abnormally born.

John 6:29

Jesus answered, “The work of God is this: to believe in the one he has sent.”

Psalms 103:1-5

Praise the LORD, my soul; all my inmost being, praise his holy name. Praise the LORD, my soul, and forget not all his benefits— who forgives all your sins and heals all your diseases, who redeems your life from the pit and crowns you with love and compassion, who satisfies your desires with good things so that your youth is renewed like the eagle’s.

 


How to spot deepfakes created by AI image generators | Can you trust your eyes? | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this evolving technology can provide. The role makes its holder responsible for guiding companies through complex AI tools, algorithms, and development, all of which works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • Nvidia announced a new chip designed to run artificial intelligence models on Tuesday.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

AI for Education Webinars — from youtube.com by Tom Barrett and others



Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A simple, three-step guide on how to design assessments in a post-AI world

Excerpt:

Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.

They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.



Massive Disruption Now: What AI Means for Students, Educators, Administrators and Accreditation Boards — from stefanbauschard.substack.com by Stefan Bauschard; via Will Richardson on LinkedIn
The choices many colleges and universities make regarding AI over the next 9 months will determine if they survive. The same may be true for schools.

Excerpts:

Just for a minute, consider how education would change if the following were true:

  • AIs “hallucinated” less than humans
  • AIs could write in our own voices
  • AIs could accurately do math
  • AIs understood the unique academic (and eventually developmental) needs of each student and adapt instruction to that student
  • AIs could teach anything any student wanted or needed to know, any time of day or night
  • AIs could do this at a fraction of the cost of a human teacher or professor

Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that lower the time frame to a few years or less (Bengio); it’s just a question of when, not “if.”


On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander
A new practice with pedagogical possibilities

Excerpt:

Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?

How might this play out in a college or university class?

Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:


Generative Textbooks — from opencontent.org by David Wiley

Excerpt (emphasis DSC):

I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:

  • overviews and in-depth explanations of specific topics in a specific sequence,
  • examples that the learner finds personally relevant and interesting,
  • interactive practice – including open-ended exercises – with immediate, corrective feedback,
  • the structure of the relationships between ideas and concepts,
  • etc.
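Wiley's idea, a textbook that ships crafted prompts rather than static chapters, can be sketched in a few lines of Python. This is only an illustration of the concept; the `GenerativeTextbook` class, its methods, and the prompt text below are hypothetical, not part of any real product or of Wiley's post.

```python
# A minimal sketch of a "generative textbook": the author writes a structured
# collection of highly crafted prompt templates, and the learner fills in
# personal context before the prompt is sent to a large language model.
# All names here (GenerativeTextbook, add_prompt, render) are hypothetical.

from string import Template


class GenerativeTextbook:
    def __init__(self, title):
        self.title = title
        self.units = {}  # topic -> list of (name, prompt template) pairs

    def add_prompt(self, topic, name, template):
        """Authors add crafted prompt templates instead of static chapters."""
        self.units.setdefault(topic, []).append((name, Template(template)))

    def render(self, topic, name, **learner_context):
        """Learner personalizes a prompt; the result would go to an LLM."""
        for prompt_name, template in self.units.get(topic, []):
            if prompt_name == name:
                return template.substitute(**learner_context)
        raise KeyError(f"No prompt {name!r} under topic {topic!r}")


book = GenerativeTextbook("Intro to Statistics")
book.add_prompt(
    "sampling", "overview",
    "Explain simple random sampling to a learner interested in $interest, "
    "then give one worked example and one practice question with feedback.",
)

# The learner supplies context the author never saw; the rendered prompt
# would then be passed to a language model of the learner's choice.
prompt = book.render("sampling", "overview", interest="basketball analytics")
```

The point of the sketch is the division of labor: the author's craft lives in the templates and their sequencing, while the learner's interests and the model's generation fill in the rest.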

Also relevant/see:




Generating The Future of Education with AI — from aixeducation.com

AI in Education -- An online conference taking place August 5-6, 2023

Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.


How Teachers Are Using ChatGPT in Class — from edweek.org by Larry Ferlazzo

Excerpt:

The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.


Japan emphasizes students’ comprehension of AI in new school guidelines — from japantimes.co.jp by Karin Kaneko; via The Rundown

Excerpt:

The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.

Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.


AI’s Teachable Moment: How ChatGPT Is Transforming the Classroom — from cnet.com by Mark Serrels
Teachers and students are already harnessing the power of AI, with an eye toward the future.

Excerpt:

Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.

In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.


The AI Dictionary for Educators — from blog.profjim.com

To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I've since reworked and expanded my AI Dictionary for Educators to help fellow teachers embrace the advancements AI brings to education.


7 Strategies to Prepare Educators to Teach With AI — from edweek.org by Lauraine Langreo; NOTE: Behind paywall


 

Romans 12:10
Be devoted to one another in love. Honor one another above yourselves.

Proverbs 12:22
The Lord detests lying lips, but he delights in people who are trustworthy.

Psalms 65:3
When we were overwhelmed by sins, you forgave our transgressions.

Proverbs 13:10
Where there is strife, there is pride, but wisdom is found in those who take advice.

 
© 2024 | Daniel Christian