YouTube tests AI-generated quizzes on educational videos — from techcrunch.com by Lauren Forristal


YouTube is experimenting with AI-generated quizzes on its mobile app for iOS and Android; the quizzes are designed to help viewers learn more about a subject featured in an educational video. The feature will also help the video-sharing platform get a better understanding of how well each video covers a certain topic.


Incorporating AI in Teaching: Practical Examples for Busy Instructors — from danielstanford.substack.com by Daniel Stanford; with thanks to Derek Bruff on LinkedIn for the resource

Since January 2023, I’ve talked with hundreds of instructors at dozens of institutions about how they might incorporate AI into their teaching. Through these conversations, I’ve noticed a few common issues:

  • Faculty and staff are overwhelmed and burned out. Even those on the cutting edge often feel they’re behind the curve.
  • It’s hard to know where to begin.
  • It can be difficult to find practical examples of AI use that are applicable across a variety of disciplines.

To help address these challenges, I’ve been working on a list of AI-infused learning activities that encourage experimentation in (relatively) small, manageable ways.


September 2023: The Secret Intelligent Beings on Campus — from stefanbauschard.substack.com by Stefan Bauschard
Many of your students this fall will be enhanced by artificial intelligence, even if they don’t look like actual cyborgs. Do you want all of them to be enhanced, or just the highest SES students?


How to report better on artificial intelligence — from cjr.org (Columbia Journalism Review) by Sayash Kapoor, Hilke Schellmann, and Ari Sen

In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society.

Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way.

We—an investigative reporter, a data journalist, and a computer scientist—have firsthand experience investigating AI. We’ve seen the tremendous potential these tools can have—but also their tremendous risks.

As their adoption grows, we believe that, soon enough, many reporters will encounter AI tools on their beat, so we wanted to put together a short guide to what we have learned.


AI

From DSC:
Something I created via Adobe Firefly (Beta version)

 


The 5 reasons L&D is going to embrace ChatGPT — from chieflearningofficer.com by Josh Bersin

Does this mean it will do away with the L&D job? Not at all — these tools give you superhuman powers to find content faster, put it in front of employees in a more useful way and more creatively craft character simulations, assessments, learning in the flow of work and more.

And it’s about time. We really haven’t had a massive innovation in L&D since the early days of the learning experience platform market, so we may be entering the most exciting era in a long time.

Let me give you the five most significant use cases I see. And more will come.


AI and Tech with Scenarios: ID Links 7/11/23 — from christytuckerlearning.com by Christy Tucker

As I read online, I bookmark resources I find interesting and useful. I share these links periodically here on my blog. This post includes links on using tech with scenarios: AI, xAPI, and VR. I’ll also share some other AI tools and links on usability, resume tips for teachers, visual language, and a scenario sample.



It’s only a matter of time before A.I. chatbots are teaching in primary schools — from cnbc.com by Mikaela Cohen

Key Points

  • Microsoft co-founder Bill Gates says generative AI chatbots can teach kids to read in 18 months rather than years.
  • Artificial intelligence is beginning to prove that it can accelerate the impact teachers have on students and help solve a stubborn teacher shortage.
  • Chatbots backed by large language models can help students, from primary education to certification programs, self-guide through voluminous materials and tailor their education to specific learning styles [preferences].

The Rise of AI: New Rules for Super T Professionals and Next Steps for EdLeaders — from gettingsmart.com by Tom Vander Ark

Key Points

  • The rise of artificial intelligence, especially generative AI, boosts productivity in content creation: text, code, images, and increasingly video.
  • Here are six preliminary conclusions about the nature of work and learning.

The Future Of Education: Embracing AI For Student Success — from forbes.com by Dr. Michael Horowitz

Unfortunately, too often attention is focused on the problems of AI—that it allows students to cheat and can undermine the value of what teachers bring to the learning equation. This viewpoint ignores the immense possibilities that AI can bring to education and across every industry.

The fact is that students have already embraced this new technology, which is neither a new story nor a surprising one in education. Leaders should accept this and understand that people, not robots, must ultimately create the path forward. It is only by deploying resources, training and policies at every level of our institutions that we can begin to realize the vast potential of what AI can offer.


AI Tools in Education: Doing Less While Learning More — from campustechnology.com by Mary Grush
A Q&A with Mark Frydenberg


Why Students & Teachers Should Get Excited about ChatGPT — from ivypanda.com with thanks to Ruth Kinloch for this resource


Excerpt re: Uses of ChatGPT for Teachers

  • Diverse assignments.
  • Individualized approach.
  • Interesting classes.
  • Debates.
  • Critical thinking.
  • Grammar and vocabulary.
  • Homework review.

SAIL: State of Research: AI & Education — from buttondown.email by George Siemens
Information re: current AI and Learning Labs, education updates, and technology


Why ethical AI requires a future-ready and inclusive education system — from weforum.org


A specter is haunting higher education — from aiandacademia.substack.com by Bryan Alexander
Fall semester after the generative AI revolution

In this post I’d like to explore that apocalyptic model. For reasons of space, I’ll leave off analyzing student cheating motivations or questioning the entire edifice of grade-based assessment. I’ll save potential solutions for another post.

Let’s dive into the practical aspects of teaching to see why Mollick and Bogost foresee such a dire semester ahead.


Items re: Code Interpreter

Code Interpreter continues OpenAI’s long tradition of giving terrible names to things, because it might be most useful for those who do not code at all. It essentially allows the most advanced AI available, GPT-4, to upload and download information, and to write and execute programs for you in a persistent workspace. That allows the AI to do all sorts of things it couldn’t do before, and be useful in ways that were impossible with ChatGPT.
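From DSC: to make the “write and execute programs for you” part concrete, here is the kind of short analysis script such a tool might generate and run in its workspace. This is purely an illustrative sketch; the uploaded file name and its columns are hypothetical.

```python
# Illustrative only: the sort of short script a Code Interpreter-style tool
# might write and execute against a file the user uploads.
# "sales.csv" and its "region"/"revenue" columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")                      # the uploaded file
summary = df.groupby("region")["revenue"].sum()    # aggregate by region
print(summary.sort_values(ascending=False))        # text answer shown back in the chat

summary.plot(kind="bar", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")               # a file the user can download
```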



Legal items


MISC items


 

Coursera’s Global Skills Report for 2023 — from coursera.org
Benchmark talent and transform your workforce with skill development and career readiness insights drawn from 124M+ learners.

Excerpt:

Uncover global skill trends
See how millions of registered learners in 100 countries are strengthening critical business, technology, and data science skills.

 

The Cambrian Explosion of AI Edtech Is Here — from edtechinsiders.substack.com by Alex Sarlin, Sarah Morin, and Ben Kornell

Excerpt:

Our AI in Edtech Takeaways

After chronicling 160+ AI tools (which is surely only a small fraction of the total), we’re seeing a few clear patterns among the tools that have come out so far. Here are 10 categories that are jumping out!

  • Virtual Teaching Assistants
  • Virtual Tutors
  • AI-Powered Study Tools
  • Educational Content Creation
  • Educational Search
  • Auto-generated Learning Paths
  • AI-Powered Research
  • Speak to Characters
  • Grammar and Writing
  • AI Cheating Detection

 


Ready or not, AI is here — from chronicle.com’s The Edge newsletter, by Goldie Blumenstyk

Excerpt:

“I don’t usually get worked up about announcements but I see promise in JFF’s plans for a new Center for Artificial Intelligence & the Future of Work, in no small part because the organization bridges higher ed, K-12 education, employers, and policymakers.”

Goldie Blumenstyk

Goldie’s article links to:

Jobs for the Future Launches New Center for Artificial Intelligence & the Future of Work — from archive.jff.org
Center launches as JFF releases preliminary survey data which finds a majority of workers feel they need new skills and training to prepare for AI’s future impact.

Excerpt:

BOSTON June 14, 2023 — Jobs for the Future (JFF), a national nonprofit that drives transformation in the U.S. education and workforce systems, today announced the launch of its new Center for Artificial Intelligence & the Future of Work. This center will play an integral role in JFF’s mission and newly announced 10-year North Star goal to help 75 million people facing systemic barriers to advancement work in quality jobs. As AI’s explosive growth reshapes every aspect of how we learn, work, and live, this new center will serve as a nexus of collaboration among stakeholders from every part of the education-to-career ecosystem to explore the most promising opportunities—and profound challenges—of AI’s potential to advance an accessible and equitable future of learning and work.

 


OpenAI Considers ‘App Store’ For ChatGPT — from searchenginejournal.com; with thanks to Barsee at AI Valley for this resource
OpenAI explores launching an ‘app store’ for AI models, potentially challenging current partners and expanding customer reach.

Highlights:

  • OpenAI considers launching an ‘app store’ for customized AI chatbots.
  • This move could create competition with current partners and extend OpenAI’s customer reach.
  • Early interest from companies like Aquant and Khan Academy shows potential, but product development and market positioning challenges remain.


Wonder Tools: AI to try — from wondertools.substack.com by Jeremy Caplan
9 playful little ways to explore AI

Excerpt:

  1. Create a personalized children’s story | Schrodi
    Collaborate with AI on a free customized, illustrated story for someone special. Give your story’s hero a name, pick a genre (e.g. comedy, thriller), choose an illustration style (e.g. watercolor, 3d animation) and provide a prompt to shape a simple story. You can even suggest a moral. After a minute, download a full-color PDF to share. Or print it and read your new mini picture book aloud.
  2. Generate a quiz | Piggy
    Put in a link, a topic, or some text and you’ll get a quiz you can share, featuring multiple-choice or true-false questions. Example: try this quick entrepreneurship quiz Piggy generated for me. [See the sketch below for how such a generator might work.]
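Piggy’s internals aren’t public, but as a rough sketch, a quiz generator like this can be built around a single prompt to a large language model. The snippet below assumes the openai Python package (v1-style client) with an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative, not Piggy’s actual implementation.

```python
# A rough, hypothetical sketch of LLM-based quiz generation (not Piggy's code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_quiz(source_text: str, num_questions: int = 5) -> str:
    """Ask the model for multiple-choice questions about the given text."""
    prompt = (
        f"Write {num_questions} multiple-choice questions (four options each, "
        "and mark the correct answer) that test understanding of this text:\n\n"
        + source_text
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(make_quiz("Entrepreneurship is the process of designing, launching, "
                "and running a new business..."))
```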

 


3 Questions for Coursera About Generative AI in Education — from insidehighered.com by Joshua Kim
How this tech will change the learning experience, course creation and more.

Excerpt (emphasis DSC):

Q: How will generative AI impact teaching and learning in the near and long term?

Baker Stein: One-on-one tutoring at scale is finally being unlocked for learners around the world. This type of quality education is no longer only available to students with the means to hire a private tutor. I’m also particularly excited to see how educators make use of generative AI tools to create courses much faster and likely at a higher quality with increased personalization for each student or even by experimenting with new technologies like extended reality. Professors will be able to put their time toward high-impact activities like mentoring, researching and office hours instead of tedious course-creation tasks. This helps open up the capacity for educators to iterate on their courses faster to keep pace with industry and global changes that may impact their field of study.

Another important use case is how generative AI can serve as a great equalizer for students when it comes to writing, especially second language learners.

 
 

 

From DSC:
As Rob Toews points out in his recent article out at Forbes.com, we had better hope that the Taiwan Semiconductor Manufacturing Company (TSMC) builds out the capacity to make chips in various countries. Why? Because:

The following statement is utterly ludicrous. It is also true. The world’s most important advanced technology is nearly all produced in a single facility.

What’s more, that facility is located in one of the most geopolitically fraught areas on earth—an area in which many analysts believe that war is inevitable within the decade.

The future of artificial intelligence hangs in the balance.

The Taiwan Semiconductor Manufacturing Company (TSMC) makes ***all of the world’s advanced AI chips.*** Most importantly, this means Nvidia’s GPUs; it also includes the AI chips from Google, AMD, Amazon, Microsoft, Cerebras, SambaNova, Untether and every other credible competitor.

— from The Geopolitics Of AI Chips Will Define The Future Of AI
out at Forbes.com by Rob Toews

Little surprise, then, that Time Magazine described TSMC
as “the world’s most important company that you’ve
probably never heard of.”

 


From DSC:
If that facility were actually the only one and something happened to it, look at how many things would be impacted as of early May 2023!


 

Examples of generative AI models

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]





 

Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides

Excerpt:

Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.

One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.
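From DSC: to see how low the barrier has become, here is a minimal sketch of running a small open model locally with the Hugging Face transformers library. The checkpoint name is illustrative; any small instruction-tuned model you have downloaded would do, and a well-equipped laptop CPU is enough at this scale.

```python
# A minimal sketch of local text generation with a "small" open LLM.
# The checkpoint name is illustrative; substitute whichever small model
# you have downloaded. Runs on CPU, just slowly for larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="databricks/dolly-v2-3b")

result = generator(
    "Explain in one paragraph why small language models matter:",
    max_new_tokens=120,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```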

 

Work Shift: How AI Might Upend Pay — from bloomberg.com by Jo Constantz

Excerpt:

This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.

“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”

Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.


US Supreme Court rejects computer scientist’s lawsuit over AI-generated inventions — from reuters.com by Blake Brittain

Excerpt:

WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.

The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.


Deep learning pioneer Geoffrey Hinton has quit Google — from technologyreview.com by Will Douglas Heaven
Hinton will be speaking at EmTech Digital on Wednesday.

Excerpt:

Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.

According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.

***


What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.

Excerpt:

Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.

It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
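At its core, much of this is retrieval plus suggestion. Here is an illustrative sketch (not any vendor’s actual product) of the basic idea: find the knowledge-base answer most similar to what the customer just said and surface it to the human agent. It uses scikit-learn’s TF-IDF similarity, and the tiny knowledge base is made up for the example.

```python
# Illustrative agent-assist core: suggest the closest knowledge-base answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A made-up knowledge base of known questions and approved answers.
knowledge_base = {
    "How do I reset my router?": "Unplug the router for 30 seconds, then plug it back in.",
    "How do I change my plan?": "Go to Account > Plans and choose the new plan; it starts next cycle.",
    "Why is my bill higher this month?": "Check Account > Billing for one-time or prorated charges.",
}

kb_questions = list(knowledge_base)
vectorizer = TfidfVectorizer()
kb_matrix = vectorizer.fit_transform(kb_questions)

def suggest_reply(customer_utterance: str) -> str:
    """Return the approved answer whose question best matches the utterance."""
    query_vec = vectorizer.transform([customer_utterance])
    scores = cosine_similarity(query_vec, kb_matrix)[0]
    return knowledge_base[kb_questions[scores.argmax()]]

print(suggest_reply("My internet keeps dropping -- should I reset the router?"))
```

A production system would use semantic embeddings and live speech-to-text rather than TF-IDF on typed text, but the suggest-while-the-agent-talks loop has the same shape.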

From DSC:
Is this type of thing going to provide a learning assistant/agent as well?


A chatbot that asks questions could help you spot when it makes no sense — from technologyreview.com by Melissa Heikkilä
Engaging our critical thinking is one way to stop getting fooled by lying AI.

Excerpt:

AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.

One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.


Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images — from stability.ai



New AI-Powered Denoise in Photoshop — from jeadigitalmedia.org

In the most recent update, Adobe now uses AI to Denoise, Enhance, and create Super Resolution (2x the file size of the original photo). Click here to read Adobe’s post; below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.


 

 

In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.




 




AutoGPT is the next big thing in AI — from therundown.ai by Rowan Cheung

Excerpt:

AutoGPT has been making waves on the internet recently, trending on both GitHub and Twitter. If you thought ChatGPT was crazy, AutoGPT is about to blow your mind.

AutoGPT creates AI “agents” that operate automatically on their own and complete tasks for you. In case you’ve missed our previous issues covering it, here’s a quick rundown:

    • It’s open-sourced [code]
    • It works by chaining together LLM “thoughts” [see the sketch after this list]
    • It has internet access, long-term and short-term memory, access to popular websites, and file storage
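From DSC: here is a deliberately oversimplified sketch of that “chain of thoughts” loop, meant to show the shape of the idea rather than AutoGPT’s actual code. The LLM call is stubbed out with canned replies so the example runs without an API key, and the two “tools” are stand-ins for real internet access and file storage.

```python
# A toy agent loop in the spirit of AutoGPT (not its real implementation).
import itertools

# Stub LLM: returns scripted "thoughts" so the sketch runs with no API key.
_canned_thoughts = itertools.chain(
    ["search: recent AI agent frameworks",
     "write_file: notes.txt|Agents work by chaining LLM calls and tool use.",
     "done"],
    itertools.repeat("done"),
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion API call."""
    return next(_canned_thoughts)

def web_search(query: str) -> str:               # stand-in for internet access
    return f"(search results for: {query})"

def write_file(spec: str) -> str:                # stand-in for file storage
    name, _, text = spec.partition("|")
    with open(name, "w") as f:
        f.write(text)
    return f"wrote {name}"

TOOLS = {"search": web_search, "write_file": write_file}

def run_agent(goal: str, max_steps: int = 5) -> list:
    memory = []                                  # short-term memory of past steps
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\nPrevious steps: {memory}\n"
                  "Reply with 'search: <query>', 'write_file: <name>|<text>', or 'done'.")
        thought = call_llm(prompt)               # one "thought" in the chain
        if thought.strip() == "done":
            break
        tool_name, _, argument = thought.partition(": ")
        result = TOOLS[tool_name](argument)      # act, then feed the result back
        memory.append((thought, result))
    return memory

print(run_agent("Take notes on how AI agents work"))
```

A real agent adds long-term memory (typically a vector store), real tool APIs, and error handling, but the loop of think, act, observe, and repeat is the core.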




From DSC:
I want to highlight that paper from Stanford, as I’ve seen it cited several times recently:

Generative Agents: Interactive Simulacra of Human Behavior -- a paper from Stanford, April 2023


From DSC:
And for a rather fun idea/application of these emerging technologies, see:

  • Quick Prompt: Kitchen Design — from linusekenstam.substack.com by Linus Ekenstam
    Midjourney Prompt. Create elegant kitchen photos using this starting prompt. Make it your own, experiment, add, remove and tinker to create new ideas.

…which made me wonder how we might use these techs in the development of new learning spaces (or in renovating current learning spaces).


From DSC:
On a much different — but still potential — note, also see:

A.I. could lead to a ‘nuclear-level catastrophe’ according to a third of researchers, a new Stanford report finds — from fortune.com by Tristan Bove

Excerpt:

Many experts in A.I. and computer science say the technology is likely a watershed moment for human society. But 36% don’t mean that as a positive, warning that decisions made by A.I. could lead to “nuclear-level catastrophe,” according to researchers surveyed in an annual report on the technology by Stanford University’s Institute for Human-Centered A.I., published earlier this month.


 
 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to be no verification of who signs it. Yann LeCun from Meta denied signing the letter and said he completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to AI Teachers and AI Learners as a tetrad.


 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
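From DSC: the “auto-complete on steroids” framing is easy to see in code. Below is a toy sketch of greedy next-token generation; it uses the small, freely available GPT-2 model from Hugging Face purely as a stand-in for much larger chat models, which do the same thing at vastly greater scale (plus instruction tuning).

```python
# Toy next-token loop: generate text by repeatedly taking the most
# plausible next token. GPT-2 is used only because it is small and public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The chatbot answered the question by", return_tensors="pt").input_ids

for _ in range(20):                              # add 20 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits               # a score for every possible next token
    next_id = logits[0, -1].argmax()             # greedy: pick the most plausible one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Nothing in that loop checks whether the continuation is true; it only checks whether it is statistically plausible, which is exactly why confident-sounding falsehoods slip through.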


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 