Understanding behavior as communication: A teacher’s guide — from understood.org by Amanda Morin
Figuring out the function of, or the reasons behind, a behavior is critical for finding an appropriate response or support. Knowing the function can also help you find ways to prevent behavior issues in the future.

Think of the last time a student called out in class, pushed in line, or withdrew by putting their head down on their desk. What was their behavior telling you?

In most cases, behavior is a sign they may not have the skills to tell you what they need. Sometimes, students may not even know what they need. What are your students trying to communicate? What do they need, and how can you help?

One way to reframe your thinking is to respond to the student, not the behavior. Start by considering the life experiences that students bring to the classroom.

Some students who learn and think differently have negative past experiences with teachers and school. Others may come from cultures in which speaking up for their needs in front of the whole class isn’t appropriate.


Also relevant/see:

Exclusive: Watchdog finds Black girls face more frequent, severe discipline in school — from npr.org by Claudia Grisales

Black girls face more discipline and more severe punishments in public schools than girls from other racial backgrounds, according to a groundbreaking new report set for release Thursday by a congressional watchdog.

The report, shared exclusively with NPR, took nearly a year-and-a-half to complete and comes after several Democratic congressional members requested the study.

 

This article….

Artificial Intelligence and Schools: When Tech Makers and Educators Collaborate, AI Doesn’t Have to be Scary — from the74million.org by Edward Montalvo
AI is already showing us how to make education more individualized and equitable.

The XQ Institute shares this mindset as part of our mission to reimagine the high school learning experience so it’s more relevant and engaging for today’s learners, while better preparing them for the future. We see AI as a tool with transformative potential for educators and makers to leverage — but only if it’s developed and implemented with ethics, transparency and equity at the forefront. That’s why we’re building partnerships between educators and AI developers to ensure that products are shaped by the real needs and challenges of students, teachers and schools. Here’s how we believe all stakeholders can embrace the Department’s recommendations through ongoing collaborations with tech leaders, educators and students alike.

…led me to the XQ Institute, and I very much like what I’m initially seeing! Here are some excerpts from their website:

 


 

FlexOS’ Stay Ahead Edition #43 — from flexos.work

People started discussing what they could do with NotebookLM after Google launched the audio overview, where you can listen to 2 hosts talking in-depth about the documents you upload. Here’s what it can do:

  • Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
  • Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
  • Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
  • Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
  • …plus several other items

The posting also lists several ideas to try with NotebookLM such as:

Idea 2: Study Companion

  • Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
  • Get a breakdown of the course materials to understand them better.
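NotebookLM itself is used through its web interface, but the “Study Companion” idea pairs naturally with the “Source Grounding” item in the feature list above, and the underlying prompt pattern is easy to sketch. Below is a toy Python illustration (a hypothetical sketch, not NotebookLM’s implementation): pack the uploaded course materials into a prompt that restricts the assistant to those materials and asks for Q&A pairs, a glossary, and a study guide.

```python
# Toy illustration of "source grounding" plus the study-companion idea:
# the request is answered only from the materials the user supplied.
# This is NOT how NotebookLM is built; it only shows the prompt pattern.

def grounded_request(sources: dict[str, str], task: str) -> str:
    """Build a prompt that restricts the assistant to the supplied course materials."""
    packed = "\n\n".join(f"[{name}]\n{text}" for name, text in sources.items())
    return (
        "Use ONLY the course materials below. Cite the material name for each item, "
        "and say so if something is not covered.\n\n"
        f"MATERIALS:\n{packed}\n\nTASK: {task}"
    )


if __name__ == "__main__":
    course_materials = {
        "week1_notes.txt": "Photosynthesis converts light energy into chemical energy stored in glucose.",
        "week2_notes.txt": "Cellular respiration breaks down glucose to release energy as ATP.",
    }
    task = (
        "Turn these materials into three question-and-answer pairs, "
        "a short glossary, and a one-paragraph study guide."
    )
    # Paste the resulting prompt into any chat assistant, or send it through an LLM API.
    print(grounded_request(course_materials, task))
```

Production tools layer retrieval on top of this (selecting only the passages relevant to the request when the sources are long), but the grounding contract is the same: the model is only allowed to use the documents the user chose.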

Google’s NotebookLM: A Game-Changer for Education and Beyond — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
AI Tools: Breaking down Google’s latest AI tool and its implications for education.

“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”

With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” Generative AI is also changing from a chatbot of 2 years ago, to a more multi-modal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.


Meet Google NotebookLM: 10 things to know for educators — from ditchthattextbook.com by Matt Miller

1. Upload a variety of sources for NotebookLM to use. 
You can use …

  • websites
  • PDF files
  • links to websites
  • any text you’ve copied
  • Google Docs and Slides
  • even Markdown

You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript). A small scripted version of this tip is sketched just after this list.

2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything. 
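Tip 1 notes that YouTube videos can’t be linked directly, so transcripts have to be pasted in by hand. If that gets tedious, the copy/paste step can be scripted. Here is a hypothetical helper, assuming the third-party youtube-transcript-api package and its long-standing get_transcript call (newer releases of the package expose a slightly different interface); the video ID and context line are placeholders.

```python
# Hypothetical helper for the tip above: fetch a YouTube transcript and add a
# line of context before pasting the result into NotebookLM as a text source.
# Assumes the third-party youtube-transcript-api package (pip install
# youtube-transcript-api) and its classic get_transcript interface.
from youtube_transcript_api import YouTubeTranscriptApi


def transcript_with_context(video_id: str, context: str) -> str:
    """Return a context line followed by the video's plain-text transcript."""
    segments = YouTubeTranscriptApi.get_transcript(video_id)  # list of {"text", "start", "duration"}
    text = " ".join(segment["text"] for segment in segments)
    return f"Context: {context}\n\nTranscript:\n{text}"


if __name__ == "__main__":
    print(transcript_with_context(
        "VIDEO_ID",  # replace with a real YouTube video ID
        "A conference talk about using NotebookLM as a study companion.",
    ))
```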


NotebookLM summarizes my dissertation — from darcynorman.net by D’Arcy Norman, PhD

I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.

The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.


4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter
As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.

As instructors work through revising assessments to be resistant to generation by AI tools with little student input, they should consider the following principles:

  • Incorporate personal experiences and local content into assignments
  • Ask students for multi-modal deliverables
  • Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
  • Consider real-time and oral assignments

Google CEO Sundar Pichai announces $120M fund for global AI education — from techcrunch.com by Anthony Ha

He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.


Educators discuss the state of creativity in an AI world — from gettingsmart.com by Joe & Kristin Merrill, LaKeshia Brooks, Dominique’ Harbour, Erika Sandstrom

Key Points

  • AI allows for a more personalized learning experience, enabling students to explore creative ideas without traditional classroom limitations.
  • The focus of technology integration should be on how the tool is used within lessons, not just the tool itself

Addendum on 9/27/24:

Google’s NotebookLM enhances AI note-taking with YouTube, audio file sources, sharable audio discussions — from techcrunch.com by Jagmeet Singh

Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions

NotebookLM adds audio and YouTube support, plus easier sharing of Audio Overviews — from blog.google

 

10 Ways I Use LLMs like ChatGPT as a Professor — from automatedteach.com by Graham Clay
ChatGPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, custom GPTs – you name it, I use it. Here’s how…

Excerpt:

  1. To plan lessons (especially activities)
  2. To create course content (especially quizzes)
  3. To tutor my students
  4. To grade faster and give better feedback
  5. To draft grant applications
  6. Plus 5 other items

From Caution to Calcification to Creativity: Reanimating Education with AI’s Frankenstein Potential — from nickpotkalitsky.substack.com by Nick Potkalitsky
A Critical Analysis of AI-Assisted Lesson Planning: Evaluating Efficacy and Pedagogical Implications

Excerpt (emphasis DSC):

As we navigate the rapidly evolving landscape of artificial intelligence in education, a troubling trend has emerged. What began as cautious skepticism has calcified into rigid opposition. The discourse surrounding AI in classrooms has shifted from empirical critique to categorical rejection, creating a chasm between the potential of AI and its practical implementation in education.

This hardening of attitudes comes at a significant cost. While educators and policymakers debate, students find themselves caught in the crossfire. They lack safe, guided access to AI tools that are increasingly ubiquitous in the world beyond school walls. In the absence of formal instruction, many are teaching themselves to use these tools, often in less than productive ways. Others live in a state of constant anxiety, fearing accusations of AI reliance in their work. These are just a few symptoms of an overarching educational culture that has become resistant to change, even as the world around it transforms at an unprecedented pace.

Yet, as this calcification sets in, I find myself in a curious position: the more I thoughtfully integrate AI into my teaching practice, the more I witness its potential to enhance and transform education


NotebookLM and Google’s Multimodal Vision for AI-Powered Learning Tools — from marcwatkins.substack.com by Marc Watkins

A Variety of Use Cases

  • Create an Interactive Syllabus
  • Presentation Deep Dive: Upload Your Slides
  • Note Taking: Turn Your Chalkboard into a Digital Canvas
  • Explore a Reading or Series of Readings
  • Help Navigating Feedback
  • Portfolio Building Blocks

Must-Have Competencies and Skills in Our New AI World: A Synthesis for Educational Reform — from er.educause.edu by Fawzi BenMessaoud
The transformative impact of artificial intelligence on educational systems calls for a comprehensive reform to prepare future generations for an AI-integrated world.

The urgency to integrate AI competencies into education is about preparing students not just to adapt to inevitable changes but to lead the charge in shaping an AI-augmented world. It’s about equipping them to ask the right questions, innovate responsibly, and navigate the ethical quandaries that come with such power.

AI in education should augment and complement their aptitude and expertise, to personalize and optimize the learning experience, and to support lifelong learning and development. AI in education should be a national priority and a collaborative effort among all stakeholders, to ensure that AI is designed and deployed in an ethical, equitable, and inclusive way that respects the diversity and dignity of all learners and educators and that promotes the common good and social justice. AI in education should be about the production of AI, not just the consumption of AI, meaning that learners and educators should have the opportunity to learn about AI, to participate in its creation and evaluation, and to shape its impact and direction.

 

Top Software Engineering Newsletters in 2024 — from ai-supremacy.com by Michael Spencer
Including a very select few ML, AI and product Newsletters into the mix for Software Engineers.

This is an article specifically for the software engineers and developers among you.

In the past year (2023-2024) professionals are finding more value in Newsletters than ever before (especially on Substack).

As working from home took off, the nature of mentorship and skill acquisition has also evolved and shifted. Newsletters with pragmatic advice on our careers, it turns out, are super valuable. This article is a resource list. Are you a software developer, do you work with one, or do you know someone who is or wants to be?

 



“Who to follow in AI” in 2024? — from ai-supremacy.com by Michael Spencer
Part III – #35-55 – I combed the internet, I found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

This list features both some of the best Newsletters on AI and people who make LinkedIn posts about AI papers, advances and breakthroughs. In today’s article we’ll be meeting the first 19-34, in a list of 180+.

Newsletter Writers
YouTubers
Engineers
Researchers who write
Technologists who are Creators
AI Educators
AI Evangelists of various kinds
Futurism writers and authors

I have been sharing the list in reverse chronological order on LinkedIn here.


Inside Google’s 7-Year Mission to Give AI a Robot Body — from wired.com by Hans Peter Brondmo
As the head of Alphabet’s AI-powered robotics moonshot, I came to believe many things. For one, robots can’t come soon enough. For another, they shouldn’t look like us.


Learning to Reason with LLMs — from openai.com
We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.


Items re: Microsoft Copilot:

Also see this next video re: Copilot Pages:


Sal Khan on the critical human skills for an AI age — from time.com by Kevin J. Delaney

As a preview of the upcoming Summit interview, here are Khan’s views on two critical questions, edited for space and clarity:

  1. What are the enduring human work skills in a world with ever-advancing AI? Some people say students should study liberal arts. Others say deep domain expertise is the key to remaining professionally relevant. Others say you need to have the skills of a manager to be able to delegate to AI. What do you think are the skills or competencies that ensure continued relevance professionally, employability, etc.?
  2. A lot of organizations are thinking about skills-based approaches to their talent. It involves questions like, ‘Does someone know how to do this thing or not?’ And what are the ways in which they can learn it and have some accredited way to know they actually have done it? That is one of the ways in which people use Khan Academy. Do you have a view of skills-based approaches within workplaces, and any thoughts on how AI tutors and training fit within that context?

 



Introducing OpenAI o1 – from openai.com

We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.




Something New: On OpenAI’s “Strawberry” and Reasoning — from oneusefulthing.org by Ethan Mollick
Solving hard problems in new ways

The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.

To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.


What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack

The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.

Recently, many creators (myself included) have been exploring super realistic AI more and more.

But where can this actually be used?

Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.

Heather goes on to mention applications in:

  • Creative Industries
  • Entertainment and Media
  • Education and Training

NotebookLM now lets you listen to a conversation about your sources — from blog.google by Biao Wang
Our new Audio Overview feature can turn documents, slides, charts and more into engaging discussions with one click.

Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.


Bringing generative AI to video with Adobe Firefly Video Model — from blog.adobe.com by Ashley Still

Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.

We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

 

The Most Popular AI Tools for Instructional Design (September, 2024) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The tools we use most, and how we use them

This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.

My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?

Here’s where we are in September, 2024:


Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby,  Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)

As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.



The Impact of AI in Advancing Accessibility for Learners with Disabilities — from er.educause.edu by Rob Gibson

AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.


 

From DSC:
Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).


From impossible to POSSIBLE: Tata Consultancy Services uses Adobe Firefly generative AI and Acrobat AI Assistant to turn hours of work into minutes — from blog.adobe.com

This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.

For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.

The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.

It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.

By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.
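The workflow above is point-and-click (Premiere Pro’s Speech to Text plus Acrobat AI Assistant), but the underlying transcribe-then-summarize pattern can be sketched programmatically. The snippet below is a rough, hypothetical analogue using OpenAI’s Whisper transcription and a chat model as stand-ins; it is not the Adobe toolchain, and the model names and file path are assumptions.

```python
# Hypothetical analogue of the transcribe-then-summarize workflow described
# above, using the OpenAI Python client (openai>=1.0) instead of the Adobe
# tools. The model names and audio path are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_session(audio_path: str) -> str:
    """Transcribe a recorded session, then ask a chat model for a newsletter-ready summary."""
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You summarize conference sessions for a daily attendee newsletter."},
            {"role": "user", "content": f"Summarize this session in five bullet points:\n\n{transcript.text}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_session("session_recording.mp3"))
```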

 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT's AI Risk Repository -- a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
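As a purely hypothetical illustration, one entry in a repository structured this way might be modeled like the sketch below; the field names are assumptions based on the description above, not MIT’s actual schema.

```python
# Hypothetical sketch of how a single entry in a repository structured like
# this might be modeled. Field names are illustrative assumptions, not MIT's schema.
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    description: str                 # the extracted risk
    source_framework: str            # which of the 43 frameworks it came from
    quote: str                       # supporting quote
    page_number: int                 # where the quote appears
    causal_classification: dict = field(default_factory=dict)  # how, when, and why the risk occurs
    domain: str = ""                 # one of 7 domains, e.g., "Misinformation"
    subdomain: str = ""              # one of 23 subdomains, e.g., "False or misleading information"


example = RiskEntry(
    description="AI systems generate convincing but false information at scale.",
    source_framework="Example Framework (hypothetical)",
    quote="(supporting quote from the framework)",
    page_number=12,
    causal_classification={"how": "unintentional", "when": "post-deployment", "why": "model limitations"},
    domain="Misinformation",
    subdomain="False or misleading information",
)
```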

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out [starting on 8/28/24] that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
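To make the distinction between the first and third types concrete, here is a minimal, hypothetical Python sketch (not drawn from AWS or ZDNET): a simple reflex agent maps a condition directly to an action, while a goal-/rule-based agent compares options and selects the most efficient pathway to a goal.

```python
# Minimal, hypothetical sketch contrasting two of the agent types listed above.
# Not drawn from AWS or ZDNET; purely an illustration of the distinction.

def simple_reflex_agent(event: str) -> str:
    """Condition-action rules only: no model of the world, no explicit goal."""
    rules = {
        "password_reset_request": "send_reset_link",
        "account_locked": "unlock_after_identity_check",
    }
    return rules.get(event, "escalate_to_human")


def goal_based_agent(options: list[dict], goal: str) -> dict:
    """Compare candidate actions and select the most efficient pathway to the goal."""
    viable = [option for option in options if goal in option["achieves"]]
    if not viable:
        return {"action": "report_goal_unreachable"}
    return min(viable, key=lambda option: option["cost"])


if __name__ == "__main__":
    print(simple_reflex_agent("password_reset_request"))  # -> send_reset_link
    print(goal_based_agent(
        [
            {"action": "reroute_ticket", "cost": 2, "achieves": ["resolve_issue"]},
            {"action": "issue_refund", "cost": 5, "achieves": ["resolve_issue", "retain_customer"]},
        ],
        goal="resolve_issue",
    ))  # -> picks reroute_ticket, the cheaper viable option
```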

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

What Students Want: Key Results from DEC Global AI Student Survey 2024 — from digitaleducationcouncil.com by Digital Education Council

  • 86% of students globally are regularly using AI in their studies, with 54% of them using AI on a weekly basis, the recent Digital Education Council Global AI Student Survey found.
  • ChatGPT was found to be the most widely used AI tool, with 66% of students using it, and over 2 in 3 students reported using AI for information searching.
  • Despite their high rates of AI usage, 1 in 2 students do not feel AI ready. 58% reported that they do not feel that they had sufficient AI knowledge and skills, and 48% do not feel adequately prepared for an AI-enabled workplace.

Chatting with WEF about ChatGPT in the classroom — from futureofbeinghuman.com by Andrew Maynard
A short video on generative AI in education from the World Economic Forum


The Post-AI Instructional Designer — from drphilippahardman.substack.com by Dr. Philippa Hardman
How the ID role is changing, and what this means for your key skills, roles & responsibilities

Specifically, the study revealed that teachers who reported most productivity gains were those who used AI not just for creating outputs (like quizzes or worksheets) but also for seeking input on their ideas, decisions and strategies.

Those who engaged with AI as a thought partner throughout their workflow, using it to generate ideas, define problems, refine approaches, develop strategies and gain confidence in their decisions gained significantly more from their collaboration with AI than those who only delegated functional tasks to AI.  


Leveraging Generative AI for Inclusive Excellence in Higher Education — from er.educause.edu by Lorna Gonzalez, Kristi O’Neil-Gonzalez, Megan Eberhardt-Alstot, Michael McGarry and Georgia Van Tyne
Drawing from three lenses of inclusion, this article considers how to leverage generative AI as part of a constellation of mission-centered inclusive practices in higher education.

The hype and hesitation about generative artificial intelligence (AI) diffusion have led some colleges and universities to take a wait-and-see approach.1 However, AI integration does not need to be an either/or proposition where its use is either embraced or restricted or its adoption aimed at replacing or outright rejecting existing institutional functions and practices. Educators, educational leaders, and others considering academic applications for emerging technologies should consider ways in which generative AI can complement or augment mission-focused practices, such as those aimed at accessibility, diversity, equity, and inclusion. Drawing from three lenses of inclusion—accessibility, identity, and epistemology—this article offers practical suggestions and considerations that educators can deploy now. It also presents an imperative for higher education leaders to partner toward an infrastructure that enables inclusive practices in light of AI diffusion.

An example of how to leverage AI:

How to Leverage AI for Identity Inclusion
Educators can use the following strategies to intentionally design instructional content with identity inclusion in mind.

  • Provide a GPT or AI assistant with upcoming lesson content (e.g., lecture materials or assignment instructions) and ask it to provide feedback (e.g., troublesome vocabulary, difficult concepts, or complementary activities) from certain perspectives. Begin with a single perspective (e.g., first-time, first-year student), but layer in more to build complexity as you interact with the GPT output.
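Here is a hypothetical, minimal version of that strategy using the OpenAI Python client as the “GPT or AI assistant.” The model name and example perspectives are placeholder assumptions; the same prompt works in any chat interface.

```python
# Hypothetical sketch of the strategy above, using the OpenAI Python client
# (openai>=1.0) as the "GPT or AI assistant." The model name and the example
# perspectives are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def perspective_feedback(lesson_content: str, perspective: str) -> str:
    """Ask for troublesome vocabulary, difficult concepts, and complementary activities."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    f"You review lesson materials from the perspective of {perspective}. "
                    "List troublesome vocabulary, difficult concepts, and one complementary activity."
                ),
            },
            {"role": "user", "content": lesson_content},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    lesson = "Assignment: write a 500-word reflection on the water cycle, citing two sources."
    # Begin with a single perspective, then layer in more to build complexity.
    for perspective in ["a first-time, first-year student", "a student using a screen reader"]:
        print(f"--- Feedback from {perspective} ---")
        print(perspective_feedback(lesson, perspective))
```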

Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly
As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.

To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.

Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.


7 Ways to Use AI Music in Your Classroom — from classtechtips.com by Monica Burns


Change blindness — from oneusefulthing.org by Ethan Mollick
21 months later

I don’t think anyone is completely certain about where AI is going, but we do know that things have changed very quickly, as the examples in this post have hopefully demonstrated. If this rate of change continues, the world will look very different in another 21 months. The only way to know is to live through it.


My AI Breakthrough — from mgblog.org by Miguel Guhlin

Over the subsequent weeks, I’ve made other adjustments, but that first one came from asking myself:

  1. What are you doing?
  2. Why are you doing it that way?
  3. How could you change that workflow with AI?
  4. Applying the AI to the workflow, then asking, “Is this what I was aiming for? How can I improve the prompt to get closer?”
  5. Documenting what worked (or didn’t). Re-doing the work with AI to see what happened, and asking again, “Did this work?”

So, something that took me WEEKS of hard work, and in some cases I found impossible, was made easy. Like, instead of weeks, it takes 10 minutes. The hard part? Building the prompt to do what I want, fine-tuning it to get the result. But that doesn’t take as long now.

 

One thing often happens at keynotes and conferences. It surprised me…. — from donaldclarkplanb.blogspot.com by Donald Clark

AI is welcomed by those with dyslexia, and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most on accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.

    • Text-to-Speech & Speech-to-Text Tools…
    • Grammar and Spelling Assistants…
    • Comprehension Tools…
    • Visual and Multisensory Tools…
    • …and more

Let’s Make a Movie Teaser With AI — from whytryai.com by Daniel Nest
How to use free generative AI tools to make a teaser trailer.

Here are the steps and the free tools we can use for each.

  1. Brainstorm ideas & flesh out the concept.
    1. Claude 3.5 Sonnet
    2. Google Gemini 1.5 Pro
    3. …or any other free LLM
  2. Create starting frames for each scene.
    1. FLUX.1 Pro
    2. Ideogram
    3. …or any other free text-to-image model
  3. Bring the images to life.
    1. Kling AI
    2. Luma Dream Machine
    3. Runway Gen-2
  4. Generate the soundtrack.
    1. Udio
    2. Suno
  5. Add sound effects.
    1. ElevenLabs Sound Effects
    2. ElevenLabs VideoToSoundEffects
    3. Meta Audiobox
  6. Put everything together.
    1. Microsoft Clipchamp
    2. DaVinci Resolve
    3. …or any other free video editing tool.

Here we go.


Is AI in Schools Promising or Overhyped? Potentially Both, New Reports Suggest — from the74million.org by Greg Toppo; via Claire Zau
One urges educators to prep for an artificial intelligence boom. The other warns that it could all go awry. Together, they offer a reality check.

Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?

Or is it, perhaps, both?

Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.


Bite-Size AI Content for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton
Another two 5-tips videos for faculty and my latest use case: creating FAQs!

I had an opportunity recently to do more of my 15-minute lightning talks. You can see my lightning talks from late winter in this post, or can see all of them on my YouTube channel. These two talks were focused on faculty in particular.


Also from Lance, see:


AI in Education: Leading a Paradigm Shift — from gettingsmart.com by Dr. Tyler Thigpen

Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batched-processing learning model based on the make-believe idea of “the average man” to a learning model that meets every child where they are at and helps them grow from there?

I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?


California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine
The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.

California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.


Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.

Per The Neuron:

Some cool use cases:

  • Judy Garland can teach you biology while walking to class.
  • James Dean can narrate your steamy romance novel.
  • Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!

Why it’s important: ElevenLabs shared how major YouTubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.


How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg
With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.

As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.

These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”

Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.


Will AI Shrink Disparities in Schools, or Widen Them? — edsurge.com by Daniel Mollenkamp
Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.

 

Augmented Course Design: Using AI to Boost Efficiency and Expand Capacity — from er.educause.edu by Berlin Fang and Kim Broussard
The emerging class of generative AI tools has the potential to significantly alter the landscape of course development.

Using generative artificial intelligence (GenAI) tools such as ChatGPT, Gemini, or CoPilot as intelligent assistants in instructional design can significantly enhance the scalability of course development. GenAI can significantly improve the efficiency with which institutions develop content that is closely aligned with the curriculum and course objectives. As a result, institutions can more effectively meet the rising demand for flexible and high-quality education, preparing a new generation of future professionals equipped with the knowledge and skills to excel in their chosen fields.1 In this article, we illustrate the uses of AI in instructional design in terms of content creation, media development, and faculty support. We also provide some suggestions on the effective and ethical uses of AI in course design and development. Our perspectives are rooted in medical education, but the principles can be applied to any learning context.

Table 1 summarizes a few low-hanging fruits in AI usage in course development.

Table 1. Types of Use of GenAI in Course Development
(Each practical use of AI below is listed with example use scenarios.)
Inspiration
  • Exploring ideas for instructional strategies
  • Exploring ideas for assessment
  • Course mapping
  • Lesson or unit content planning
Supplementation
  • Text to audio
  • Transcription for audio
  • Alt text auto-generation
  • Design optimization (e.g., using Microsoft PPT Design)
Improvement
  • Improving learning objectives
  • Improving instructional materials
  • Improving course content writing (grammar, spelling, etc.)
Generation
  • Creating a PowerPoint draft using learning objectives
  • Creating peripheral content materials (introductions, conclusions)
  • Creating decorative images for content
Expansion
  • Creating a scenario based on learning objectives
  • Creating a draft of a case study
  • Creating a draft of a rubric



Also see:

10 Ways Artificial Intelligence Is Transforming Instructional Design — from er.educause.edu by Rob Gibson
Artificial intelligence (AI) is providing instructors and course designers with an incredible array of new tools and techniques to improve the course design and development process. However, the intersection of AI and content creation is not new.

I have been telling my graduate instructional design students that AI technology is not likely to replace them any time soon because learning and instruction are still highly personalized and humanistic experiences. However, as these students embark on their careers, they will need to understand how to appropriately identify, select, and utilize AI when developing course content. Examples abound of how instructional designers are experimenting with AI to generate and align student learning outcomes with highly individualized course activities and assessments. Instructional designers are also using AI technology to create and continuously adapt the custom code and power scripts embedded into the learning management system to execute specific learning activities.1 Other useful examples include scripting and editing videos and podcasts.

Here are a few interesting examples of how AI is shaping and influencing instructional design. Some of the tools and resources can be used to satisfy a variety of course design activities, while others are very specific.


Taking the Lead: Why Instructional Designers Should Be at the Forefront of Learning in the Age of AI — from medium.com by Rob Gibson
Education is at a critical juncture and needs to draw leaders from a broader pool, including instructional designers

The world of a medieval stone cutter and a modern instructional designer (ID) may seem separated by a great distance, but I wager any ID who, upon hearing the story I just shared, would experience an uneasy sense of déjà vu. Take away the outward details, and the ID would recognize many elements of the situation: the days spent in projects that fail to realize the full potential of their craft, the painful awareness that greater things can be built, but are unlikely to occur due to a poverty of imagination and lack of vision among those empowered to make decisions.

Finally, there is the issue of resources. No stone cutter could ever hope to undertake a large-scale enterprise without a multitude of skilled collaborators and abundant materials. Similarly, instructional designers are often departments of one, working in scarcity environments, with limited ability to acquire resources for ambitious projects and — just as importantly — lacking the authority or political capital needed to launch significant initiatives. For these reasons, instructional design has long been a profession caught in an uncomfortable stasis, unable to grow, evolve and achieve its full potential.

That is until generative AI appeared on the scene. While the discourse around AI in education has been almost entirely about its impact on teaching and assessment, there has been a dearth of critical analysis regarding AI’s potential for impacting instructional design.

We are at a critical juncture for AI-augmented learning. We can either stagnate, missing opportunities to support learners while educators continue to debate whether the use of generative AI tools is a good thing, or we can move forward, building a transformative model for learning akin to the industrial revolution’s impact.

Too many professional educators remain bound by traditional methods. The past two years suggest that leaders of this new learning paradigm will not emerge from conventional educational circles. This vacuum of leadership can be filled, in part, by instructional designers, who are prepared by training and experience to begin building in this new learning space.

 

Gemini makes your mobile device a powerful AI assistant — from blog.google
Gemini Live is available today to Advanced subscribers, along with conversational overlay on Android and even more connected apps.

Rolling out today: Gemini Live <– Google swoops in before OpenAI can get their Voice Mode out there
Gemini Live is a mobile conversational experience that lets you have free-flowing conversations with Gemini. Want to brainstorm potential jobs that are well-suited to your skillset or degree? Go Live with Gemini and ask about them. You can even interrupt mid-response to dive deeper on a particular point, or pause a conversation and come back to it later. It’s like having a sidekick in your pocket who you can chat with about new ideas or practice with for an important conversation.

Gemini Live is also available hands-free: You can keep talking with the Gemini app in the background or when your phone is locked, so you can carry on your conversation on the go, just like you might on a regular phone call. Gemini Live begins rolling out today in English to our Gemini Advanced subscribers on Android phones, and in the coming weeks will expand to iOS and more languages.

To make speaking to Gemini feel even more natural, we’re introducing 10 new voices to choose from, so you can pick the tone and style that works best for you.


Per the Rundown AI:
Why it matters: Real-time voice is slowly shifting AI from a tool we text/prompt with, to an intelligence that we collaborate, learn, consult, and grow with. As the world’s anticipation for OpenAI’s unreleased products grows, Google has swooped in to steal the spotlight as the first to lead widespread advanced AI voice rollouts.

Beyond Social Media: Schmidt Predicts AI’s Earth-Shaking Impact — from wallstreetpit.com
The next wave of AI is coming, and if Schmidt is correct, it will reshape our world in ways we are only beginning to imagine.

In a recent Q&A session at Stanford, Eric Schmidt, former CEO and Chairman of search giant Google, offered a compelling vision of the near future in artificial intelligence. His predictions, both exciting and sobering, paint a picture of a world on the brink of a technological revolution that could dwarf the impact of social media.

Schmidt highlighted three key advancements that he believes will converge to create this transformative wave: very large context windows, agents, and text-to-action capabilities. These developments, according to Schmidt, are not just incremental improvements but game-changers that could reshape our interaction with technology and the world at large.



The rise of multimodal AI agents — from 11onze.cat
Technology companies are investing large amounts of money in creating new multimodal artificial intelligence models and algorithms that can learn, reason and make decisions autonomously after collecting and analysing data.

The future of multimodal agents
In practical terms, a multimodal AI agent can, for example, analyse a text while processing an image, spoken language, or an audio clip to give a more complete and accurate response, both through voice and text. This opens up new possibilities in various fields: from education and healthcare to e-commerce and customer service.


AI Change Management: 41 Tactics to Use (August 2024) — from flexos.work by Daan van Rossum
Future-proof companies are investing in driving AI adoption, but many don’t know where to start. The experts recommend these 41 tips for AI change management.

As Matt Kropp told me in our interview, BCG has a 10-20-70 rule for AI at work:

  • 10% is the LLM or algorithm
  • 20% is the software layer around it (like ChatGPT)
  • 70% is the human factor

This 70% is exactly why change management is key in driving AI adoption.

But where do you start?

As I coach leaders at companies like Apple, Toyota, Amazon, L’Oréal, and Gartner in our Lead with AI program, I know that’s the question on everyone’s minds.

I don’t believe in gatekeeping this information, so here are 41 principles and tactics I share with our community members looking for winning AI change management principles.


 

How Generative AI will change what lawyers do — from jordanfurlong.substack.com by Jordan Furlong
As we enter the Age of Accessible Law, a wave of new demand is coming our way — but AI will meet most of the surge. What will be left for lawyers? Just the most valuable and irreplaceable role in law.

AI can already provide actionable professional advice; within the next ten years, if it takes that long, I believe it will offer acceptable legal advice. No one really wants “AI courts,” but soon enough, we’ll have AI-enabled mediation and arbitration, which will have a much greater impact on everyday dispute resolution.

I think it’s dangerous to assume that AI will never be able to do something that lawyers now do. “Never” is a very long time. And AI doesn’t need to replicate the complete arsenal of the most gifted lawyer out there. If a Legal AI can replicate 80% of what a middling lawyer can do, for 10% of the cost, in 1% of the time, that’s all the revolution you’ll need.

From DSC:
It is my sincere hope that AI will open up the floodgates to FAR greater Access to Justice (A2J) in the future.


It’s the Battle of the AI Legal Assistants, As LexisNexis Unveils Its New Protégé and Thomson Reuters Rolls Out CoCounsel 2.0 — from lawnext.com by Bob Ambrogi

It’s not quite BattleBots, but competitors LexisNexis and Thomson Reuters both made significant announcements today involving the development of generative AI legal assistants within their products.

Thomson Reuters, which last year acquired the CoCounsel legal assistant originally developed by Casetext, and which later announced plans to deploy it throughout its product lines, today unveiled what it says is the “supercharged” CoCounsel 2.0.

Meanwhile, LexisNexis said today it is rolling out the commercial preview version of its Protégé Legal AI Assistant, which it describes as a “substantial leap forward in personalized generative AI that will transform legal work.” It is part of the launch of the third generation of Lexis+ AI, the AI-driven legal research platform the company launched last year.


Thomson Reuters Launches CoCounsel 2.0 — from abovethelaw.com by Joe Patrice
New release promises results three times faster than the last version.

It seems like just last year we were talking about CoCounsel 1.0, the generative AI product launched by Casetext and then swiftly acquired by Thomson Reuters. That’s because it was just last year. Since then, Thomson Reuters has worked to marry Casetext’s tool with TR’s treasure trove of data.

It’s not an easy task. A lot of the legal AI conversation glosses over how constructing these tools requires a radical confrontation with the lawyers’ mind. Why do attorneys do what they do every day? Are there seemingly “inefficient” steps that actually serve a purpose? Does an AI “answer” advance the workflow or hinder the research alchemy? As recently as April, Thomson Reuters was busy hyping the fruits of its efforts to get ahead of these challenges.


Though this next item is not necessarily related to legaltech, it’s still relevant to the legal realm:

A Law Degree Is No Sure Thing — from cew.georgetown.edu
Some Law School Graduates Earn Top Dollar, but Many Do Not

Summary
Is law school worth it? A Juris Doctor (JD) offers high median earnings and a substantial earnings boost relative to a bachelor’s degree in the humanities or social sciences—two of the more common fields of study that lawyers pursue as undergraduate students. However, graduates of most law schools carry substantial student loan debt, which dims the financial returns associated with a JD.

A Law Degree Is No Sure Thing: Some Law School Graduates Earn Top Dollar, but Many Do Not finds that the return on investment (ROI) in earnings and career outcomes varies widely across law schools. The median earnings net of debt payments are $72,000 four years after graduation for all law school graduates, but exceed $200,000 at seven law schools. By comparison, graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.

From DSC:
A former boss’ husband was starting up a local public defender’s office in Michigan and needed to hire over two dozen people. The salaries were in the $40Ks, she said. This surprised me greatly, as I thought all lawyers were bringing in the big bucks. This is not the case, clearly. Many lawyers do not make the big bucks, as this report shows:

…graduates of 33 law schools earn less than $55,000 net of debt payments four years after graduation.


Also relevant/see:

 
© 2024 | Daniel Christian