AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to make a case for AI experimentation & testing in learning & development


6 AI Tools Recommended By Teachers That Aren’t ChatGPT — from forbes.com by Dan Fitzpatrick

Here are six AI tools making waves in classrooms worldwide:

  • Brisk Teaching
  • SchoolAI
  • Diffit
  • Curipod
  • Skybox by Blockade Labs in ThingLink
  • Ideogram

With insights from educators who are leveraging their potential, let’s explore them in more detail.


AI Is Speeding Up L&D But Are We Losing the Learning? — from learningguild.com by Danielle Wallace

The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes learning outcomes over speed alone. Here are three key steps to achieve this:

1. Playtest and Learn About AI
2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…


Some other tools to explore:

Descript: If you can edit text, you can edit videos. — per Bloomberg’s Vlad Savov
Descript is the AI-powered, fully featured, end-to-end video editor that you already know how to use.

A video editor that works like docs and slides
No need to learn a new tool — Descript works like the tools you’ve already learned.

Audeze | Filter — per Bloomberg’s Vlad Savov


AI Chatbots in Schools: Findings from a Poll of K-12 Teachers, Students, Parents, and College Undergraduates — from Impact Research; via Michael Spencer and Lily Lee

Key Findings

  • In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis; it is now all over our education system.
  • While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
  • Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, offers none of the teacher training that is wanted, and isn’t meeting the demand from students who want careers in fields that will require AI.
  • The AI vacuum in school policy means AI is currently used without authorization, even though people want policies that encourage it. Kids, parents, and teachers are figuring it out on their own, without express permission, whereas all stakeholders would rather have a policy that explicitly encourages thoughtful use of AI.

The Value of AI in Today’s Classrooms — from waltonfamilyfoundation.org

There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.

In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.

A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.


Hybrid learning through podcasts: a practical approach — from timeshighereducation.com by Catherine Chambers
Adapting practice-based learning to a blend of synchronous and asynchronous delivery gives learners more control and creates opportunities for real-world learning of skills such as podcast production, writes Catherine Chambers

Hybrid learning provides students with greater control over their learning and enables the development of employability skills, supporting practice-based group work through in situ activities.

Aligned with Keele’s curriculum expectations framework, the module was designed around podcasts to support inclusivity, active learning, digital capability and external engagement.

 

LearnLM is Google's new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.



AI in Education: Google’s LearnLM product has incredible potential — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Google’s Ed Suite is giving Teachers new ideas for incorporating AI into the classroom.

We often talk about what generative AI will do for coders, healthcare, science, or even finance, but what about the benefits for the next generation? Permit me, if you will, to focus here on teachers and students.

It’s no secret that some of the most active users of ChatGPT in its heyday were students. But how are other major tech firms thinking about this?

I actually think one of the best products with the highest ceiling from Google I/O 2024 is LearnLM. It has to be way more than a chatbot; it has to feel like a multimodal tutor. I can imagine frontier model agents (H) doing this fairly well.

What if everyone, everywhere could have their own personal AI tutor, on any topic?


ChatGPT4o Is the TikTok of AI Models — from nickpotkalitsky.substack.com by Nick Potkalitsky
In Search of Better Tools for AI Access in K-12 Classrooms

Nick makes the case that we should pause the use of OpenAI products in classrooms:

In light of these observations, it’s clear that we must pause and rethink the use of OpenAI products in our classrooms, except for rare cases where accessibility needs demand it. The rapid consumerization of AI, epitomized by GPT4o’s transformation into an AI salesperson, calls for caution.


The Future of AI in Education: Google and OpenAI Strategies Unveiled — from edtechinsiders.substack.com by Ben Kornell

Google’s Strategy: AI Everywhere
Key Points

  • Google will win through seamless Gemini integration across all Google products
  • Enterprise approach in education to make Gemini the default at low/no additional cost
  • Functional use cases and model tuning demonstrate Google’s knowledge of educators

OpenAI’s Strategy: ChatGPT as the Front Door
Key Points

  • OpenAI taking a consumer-led freemium approach to education
  • API powers an app layer that delivers education-specific use cases
  • Betting on a large user base + app marketplace

Hello GPT-4o — from openai.com
We’re announcing GPT-4o, our new flagship model that can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.

Example topics covered here:

  • Two GPT-4os interacting and singing
  • Languages/translation
  • Personalized math tutor
  • Meeting AI
  • Harmonizing and creating music
  • Providing inflection, emotions, and a human-like voice
  • Understanding what the camera is looking at and integrating it into the AI’s responses
  • Providing customer service

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
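For readers curious what “any combination of text, audio, image, and video” input means in practice, here is a minimal sketch of composing a combined text-plus-image request in the style of OpenAI’s chat API. The payload shape follows OpenAI’s published chat-completions convention; treat the model name and details as illustrative assumptions rather than a definitive implementation.

```python
# Sketch: composing a single user turn that mixes text and an image
# reference for a multimodal model such as GPT-4o. The dict structure
# mirrors OpenAI's chat-completions message format.

def build_multimodal_message(prompt: str, image_url: str) -> dict:
    """Combine a text prompt and an image reference into one user turn."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

msg = build_multimodal_message(
    "What is this diagram showing?",
    "https://example.com/diagram.png",
)

# Actually sending it would look roughly like (requires an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   client.chat.completions.create(model="gpt-4o", messages=[msg])
```

The point of the “omni” design is that this one request shape covers what previously took separate vision and text pipelines.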





From DSC:
I like the assistive tech angle here.


Learning to Work, Or Working to Learn? — from insidehighered.com by Erin Crisp; via Melanie Booth, Ed.D. on LinkedIn
We need a systems approach to making work-to-learn models just as accessible as traditional learn-to-work pathways, Erin Crisp writes.

Over the past two years, I have had the unique experience of scaling support for a statewide registered teacher-apprenticeship program while also parenting three college-aged sons. The declining appeal of postsecondary education, especially among young men, is evident at my dinner table, in my office, and in my dreams (literally).

Scaling a statewide apprenticeship program for the preparation of teachers has meant that I am consistently hearing from four stakeholder groups—K-12 school district leaders, college and university leaders, aspiring young educators, and local workforce development leaders.

A theme has emerged from my professional life, one that echoes the dinner table conversations happening in my personal life: Society needs systematic work-to-learn pathways in addition to the current learn-to-work ecosystem. This is not an either/or. What we need is a systematic expansion of effort.

In a work-to-learn model, the traditional college sequence is flipped. Instead of starting with general education coursework or survey courses, the working learner is actively engaged in practicing the skills they are interested in acquiring. A workplace supervisor often helps them make connections between the coursework and the job. The learner’s attention is piqued. The learning is relevant. The learner gains confidence, and seeing their influence in the workplace (and paycheck) is satisfying. All of the ARCS model elements (attention, relevance, confidence, satisfaction) are easily achieved.

 

Learning On Purpose | What problem do you want to solve? — from michelleweise.substack.com by Dr. Michelle R. Weise

I quickly decided to take a different tack with my students, and instead asked each of them, “What problem in the world do you think you want to solve? If you could go to a school of hunger, poverty, Alzheimer’s disease, mental health … what kind of school would you want to attend?” This is when they started nodding vigorously.

What each of them identified was a grand challenge, or what Stanford d.school Executive Director Sarah Stein Greenberg has called: purpose learning. In a great talk for Wired, Greenberg asks,

What if students declared missions not majors? Or even better, what if they applied to the School of Hunger or the School of Renewable Energy? These are real problems that society doesn’t have answers to yet. Wouldn’t that fuel their studies with some degree of urgency and meaning and real purpose that they don’t yet have today?

 


Information Age vs Generation Age Technologies for Learning — from opencontent.org by David Wiley

Remember (emphasis DSC)

  • the internet eliminated time and place as barriers to education, and
  • generative AI eliminates access to expertise as a barrier to education.

Just as instructional designs had to be updated to account for all the changes in affordances of online learning, they will need to be dramatically updated again to account for the new affordances of generative AI.


The Curious Educator’s Guide to AI | Strategies and Exercises for Meaningful Use in Higher Ed  — from ecampusontario.pressbooks.pub by Kyle Mackie and Erin Aspenlieder; via Stephen Downes

This guide is designed to help educators and researchers better understand the evolving role of Artificial Intelligence (AI) in higher education. This openly-licensed resource contains strategies and exercises to help foster an understanding of AI’s potential benefits and challenges. We start with a foundational approach, providing you with prompts on aligning AI with your curiosities and goals.

The middle section of this guide encourages you to explore AI tools and offers some insights into potential applications in teaching and research. Along with exposure to the tools, we’ll discuss when and how to effectively build AI into your practice.

The final section of this guide includes strategies for evaluating and reflecting on your use of AI. Throughout, we aim to promote use that is effective, responsible, and aligned with your educational objectives. We hope this resource will be a helpful guide in making informed and strategic decisions about using AI-powered tools to enhance teaching and learning and research.


Annual Provosts’ Survey Shows Need for AI Policies, Worries Over Campus Speech — from insidehighered.com by Ryan Quinn
Many institutions are not yet prepared to help their faculty members and students navigate artificial intelligence. That’s just one of multiple findings from Inside Higher Ed’s annual survey of chief academic officers.

Only about one in seven provosts said their colleges or universities had reviewed the curriculum to ensure it will prepare students for AI in their careers. Thuswaldner said that number needs to rise. “AI is here to stay, and we cannot put our heads in the sand,” he said. “Our world will be completely dominated by AI and, at this point, we ain’t seen nothing yet.”


Is GenAI in education more of a Blackberry or iPhone? — from futureofbeinghuman.com by Andrew Maynard
There’s been a rush to incorporate generative AI into every aspect of education, from K-12 to university courses. But is the technology mature enough to support the tools that rely on it?

In other words, it’s going to mean investing in concepts, not products.

This, to me, is at the heart of an “iPhone mindset” as opposed to a “Blackberry mindset” when it comes to AI in education — an approach that avoids hard wiring in constantly changing technologies, and that builds experimentation and innovation into the very DNA of learning.

For all my concerns here though, maybe there is something to being inspired by the Blackberry/iPhone analogy — not as a playbook for developing and using AI in education, but as a mindset that embraces innovation while avoiding becoming locked into apps that are detrimentally unreliable and that ultimately lead to dead ends.


Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays — from sciencedirect.com by Johanna Fleckenstein, Jennifer Meyer, Thorben Jansen, Stefan D. Keller, Olaf Köller, and Jens Möller

Highlights

  • Randomized-controlled experiments investigating novice and experienced teachers’ ability to identify AI-generated texts.
  • Generative AI can simulate student essay writing in a way that is undetectable for teachers.
  • Teachers are overconfident in their source identification.
  • AI-generated essays tend to be assessed more positively than student-written texts.

Can Using a Grammar Checker Set Off AI-Detection Software? — from edsurge.com by Jeffrey R. Young
A college student says she was falsely accused of cheating, and her story has gone viral. Where is the line between acceptable help and cheating with AI?


Use artificial intelligence to get your students thinking critically — from timeshighereducation.com by Urbi Ghosh
When crafting online courses, teaching critical thinking skills is crucial. Urbi Ghosh shows how generative AI can shape the way educators approach this


ChatGPT shaming is a thing – and it shouldn’t be — from futureofbeinghuman.com by Andrew Maynard
There’s a growing tension between early and creative adopters of text based generative AI and those who equate its use with cheating. And when this leads to shaming, it’s a problem.

Excerpt (emphasis DSC):

This will sound familiar to anyone who’s incorporating generative AI into their professional workflows. But there are still many people who haven’t used apps like ChatGPT, are largely unaware of what they do, and are suspicious of them. And yet they’ve nevertheless developed strong opinions around how they should and should not be used.

From DSC:
Yes…that sounds like how many faculty members viewed online learning, even though they had never taught online before.

 

Educators help children and teens learn how to identify fake news — from WMUK.org by Kalloli Bhatt and Sue Ellen Christian

Last year at the Kalamazoo Valley Museum, kids could learn about how misinformation is made and how to avoid it. Now the media scholar behind the exhibit is adapting it for libraries.

A new exhibit for libraries

That concern also drove the “Wonder Media” exhibit that ran through last year at the Kalamazoo Valley Museum. Sue Ellen Christian is a communications professor at Western Michigan University. The exhibit was her idea. Full disclosure: I’m a former student of Christian’s. We met in her office on campus.

“It’s really important for our entire society to think about the importance of facts and truth to a democracy,” said Christian. “And without an informed citizenry, we cannot have a healthy democracy.”

Christian recently received a grant from the Institute of Museum and Library Services, based in Washington D.C., to adapt the Wonder Media exhibit for public libraries. It’s designed to reach middle-school-age children.

Mainly, with her grant, Christian wants to develop something for students whose schools do not have librarians anymore. The website associated with the exhibit has resources for students, teachers, and libraries.

 

The Digital Transformation Journey: Lessons For Lawyers Embracing AI — from abovethelaw.com by Olga V. Mack
The journey from the days of leather-bound law books to the digital age — and now toward an AI-driven future — offers valuable lessons for embracing change.

No One Will Miss The ‘Good Old Days’
I have yet to meet a lawyer nostalgic for the days of manually updating law reports or sifting through stacks of books for a single precedent. The convenience, speed, and breadth of digital research tools have made the practice of law more efficient and effective. As we move further into the AI era, the enhancements in predictive analytics, document automation, and legal research will make the “good old days” of even the early digital age seem quaint. The efficiencies and capabilities AI brings to the table are likely to become just as indispensable as online databases are today.

The Way We ‘Law’ Will Change For The Better
The ultimate goal of integrating AI into legal practice isn’t just to replace old methods with new ones; it’s to enhance our ability to serve justice, increase access to legal services, and improve the quality of our work. AI promises to automate mundane tasks, predict legal outcomes with greater accuracy, and unearth insights from vast data. These advancements will free us to focus more on the nuanced, human aspects of law — strategy, empathy, and ethical judgment.


AI to Help Double Legal Tech Market Over Five Years, Gartner Says — from news.bloomberglaw.com by Isabel Gottlieb (behind a paywall)

  • Tech to take up a bigger share of in-house legal spend
  • Generative AI boom has much longer to run

The legal tech market will expand to $50 billion by 2027, driven by the generative artificial intelligence boom, according to an analysis by market research firm Gartner Inc.

That growth, up from about $23 billion in 2022, will be driven by continued law firm spending on AI legal tech, as well as in-house departments allocating more of their overall budgets to technology, said Chris Audet, chief of research in Gartner’s legal, risk and compliance leaders practice. The market size prediction, released publicly on Thursday, comes from a late-2023 analysis for Gartner clients, and the 2022 market size comes from …


Legal Tech Market To See Huge Lift Off Thanks to GenAI — from digit.fyi by Elizabeth Greenberg

The global legal technology market has grown significantly in recent years and generative AI (GenAI) will accelerate this growth, meaning the market will reach $50 billion in value by 2027, according to Gartner.

“GenAI has huge potential for bringing more automation to the legal space,” said Chris Audet, chief of research in the Gartner for legal, risk & compliance leaders practice.

“Rapid GenAI developments, and the widespread availability of consumer tools such as OpenAI’s ChatGPT and Google’s Bard, will quickly increase the number of established legal technology use cases, in turn creating growing market conditions for an increasing number of legal-focused tools.”

“New technologies can fundamentally change the way legal organizations do business, and GenAI has enormous potential to do this,” an analyst at Gartner said.


Revolutionizing Legal Tech in 48 Hours — from law.stanford.edu by Monica Schreiber
At CodeX Hackathon, SLS Students Help Create Award-Winning AI Tools to Help Veterans and Streamline M&A

Disabled veterans seeking to file claims with the Veterans Administration are faced with multiple hurdles and reams of paperwork. Many vets resort to paying third-party companies thousands of dollars to help them with the process.

What if there were a way to streamline the claims process—to condense burdensome information gathering and data inputting into a linear, simplified set of tasks guided by a chatbot? How long would it take to roll out a tool that could accomplish that?

The answer: about 48 hours—at least for an interdisciplinary team of students from Stanford University’s schools of Law, Business, and Computer Science collaborating feverishly during CodeX’s Large Language Model (LLM) Hackathon held recently on campus.


What If Your Law Firm Had A Blank Page For Legal Tech? — from artificiallawyer.com

If law firms had a blank page for legal technology and innovation, what would they do?

While organisations across all sectors are getting to grips with the opportunities and risks posed by genAI, forward-thinking law firm leaders are considering what it means for their businesses – today, tomorrow, and the day after tomorrow.

But some firms remain constrained by yesterday, due to legacy processes, ways of working and mindsets. To create the conditions for change, firms need to adopt a ‘blank page’ approach and review all areas of their businesses by asking: if we were starting afresh, how would we design the organisation to future-proof it to achieve transformative growth with genAI at the core?

From DSC:
This sentence reminds me of the power of culture:

But some firms remain constrained by yesterday, due to legacy processes, ways of working and mindsets.


Fresh Voices on Legal Tech with Sarah Glassmeyer — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Sarah Glassmeyer

What if, instead of tech competence being this scary, overwhelming thing, we showed lawyers how to engage with technology in a more lighthearted, even playful, way? The reality is—tech competency doesn’t have an endpoint, but the process of continuous learning shouldn’t be dull and confusing. Sarah Glassmeyer joins Dennis and Tom to talk about her perspectives on technology education for attorneys, the latest trends in the legal tech world and new AI developments, and growing your knowledge of technology by building on small skills, one at a time.

 


How Legal Technology Can Add Value to an M&A Practice — from lexology.com

Following is a primer on some of the A.I.-driven legal technologies, from contract review and automated due-diligence solutions to deal collaboration and closing-management tools, that can drive productivity and efficiency during the four phases of an M&A transaction, as well as enhance market insight and client service.

 

 

 

Description:

I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.


From DSC:
This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.


 

Instructors as Innovators: a Future-focused Approach to New AI Learning Opportunities, With Prompts — from papers.ssrn.com by Ethan R. Mollick and Lilach Mollick

Abstract

This paper explores how instructors can leverage generative AI to create personalized learning experiences for students that transform teaching and learning. We present a range of AI-based exercises that enable novel forms of practice and application including simulations, mentoring, coaching, and co-creation. For each type of exercise, we provide prompts that instructors can customize, along with guidance on classroom implementation, assessment, and risks to consider. We also provide blueprints: prompts that help instructors create their own original prompts. Instructors can leverage their content and pedagogical expertise to design these experiences, putting them in the role of builders and innovators. We argue that this instructor-driven approach has the potential to democratize the development of educational technology by enabling individual instructors to create AI exercises and tools tailored to their students’ needs. While the exercises in this paper are a starting point, not definitive solutions, they demonstrate AI’s potential to expand what is possible in teaching and learning.

 

The Curiosity Matrix: 9 Habits of Curious Minds — from nesslabs.com by Anne-Laure Le Cunff; via Roberto Ferraro

As an adaptive trait, curiosity draws us to seek information and new experiences. It’s how we learn about ourselves, others, and the world.

Curious minds are a diverse group of people, but the literature suggests that they share some common habits that support their personal and professional growth.

 

Beyond the Hype: Taking a 50 Year Lens to the Impact of AI on Learning — from nafez.substack.com by Nafez Dakkak and Chris Dede
How do we make sure LLMs are not “digital duct tape”?

[Per Chris Dede] We often think of the product of teaching as the outcome (e.g. an essay, a drawing, etc.). The essence of education, in my view, lies not in the products or outcomes of learning but in the journey itself. The artifact is just a symbol that you’ve taken the journey.

The process of learning — the exploration, challenges, and personal growth that occur along the way — is where the real value lies. For instance, the act of writing an essay is valuable not merely for the final product but for the intellectual journey it represents. It forces you to improve and organize your thinking on a subject.

This distinction becomes important with the rise of generative AI, because it uniquely allows us to produce these artifacts without taking the journey.

As I’ve argued previously, I am worried that all this hype around LLMs renders them a “type of digital duct-tape to hold together an obsolete industrial-era educational system”. 


Speaking of AI in our learning ecosystems, also see:


On Building an AI Policy for Teaching & Learning — by Lance Eaton
How students drove the development of a policy for students and faculty

Well, last month the policy was approved by our Faculty Curriculum Committee, and we can finally share the final version: AI Usage Policy. College Unbound also created (all-human, no AI used) a press release with the policy and some of the details.

To ensure you see this:

  • Usage Guidelines for AI Generative Tools at College Unbound
    These guidelines were created and reviewed by College Unbound students in Spring 2023 with the support of Lance Eaton, Director of Faculty Development & Innovation.  The students include S. Fast, K. Linder-Bey, Veronica Machado, Erica Maddox, Suleima L., Lora Roy.

ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds — from psypost.org by Eric W. Dolan

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work. The study, published in the Canadian Psychological Association’s Mind Pad, found that “false citation rates” across various psychology subfields ranged from 6% to 60%. Surprisingly, these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals.
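One lightweight defense against hallucinated citations is to sanity-check every DOI before trusting it: first that it is even well-formed, then (in real use) that it resolves via a registry such as Crossref. Here is a minimal, illustrative sketch of the first step; the regex and workflow are my own, not something the study proposes.

```python
import re

# A DOI has the form "10.<registrant>/<suffix>", e.g. 10.1037/mac0000068.
# A syntactic check like this catches malformed strings; it cannot catch
# a well-formed DOI that points to a paper that does not exist. For that,
# you would query a registry such as Crossref's public REST API.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string is syntactically a plausible DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))

print(looks_like_doi("10.1037/mac0000068"))  # -> True  (well-formed)
print(looks_like_doi("doi:10.1037"))         # -> False (malformed)
```

The study’s finding that fabricated citations carry properly formatted DOIs is exactly why the second step, resolving the DOI against a registry, is the one that actually matters.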

 
© 2024 | Daniel Christian