Why are we doing this work? Over the past two years, the U.S. Department of Education has been committed to an ongoing conversation with educators, students, researchers, developers, and the educational community at large about the rapid progress of Artificial Intelligence (AI) and its implications for teaching and learning.
Many educators are seeking resources clarifying what AI is and how it will impact their work and their students. Similarly, developers of educational technology (“edtech”) products seek guidance on what guardrails exist that can support their efforts. After the release of our May 2023 report Artificial Intelligence and the Future of Teaching and Learning, we heard the desire for more.
Moving from reaction to action, higher education stakeholders are currently exploring the opportunities afforded by AI for teaching, learning, and work while maintaining a sense of caution for the vast array of risks AI-powered technologies pose. To aid in these efforts, we present this inaugural EDUCAUSE AI Landscape Study, in which we summarize the higher education community’s current sentiments and experiences related to strategic planning and readiness, policies and procedures, workforce, and the future of AI in higher education.
Educational administrators should not worry about every AI development but should instead focus on the big picture, as those big-picture changes will reshape the entire world, including the educational system.
AI and related technologies (robotics, synthetic biology, and brain-computer interfaces) will continue to impact society and the entire educational system over the next 10 years. This impact will be greater than anything that has happened over the last 100 years, including COVID-19: the pandemic eventually ended, but the disruptive force of these technologies will only continue to develop.
AI is the bull in the china shop, redefining the world and the educational system. Students writing a paper with AI is barely a poke in the educational world relative to what is starting to happen (active AI teachers and tutors; AI assessment; AI glasses; immersive learning environments; young students able to start their own businesses with AI tools; AIs replacing and changing jobs; deep voice and video fakes; intelligence leveling; individualized instruction; interactive and highly intelligent computers; computers that can act autonomously; and more).
hallucinate (verb)
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.
The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”
Along these lines, see:
TECH TRENDS 2024 — from deloitte.com
Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.
The ruling follows a similar decision denying patent registrations naming AI as creators.
The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.
The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.
…
The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.
On this same topic, also see:
The historic NYT v. @OpenAI lawsuit filed this morning, as broken down by me, an IP and AI lawyer, general counsel, and longtime tech person and enthusiast.
Tl;dr – It’s the best case yet alleging that generative AI is copyright infringement. Thread.
ChatGPT and Other Chatbots
The arrival of ChatGPT sparked a wave of new AI tools and changed the way we think about using chatbots in our daily lives.
Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.
These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.
Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.
With the AI-assisted reporter churning out bread and butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.
“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”
This year, I watched AI change the world in real time.
From what happened, I have no doubts that the coming years will be the most transformative period in the history of humankind.
Here’s the full timeline of AI in 2023 (January-December):
What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.
Forty years ago, the release of A Nation at Risk led to what we know today as the modern school reform movement. With its calls for increased academic rigor, more productive use of instructional time, more effective teaching, and more impactful leadership, A Nation at Risk set in motion policy and practice changes at every level of the education system. But after four decades, what has been the result? And where do we go from here?
…
Forty years on, significant challenges remain. The COVID-19 pandemic has had devastating effects on student learning, and chronic absenteeism remains at alarming rates. Even prior to the pandemic, student achievement, as measured by standardized tests, seemed to have plateaued despite ever-increasing resources—in time, dollars, research, technology, and human capital—being devoted to school reform.
Much has been tried in the effort to improve our schools. Has any of it made a difference?
From DSC: Hhhhhmmmmm……not sure yet that this is a good idea. But I doubt there’s any stopping it.
We have formed a new global partnership with @AxelSpringer and its news products.
Real-time information from @politico, @BusinessInsider, European properties @BILD and @welt, and other publications will soon be available to ChatGPT users.
Recent advances in artificial intelligence (AI) have created a step change in how to measure poverty and other human development indicators. Our team has used a type of AI known as a deep convolutional neural network (DCNN) to study satellite imagery and identify some types of poverty with a level of accuracy close to that of household surveys.
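The core idea — a convolutional network that turns raw satellite pixels into a poverty indicator — can be illustrated with a toy sketch. To be clear, this is not the team’s actual DCNN (a real one would have many trained layers); every function and parameter below is a simplified stand-in for illustration, assuming only NumPy is available:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def predict_poverty_score(tile, kernels, weights, bias):
    """One conv layer -> ReLU -> global average pooling -> linear score in (0, 1)."""
    features = []
    for k in kernels:
        fmap = np.maximum(conv2d(tile, k), 0.0)  # ReLU nonlinearity
        features.append(fmap.mean())             # global average pooling
    logit = np.dot(weights, features) + bias
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid -> probability-like score
```

In practice such a model is trained against household-survey labels, so the learned kernels pick up visual proxies of wealth such as roof materials, road density, and building layout rather than hand-coded rules.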
E.U. reaches deal on landmark AI bill, racing ahead of U.S. — from washingtonpost.com by Anthony Faiola, Cat Zakrzewski and Beatriz Ríos (behind paywall)
The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.
European Union officials reached a landmark deal Friday on the world’s most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.
Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify the problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.
Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.
New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.
Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.
“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.
The students, he wrote, found the writing “soulless.”
In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?
To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.
“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”
7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.
The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.
We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.
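One way to operationalize the “AI as intern” workflow is to standardize the prompts sent to the model, so the human review step always sees comparable drafts. The sketch below is purely illustrative — the task names, template wording, and `build_prompt` helper are hypothetical, not part of Open LMS or any actual product:

```python
# Hypothetical prompt templates for common learning-design tasks; the
# wording is illustrative, not drawn from any specific tool or vendor.
TEMPLATES = {
    "learning_objectives": (
        "Write {n} measurable learning objectives for a course on {topic}, "
        "each starting with an action verb from Bloom's taxonomy."
    ),
    "question_rework": (
        "Rewrite the following assessment question in {n} different ways, "
        "keeping the difficulty constant. Topic: {topic}."
    ),
}

def build_prompt(task, topic, n=3):
    """Return a filled-in prompt for a supported learning-design task."""
    if task not in TEMPLATES:
        raise ValueError(f"Unsupported task: {task}")
    return TEMPLATES[task].format(topic=topic, n=n)
```

For example, `build_prompt("learning_objectives", "intro statistics", n=5)` yields a prompt a designer can send to a generative model and then vet, treating the response as an intern’s draft rather than a finished artifact.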
I’ve written and spoken about this before but the rise of deepfakes is going to have a profound impact on courts throughout the world. This week we saw three major deepfake stories.
Whether you are a lawyer or not, this topic will impact you. So, please consider these questions, as we will need to have answers for each one very soon (if not now).
How will we establish a reliable and consistent standard to authenticate digital evidence as genuine and not altered by deepfake technology?
Will the introduction of deepfakes shift the traditional burdens of proof or production, especially when digital evidence is introduced?
Will courts require expert witnesses for digital evidence authentication in every case, and what standards will be used to qualify these experts?
Are there existing technological tools or methods to detect deepfakes? (Yes, such tools exist, but none is 100% reliable.) How can courts keep abreast of rapidly advancing technology?
…plus several more questions
From DSC: What are law schools doing about this? Are they addressing this?
And speaking of legal matters and law schools, this might be interesting or helpful to someone out there:
Take ownership of the file.
When you enter #biglaw, you’ll hear this constantly.
Student Use Cases for AI: Start by Sharing These Guidelines with Your Class — from hbsp.harvard.edu by Ethan Mollick and Lilach Mollick
To help you explore some of the ways students can use this disruptive new technology to improve their learning—while making your job easier and more effective—we’ve written a series of articles that examine the following student use cases:
Earlier this week, CETL and AIG hosted a discussion among UM faculty and other instructors about teaching and AI this fall semester. We wanted to know what was working when it came to policies and assignments that responded to generative AI technologies like ChatGPT, Google Bard, Midjourney, DALL-E, and more. We were also interested in hearing what wasn’t working, as well as questions and concerns that the university community had about teaching and AI.
Then, in class he put them into groups where they worked together to generate a 500-word essay on “Why I Write” entirely through ChatGPT. Each group had complete freedom in how they chose to use the tool. The key: They were asked to evaluate their essay on how well it offered a personal perspective and demonstrated a critical reading of the piece. Weiss also graded each ChatGPT-written essay and included an explanation of why he came up with that particular grade.
After that, the students were asked to record their observations on the experiment on the discussion board. Then they came together again as a class to discuss the experiment.
Weiss shared some of his students’ comments with me (with their approval). Here are a few:
Asked to describe the state of generative AI that they would like to see in higher education 10 years from now, panelists collaboratively constructed their preferred future.
Julie York, a computer science and media teacher at South Portland High School in Maine, was scouring the internet for discussion tools for her class when she found TeachFX. An AI tool that takes recorded audio from a classroom and turns it into data about who talked and for how long, it seemed like a cool way for York to discuss issues of data privacy, consent and bias with her students. But York soon realized that TeachFX was meant for much more.
York found that TeachFX listened to her very carefully, and generated a detailed feedback report on her specific teaching style. York was hooked, in part because she says her school administration simply doesn’t have the time to observe teachers while tending to several other pressing concerns.
“I rarely ever get feedback on my teaching style. This was giving me 100 percent quantifiable data on how many questions I asked and how often I asked them in a 90-minute class,” York says. “It’s not a rubric. It’s a reflection.”
TeachFX is easy to use, York says. It’s as simple as switching on a recording device.
…
But TeachFX, she adds, is focused not on her students’ achievements, but instead on her performance as a teacher.
ChatGPT Is Landing Kids in the Principal’s Office, Survey Finds — from the74million.org by Mark Keierleber
While educators worry that students are using generative AI to cheat, a new report finds students are turning to the tool more for personal problems.
Indeed, 58% of students, and 72% of those in special education, said they’ve used generative AI during the 2022-23 academic year, just not primarily for the reasons that teachers fear most. Among youth who completed the nationally representative survey, just 23% said they used it for academic purposes and 19% said they’ve used the tools to help them write and submit a paper. Instead, 29% reported having used it to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.
Part of the disconnect dividing teachers and students, researchers found, may come down to gray areas. Just 40% of parents said they or their child were given guidance on ways they can use generative AI without running afoul of school rules. Only 24% of teachers say they’ve been trained on how to respond if they suspect a student used generative AI to cheat.
The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.
Thanks to generative AI, such visions are transitioning from fiction to reality.
Video: Unleashing the Power of AI in L&D — from drphilippahardman.substack.com by Dr. Philippa Hardman
An exclusive video walkthrough of my keynote at Sweden’s national L&D conference this week
Highlights
The wicked problem of L&D: last year, $371 billion was spent on workplace training globally, but only 12% of employees apply what they learn in the workplace
An innovative approach to L&D: when Mastery Learning is used to design & deliver workplace training, the rate of “transfer” (i.e. behaviour change & application) is 67%
AI 101: quick summary of classification, generative and interactive AI and its uses in L&D
The impact of AI: my initial research shows that AI has the potential to scale Mastery Learning and, in the process:
reduce the “time to training design” by 94% > faster
reduce the cost of training design by 92% > cheaper
increase the quality of learning design & delivery by 96% > better
Research also shows that the vast majority of workplaces are using AI only to “oil the machine” rather than innovate and improve our processes & practices
Practical tips: how to get started on your AI journey in your company, and a glimpse of what L&D roles might look like in a post-AI world
With the advancements in Artificial Intelligence (AI), designers now have access to a wide array of free AI-powered tools that streamline their creative process, enhance productivity, and add a touch of uniqueness to their designs. In this article, we will explore ten such free AI tools websites for graphic designing that have revolutionized the way designers approach their craft.
Generative Art in Motion — from heatherbcooper.substack.com by Heather Cooper
Animation and video tools create an explosion of creative expression
Google will soon require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.
AI-generated election ads on YouTube and other Google platforms that alter people or events must include a clear disclaimer located somewhere that users are likely to notice, the company said in an update this week to its political content policy.
For many individuals stepping back into society after incarceration, finding a stable place to call home can be complicated. The reality is that those who have been previously incarcerated are almost 10 times more likely to face homelessness compared to the general public. With over 725,000 people leaving state and federal prisons each year, the quest for housing becomes not only a personal challenge but a broader societal concern. Stable housing is crucial for successful reintegration, providing a foundation for building a new chapter in life. In this article, we’ll shed light on the challenges and offer empowering resources for those on their journey to find housing after prison.
Table of Contents
Understanding the Housing Landscape
Utilizing Support Services
Creating a Housing Plan
Securing and Maintaining Housing
Continuing Personal Growth and Reintegration
Conclusion
From DSC: I’m posting this in the hopes that this information may help someone out there. Also, my dad used to donate some of his time in retirement to an agency that helped people find housing. He mentioned numerous times how important it was for someone to have a safe place to stay that they could call their own.
So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:
What value do we offer to our students?
What value will they need to offer to the world?
How are we preparing them to offer that value?
The answers to these questions are crucial, and they will redefine the trajectory of our education system.
We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.
Learning how to leverage ChatGPT is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.
As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.
That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.
Midjourney AI Art for Teachers (for any kind of teacher, not just art teachers) — from The AI Educator on YouTube by Dan Fitzpatrick
From DSC: This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!
In the new-normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.
Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (ChatGPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLMs), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.
Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.
…
Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.
I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.
Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.
AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
Generative AI and education futures — from ucl.ac.uk Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.
Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.
To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.
Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)
* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring, access to education for those who don’t currently have it (developing world), the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology), individualized instruction (it will be good at this soon), and stuff that it is already good at (24/7 availability, language tutoring, immediate feedback regarding argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).
* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.
* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.
* No professor who spoke thought using a plagiarism detector was a good idea.
* Everyone thought that significant advancements in AI technology were inevitable.
* Almost everyone expressed being overwhelmed by the rate of change.
For Wiese, it was all a big, expensive gamble — and, in one form or another, it is one that millions of people with criminal records take every year as they pursue education and workforce training on their way to jobs that require a license. Yet that effort might be wasted thanks to the nearly 14,000 laws and regulations that can restrict individuals with arrest and conviction histories from getting licensed in a given field.
July 28 (Reuters) – A week after The University of Michigan Law School banned the use of popular artificial intelligence tools like ChatGPT on student applications, at least one school is going in the other direction.
The Sandra Day O’Connor College of Law at Arizona State University said on Thursday that prospective students are explicitly allowed to use generative artificial intelligence tools to help draft their applications.
Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval’s importance is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year that launched an era of mass AI, so too does a new academic year dawn for many – with hope and enthusiasm about new roles, titles, or simply just a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems and push us into a new stage of innovative teaching practice whether we desire it or not. The risk, and the hope, is that the quiet revolution remains outside the regulators’ and ministries’ purview, which could risk a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.
“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”
Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines
Direction 1: From written description to multimodal explanation and application
Direction 2: From literature review alone to referencing lectures
Direction 3: From presentation of ideas to defence of views
Direction 4: From working alone to student-staff partnership
If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn-feed and/or inbox with ideas, inspirational writing and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest movements you need to be aware of.
My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.
Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.
Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.
You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.
Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.
Maynard, along with Jules White at Vanderbilt University, are among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.
The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.
That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.
…
Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)
Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…
The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”
The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT