July 28 (Reuters) – A week after The University of Michigan Law School banned the use of popular artificial intelligence tools like ChatGPT on student applications, at least one school is going in the other direction.
The Sandra Day O’Connor College of Law at Arizona State University said on Thursday that prospective students are explicitly allowed to use generative artificial intelligence tools to help draft their applications.
Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval’s importance is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year which launched an era of mass AI, so too does a new academic year dawn for many – with hope and enthusiasm about new roles, titles, or simply just a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems and push us into a new stage of innovative teaching practice whether we desire it or not. The risk and hope is that the quiet revolution remains outside the regulator’s and ministries’ purview, which could risk a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.
“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”
Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines.
Direction 1: From written description to multimodal explanation and application
Direction 2: From literature review alone to referencing lectures
Direction 3: From presentation of ideas to defence of views
Direction 4: From working alone to student-staff partnership
If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn feed and/or inbox with ideas, inspirational writing, and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest developments you need to be aware of.
My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.
Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.
Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.
You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.
Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey
While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.
Maynard, along with Jules White at Vanderbilt University, is among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.
The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.
That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.
…
Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)
Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…
The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”
The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT
Ever since the Supreme Court announced last year that it would rule on two cases involving affirmative action in college admissions, the world of higher education has been anxiously awaiting a decision. Most experts predicted the court would eventually forbid the use of race as a factor in admissions decisions, and colleges and advocates have been scrambling to prepare for that new world.
On Thursday, the Supreme Court met those expectations, ruling that the consideration of race in college admissions is unconstitutional.
The U.S. Supreme Court ruled Thursday that race-conscious admissions practices at Harvard University and the University of North Carolina at Chapel Hill are unconstitutional, shattering decades of legal precedent and upending the recruitment and enrollment landscape for years to come.
The Supreme Court on Thursday held that race-conscious admissions programs at Harvard and the University of North Carolina violate the Constitution’s guarantee of equal protection, a historic ruling that rolls back decades of precedent and will force a dramatic change in how the nation’s private and public universities select their students.
The U.S. Supreme Court on Thursday struck down colleges’ use of race-conscious admissions nationwide, ruling in a pair of closely watched cases that the practice is racially discriminatory.
Writing for the court’s majority, Chief Justice John G. Roberts Jr. said that policies that claim to consider an applicant’s race as one factor among many are in fact violating the equal-protection clause of the 14th Amendment to the U.S. Constitution.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
On giving AI eyes and ears — from oneusefulthing.org by Ethan Mollick
AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increases their capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI — from chronicle.com by Beth McMurtrie and Beckie Supiano
Professors scrambled to react to ChatGPT this spring — and started planning for the fall
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT's capabilities: not because it can now do every task perfectly, but because it now knows how to ask for what it wants and get it.
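A minimal sketch of the pattern described above (the tool name, schema, and simulated model response here are illustrative examples, not from the article): you advertise local functions to the model as JSON schemas, the model replies with a `function_call` naming a tool and its arguments, and your own code runs the tool and hands the result back.

```python
import json

# A hypothetical local tool the model is allowed to call.
def get_word_count(text: str) -> int:
    return len(text.split())

# Schema advertised to the model via the `functions` parameter of the
# Chat Completions API (the June 2023 "function calling" update).
functions = [{
    "name": "get_word_count",
    "description": "Count the words in a piece of text.",
    "parameters": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
        "required": ["text"],
    },
}]

# Registry mapping function names to local code.
TOOLS = {"get_word_count": get_word_count}

def dispatch(function_call: dict):
    """Route a model-issued function_call to local code.

    The model returns `arguments` as a JSON string, so it must be
    parsed before the local function is invoked.
    """
    fn = TOOLS[function_call["name"]]
    args = json.loads(function_call["arguments"])
    return fn(**args)

# Simulated model response: GPT decided it needs the tool.
simulated_call = {
    "name": "get_word_count",
    "arguments": '{"text": "function calling gives GPT hands"}',
}
result = dispatch(simulated_call)  # → 5
```

In a real exchange, the `functions` list would be passed to the chat completion request alongside the messages, and `result` would be sent back to the model in a follow-up message so it can compose its final answer.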
How ChatGPT can help disrupt assessment overload — from timeshighereducation.com by David Carless
Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
43% of companies already have or plan to adopt AI interviews by 2024
Two-thirds of this group believe AI interviews will increase hiring efficiency
15% say that AI will be used to make decisions on candidates without any human input
More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com
Sam Altman, CEO & Co-Founder of OpenAI, discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…
IAALS, the Institute for the Advancement of the American Legal System at the University of Denver, announced today the release of its new report, Allied Legal Professionals: A National Framework for Program Growth. As part of IAALS’ Allied Legal Professionals project—which is generously supported by the Sturm Family Foundation—this report includes multiple research-informed recommendations to help standardize a new tier of legal professionals across states, with the goal of increasing the options for accessible and affordable legal help for the public.
“To hire a lawyer, people either need considerable money or have an income low enough to qualify for the limited legal aid available. The problem is that the majority of people in the middle class don’t fit into either of those categories, making access to legal services incredibly difficult,” says IAALS Director of Special Projects Michael Houlberg. “Even if every lawyer took on pro bono clients, it wouldn’t come close to addressing the need. And IAALS’ research shows that people who need legal help are open to receiving it from qualified and authorized providers who are not lawyers.”
Since 2012, 65 private colleges and universities with enrollment of 500 students or more, that I know of, have reduced their tuition, and commensurately reduced their discount rate. Several more schools are planning price resets for fall 2024. Schools use this strategy to increase the number of students who will consider them, and this approach has been successful for more than 80 percent of the schools which have reduced their published price.
From DSC: What I learned of economics in college would agree with this last bit. As the price goes down, demand goes up. And conversely, as the price goes up, demand goes down. As Lucie points out, many people don’t know about the heavily discounted prices within higher education. I’ve been fighting for price decreases for over 15 years…clearly, I haven’t had much success in that area.
AI-assisted cheating isn’t a temptation if students have a reason to care about their own learning.
Yesterday I happened to listen to two different podcasts that ended up resonating with one another and with an idea that’s been rattling around inside my head with all of this moral uproar about generative AI:
** If we trust students – and earn their trust in return – then they will be far less motivated to cheat with AI or in any other way. **
First, the question of motivation. On the Intentional Teaching podcast, while interviewing James Lang and Michelle Miller on the impact of generative AI, Derek Bruff points out (drawing on Lang’s Cheating Lessons book) that if students have “real motivation to get some meaning out of [an] activity, then there’s far less motivation to just have ChatGPT write it for them.” Real motivation and real meaning FOR THE STUDENT translates into an investment in doing the work themselves.
…
Then I hopped over to one of my favorite podcasts – Teaching in Higher Ed – where Bonni Stachowiak was interviewing Cate Denial about a “pedagogy of kindness,” which is predicated on trusting students and not seeing them as adversaries in the work we’re doing.
So the second key element: being kind and trusting students, which builds a culture of mutual respect and care that again diminishes the likelihood that they will cheat.
…
Again, human-centered learning design seems to address so many of the concerns and challenges of the current moment in higher ed. Maybe it’s time to actually practice it more consistently. #aiineducation #higheredteaching #inclusiveteaching
How liberal arts colleges can make career services a priority — from highereddive.com by John Boyer
Creating internships and focusing on short-term experiences has a big impact, the longtime undergraduate dean at the University of Chicago says.
TI-ADDIE: A Trauma-Informed Model of Instructional Design — from er.educause.edu by Ali Carr-Chellman and Treavor Bogard
Adjusting the ADDIE model of instructional design specifically to accommodate trauma offers an opportunity to address the collective challenges that designers, instructors, and learners have faced during the current learning moment.
Law school students can now take up to half of their classes online following a recent policy change by the American Bar Association.
ABA’s accrediting body voted last week to raise the ceiling on the number of credits students can earn online for their J.D., up from one-third.
It also struck down a prohibition on first-year law students taking no more than 10 credit hours remotely.
From DSC: It’s almost June of 2023 and matters/impacts of Artificial Intelligence (AI) are increasingly popping up throughout our society. But WOW! Look at this recent piece of news from the American Bar Association: Law school students can now take up to 50% of their credits online! (It used to be just 30%.)
At a time when we need many more lawyers, judges, legislators, politicians, and others to be more informed about emerging technologies — as well as being more tech-savvy themselves — I don’t think the ABA should be patting themselves on the back for this policy change. It’s a step in the right direction, but why it’s not 100% is mind-boggling to me.
The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI). Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities – and pitfalls – of the new technology. In a matter of months, several AI models have entered the market. Mr Altman said a new agency should be formed to license AI companies.
Artificial intelligence was a focus on Capitol Hill Tuesday. Many believe AI could revolutionize, and perhaps upend, considerable aspects of our lives. At a Senate hearing, some said AI could be as momentous as the industrial revolution and others warned it’s akin to developing the atomic bomb. William Brangham discussed that with Gary Marcus, who was one of those who testified before the Senate.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins. https://t.co/t4syFUj0fL pic.twitter.com/Mw9FMpKq91
Are you ready for the Age of Intelligence? — from linusekenstam.substack.com by Linus Ekenstam
Let me walk you through my current thoughts on where we are, and where we are going.
From DSC: I post this one to relay the exponential pace of change that Linus also thinks we’ve entered, and to present a knowledgeable person’s perspectives on the future.
Catastrophe / Eucatastrophe — from oneusefulthing.org by Ethan Mollick
We have more agency over the future of AI than we think.
Excerpt (emphasis DSC):
Every organizational leader and manager has agency over what they decide to do with AI, just as every teacher and school administrator has agency over how AI will be used in their classrooms. So we need to be having very pragmatic discussions about AI, and we need to have them right now: What do we want our world to look like?
Also relevant/see:
That wasn’t Google I/O — it was Google AI — from technologyreview.com by Mat Honan
If you thought generative AI was a big deal last year, wait until you see what it looks like in products already used by billions.
Google is in trouble.
I got early ‘Alpha’ access to GPT-4 with browsing and ran some tests.
Microcredentials Can Make a Huge Difference in Higher Education — from newthinking.com by Shannon Riggs
The Ecampus executive director of academic programs and learning innovation at Oregon State University believes that shorter form, low-cost courses can open up colleges to more people.
That so much student loan debt exists is a clear signal that higher education needs to innovate to reduce costs, increase access and improve students’ return on investment. Microcredentials are one way we can do this.
As the Supreme Court weighs President Joe Biden’s student loan forgiveness plan, college tuition keeps climbing.
This year’s incoming freshman class can expect to borrow as much as $37,000 to help cover the cost of a bachelor’s degree, according to a recent report.
College is only getting more expensive. Tuition and fees plus room and board at four-year, in-state public colleges rose more than 2% to $23,250, on average, in the 2022-23 academic year; at four-year private colleges, it increased by more than 3% to $53,430, according to the College Board, which tracks trends in college pricing and student aid.
Many students now borrow to cover the tab, which has already propelled collective student loan debt in the U.S. past $1.7 trillion.
Leaders of colleges and universities face unprecedented challenges today. Tuition has more than doubled over the past two decades as state and federal funding has decreased. Renewed debates about affirmative action and legacy admissions are roiling many campuses and confusing students about what it takes to get accepted. Growing numbers of administrators are matched by declining student enrollment, placing new financial pressures on institutions of higher learning. And many prospective students and their parents are losing faith in the ROI of such an expensive investment and asking the simple question: Is it all worth it? Join distinguished leaders from public and private institutions for this panel discussion on how they are navigating these shifts and how they see the future of higher education.
This year’s lists also offer a hint of how widespread the rankings revolt was. Seventeen medical schools and 62 law schools — nearly a third of the law schools U.S. News ranks — didn’t turn in data to the magazine this year. (It’s not clear what nonparticipation rates have been in the past. Reached by email to request historical context, a spokesperson for U.S. News pointed to webpages that are no longer online. U.S. News ranked law and medical schools that didn’t cooperate this year by using publicly available and past survey data.)
Student loan borrowers who would stand to benefit the most from income-driven repayment plans, or IDRs, are less likely to know about them, according to a new report from left-leaning think tank New America.
Around 2 in 5 student-debt holders earning less than $30,000 a year reported being unfamiliar with the repayment plans. Under a proposed plan from the U.S. Education Department, IDR minimum monthly loan payments for low-income earners, such as this group, could drop to $0.
Just under half of borrowers in default had not heard of IDRs, despite the plans offering a pathway to becoming current on their loans, the report said. Only one-third of currently defaulted borrowers had ever enrolled in IDR.
Across all socioeconomic and racial groups, Americans want an education system that goes beyond college preparation and delivers practical skills for every learner, based on their own needs, goals and vision for the future.
We believe that this can be achieved by making the future of learning more personalized, focused on the needs of individual learners, with success measured by progress and proficiency instead of point-in-time test scores.
Change is hard, but we expect our students to take risks and fail every day. We should ask no less of ourselves.
Like a lot of you, I have been wondering how students are reacting to the rapid launch of generative AI tools. And I wanted to point you to creative ways in which professors and teaching experts have helped involve them in research and policymaking.
At Kalamazoo College, Autumn Hostetter, a psychology professor, and six of her students surveyed faculty members and students to determine whether they could detect an AI-written essay, and what they thought of the ethics of using various AI tools in writing. You can read their research paper here.
…
Next, participants were asked about a range of scenarios, such as using Grammarly, using AI to make an outline for a paper, using AI to write a section of a paper, looking up a concept on Google and copying it directly into a paper, and using AI to write an entire paper. As expected, commonly used tools like Grammarly were considered the most ethical, while writing a paper entirely with AI was considered the least. But researchers found variation in how people approached the in-between scenarios. Perhaps most interesting: Students and faculty members shared very similar views with each scenario.
Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Elon Musk, Steve Wozniak and dozens of top scientists concerned about the technology moving too fast have signed an open letter asking companies to pull back on artificial intelligence. @trevorlault reports on the new A.I. plea. pic.twitter.com/Vu9QlKfV8C
However, the letter has since received heavy backlash, as there appears to have been no verification of its signatories. Yann LeCun from Meta denied signing the letter and completely disagreed with its premise. (source)
In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.
1/The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea.
I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Lets balance the huge value AI is creating vs. realistic risks.