From DSC: Hhhhhmmmmm… not sure yet that this is a good idea. But I doubt there’s any stopping it.
We have formed a new global partnership with @AxelSpringer and its news products.
Real-time information from @politico, @BusinessInsider, European properties @BILD and @welt, and other publications will soon be available to ChatGPT users.
Recent advances in artificial intelligence (AI) have created a step change in how to measure poverty and other human development indicators. Our team has used a type of AI known as a deep convolutional neural network (DCNN) to study satellite imagery and identify some types of poverty with a level of accuracy close to that of household surveys.
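The pipeline the excerpt describes, convolving satellite tiles into learned features and mapping those to a poverty estimate, can be sketched in a few lines of NumPy. Everything here is hypothetical and untrained (random filters, a random linear head, a made-up 32×32 single-band tile); it only illustrates the shape of a DCNN-style feature extractor, not the team's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid-mode 2D convolution followed by ReLU: one feature map per kernel."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, kern in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)  # ReLU non-linearity

# Hypothetical 32x32 single-band satellite tile and random (untrained) weights
tile = rng.random((32, 32))
kernels = rng.standard_normal((8, 3, 3))         # 8 learned 3x3 filters
feats = conv2d(tile, kernels).mean(axis=(1, 2))  # global average pooling -> 8 features
w, b = rng.standard_normal(8), 0.0
wealth_score = float(feats @ w + b)              # linear head: scalar wealth-index estimate
print(feats.shape, wealth_score)
```

A production model would stack many such layers and train the filters against survey-derived wealth labels; the survey comparison mentioned above is what calibrates how close these estimates come to household data.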
E.U. reaches deal on landmark AI bill, racing ahead of U.S. — from washingtonpost.com by Anthony Faiola, Cat Zakrzewski and Beatriz Ríos (behind paywall)
The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.
European Union officials reached a landmark deal Friday on the world’s most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.
Technology is all about solving big thorny problems. Yet one of the hardest things about solving hard problems is knowing where to focus our efforts. There are so many urgent issues facing the world. Where should we even begin? So we asked dozens of people to identify what problem at the intersection of technology and society that they think we should focus more of our energy on. We queried scientists, journalists, politicians, entrepreneurs, activists, and CEOs.
Some broad themes emerged: the climate crisis, global health, creating a just and equitable society, and AI all came up frequently. There were plenty of outliers, too, ranging from regulating social media to fighting corruption.
New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.
Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.
“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.
The students, he wrote, found the writing “soulless.”
In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?
To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.
“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”
7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.
The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.
We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.
I’ve written and spoken about this before but the rise of deepfakes is going to have a profound impact on courts throughout the world. This week we saw three major deepfake stories.
Whether you are a lawyer or not, this topic will impact you. So, please consider these questions, as we will need answers for each one very soon (if not now).
How will we establish a reliable and consistent standard to authenticate digital evidence as genuine and not altered by deepfake technology?
Will the introduction of deepfakes shift the traditional burdens of proof or production, especially when digital evidence is introduced?
Will courts require expert witnesses for digital evidence authentication in every case, and what standards will be used to qualify these experts?
Are there existing technological tools or methods to detect deepfakes? (Yes, there are, but they are not 100% accurate.) How can courts keep abreast of rapidly advancing technology?
…plus several more questions
From DSC: What are law schools doing about this? Are they addressing this?
And speaking of legal matters and law schools, this might be interesting or helpful to someone out there:
Take ownership of the file.
When you enter #biglaw, you’ll hear this constantly.
Student Use Cases for AI: Start by Sharing These Guidelines with Your Class — from hbsp.harvard.edu by Ethan Mollick and Lilach Mollick
To help you explore some of the ways students can use this disruptive new technology to improve their learning—while making your job easier and more effective—we’ve written a series of articles that examine the following student use cases:
Earlier this week, CETL and AIG hosted a discussion among UM faculty and other instructors about teaching and AI this fall semester. We wanted to know what was working when it came to policies and assignments that responded to generative AI technologies like ChatGPT, Google Bard, Midjourney, DALL-E, and more. We were also interested in hearing what wasn’t working, as well as questions and concerns that the university community had about teaching and AI.
Then, in class he put them into groups where they worked together to generate a 500-word essay on “Why I Write” entirely through ChatGPT. Each group had complete freedom in how they chose to use the tool. The key: They were asked to evaluate their essay on how well it offered a personal perspective and demonstrated a critical reading of the piece. Weiss also graded each ChatGPT-written essay and included an explanation of why he came up with that particular grade.
After that, the students were asked to record their observations on the experiment on the discussion board. Then they came together again as a class to discuss the experiment.
Weiss shared some of his students’ comments with me (with their approval). Here are a few:
Asked to describe the state of generative AI that they would like to see in higher education 10 years from now, panelists collaboratively constructed their preferred future.
Julie York, a computer science and media teacher at South Portland High School in Maine, was scouring the internet for discussion tools for her class when she found TeachFX. An AI tool that takes recorded audio from a classroom and turns it into data about who talked and for how long, it seemed like a cool way for York to discuss issues of data privacy, consent and bias with her students. But York soon realized that TeachFX was meant for much more.
York found that TeachFX listened to her very carefully, and generated a detailed feedback report on her specific teaching style. York was hooked, in part because she says her school administration simply doesn’t have the time to observe teachers while tending to several other pressing concerns.
“I rarely ever get feedback on my teaching style. This was giving me 100 percent quantifiable data on how many questions I asked and how often I asked them in a 90-minute class,” York says. “It’s not a rubric. It’s a reflection.”
TeachFX is easy to use, York says. It’s as simple as switching on a recording device.
…
But TeachFX, she adds, is focused not on her students’ achievements, but instead on her performance as a teacher.
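The kind of report York describes, who talked, for how long, and how many questions the teacher asked, boils down to simple aggregation over a diarized transcript. The sketch below uses entirely made-up segment data and a naive question heuristic (counting "?"); it only illustrates the analytics layer, not how TeachFX itself works.

```python
from collections import defaultdict

# Hypothetical diarized transcript: (speaker, start_sec, end_sec, text)
segments = [
    ("teacher", 0.0, 12.5, "Today we'll review quadratic functions. What is a vertex?"),
    ("student", 12.5, 18.0, "It's the highest or lowest point of the parabola."),
    ("teacher", 18.0, 31.0, "Right. How would you find it from standard form?"),
    ("student", 31.0, 44.0, "You can use negative b over two a."),
]

talk_time = defaultdict(float)
questions = 0
for speaker, start, end, text in segments:
    talk_time[speaker] += end - start          # accumulate seconds per speaker
    if speaker == "teacher":
        questions += text.count("?")           # naive question count

total = sum(talk_time.values())
teacher_share = talk_time["teacher"] / total   # fraction of class time teacher spoke
print(f"teacher talk share: {teacher_share:.0%}, questions asked: {questions}")
```

The hard part in a real product is upstream, speech-to-text and speaker diarization; once segments exist, the reflection metrics are straightforward arithmetic like this.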
ChatGPT Is Landing Kids in the Principal’s Office, Survey Finds — from the74million.org by Mark Keierleber
While educators worry that students are using generative AI to cheat, a new report finds students are turning to the tool more for personal problems.
Indeed, 58% of students, and 72% of those in special education, said they’ve used generative AI during the 2022-23 academic year, just not primarily for the reasons that teachers fear most. Among youth who completed the nationally representative survey, just 23% said they used it for academic purposes and 19% said they’ve used the tools to help them write and submit a paper. Instead, 29% reported having used it to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.
Part of the disconnect dividing teachers and students, researchers found, may come down to gray areas. Just 40% of parents said they or their child were given guidance on ways they can use generative AI without running afoul of school rules. Only 24% of teachers say they’ve been trained on how to respond if they suspect a student used generative AI to cheat.
The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.
Thanks to generative AI, such visions are transitioning from fiction to reality.
Video: Unleashing the Power of AI in L&D — from drphilippahardman.substack.com by Dr. Philippa Hardman
An exclusive video walkthrough of my keynote at Sweden’s national L&D conference this week
Highlights
The wicked problem of L&D: last year, $371 billion was spent on workplace training globally, but only 12% of employees apply what they learn in the workplace
An innovative approach to L&D: when Mastery Learning is used to design & deliver workplace training, the rate of “transfer” (i.e. behaviour change & application) is 67%
AI 101: quick summary of classification, generative and interactive AI and its uses in L&D
The impact of AI: my initial research shows that AI has the potential to scale Mastery Learning and, in the process:
reduce the “time to training design” by 94% > faster
reduce the cost of training design by 92% > cheaper
increase the quality of learning design & delivery by 96% > better
Research also shows that the vast majority of workplaces are using AI only to “oil the machine” rather than innovate and improve our processes & practices
Practical tips: how to get started on your AI journey in your company, and a glimpse of what L&D roles might look like in a post-AI world
With the advancements in Artificial Intelligence (AI), designers now have access to a wide array of free AI-powered tools that streamline their creative process, enhance productivity, and add a touch of uniqueness to their designs. In this article, we will explore ten such free AI tools websites for graphic designing that have revolutionized the way designers approach their craft.
Generative Art in Motion — from heatherbcooper.substack.com by Heather Cooper
Animation and video tools create an explosion of creative expression
Google will soon require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.
AI-generated election ads on YouTube and other Google platforms that alter people or events must include a clear disclaimer located somewhere that users are likely to notice, the company said in an update this week to its political content policy.
For many individuals stepping back into society after incarceration, finding a stable place to call home can be complicated. The reality is that those who have been previously incarcerated are almost 10 times more likely to face homelessness compared to the general public. With over 725,000 people leaving state and federal prisons each year, the quest for housing becomes not only a personal challenge but a broader societal concern. Stable housing is crucial for successful reintegration, providing a foundation for building a new chapter in life. In this article, we’ll shed light on the challenges and offer empowering resources for those on their journey to find housing after prison.
Table of Contents
Understanding the Housing Landscape
Utilizing Support Services
Creating a Housing Plan
Securing and Maintaining Housing
Continuing Personal Growth and Reintegration
Conclusion
From DSC: I’m posting this in the hopes that this information may help someone out there. Also, my dad used to donate some of his time in retirement to an agency that helped people find housing. He mentioned numerous times how important it was for someone to have a safe place to stay that they could call their own.
So, as educators, mentors, and guides to our future generations, we must ask ourselves three pivotal questions:
What value do we offer to our students?
What value will they need to offer to the world?
How are we preparing them to offer that value?
The answers to these questions are crucial, and they will redefine the trajectory of our education system.
We need to create an environment that encourages curiosity, embraces failure as a learning opportunity, and celebrates diversity. We need to teach our students how to learn, how to ask the right questions, and how to think for themselves.
Leveraging ChatGPT for learning is the most meaningful skill this year for lifelong learners. But it’s too hard to find resources to master it.
As a learning science nerd, I’ve explored hundreds of prompts over the past months. Most of the advice doesn’t go beyond text summaries and multiple-choice testing.
That’s why I’ve created this article — it merges learning science with prompt writing to help you learn anything faster.
Midjourney AI Art for Teachers (for any kind of teacher, not just Art Teachers) — from The AI Educator on YouTube by Dan Fitzpatrick
From DSC: This is a very nice, clearly illustrated, free video to get started with the Midjourney (text-to-image) app. Nice work Dan!
In the new-normal of generative AI, how does one articulate the value of academic integrity? This blog presents my current response in about 2,500 words; a complete answer could fill a sizable book.
Massive amounts of misinformation are disseminated about generative AI, so the first part of my discussion clarifies what large language models (Chat-GPT and its counterparts) can currently do and what they cannot accomplish at this point in time. The second part describes ways in which generative AI can be misused as a means of learning; unfortunately, many people are now advocating for these mistaken applications to education. The third part describes ways in which large language models (LLM), used well, may substantially improve learning and education. I close with a plea for a robust, informed public discussion about these topics and issues.
Many of the more than a dozen teachers TIME interviewed for this story argue that the way to get kids to care is to proactively use ChatGPT in the classroom.
…
Some of those creative ideas are already in effect at Peninsula High School in Gig Harbor, about an hour from Seattle. In Erin Rossing’s precalculus class, a student got ChatGPT to generate a rap about vectors and trigonometry in the style of Kanye West, while geometry students used the program to write mathematical proofs in the style of raps, which they performed in a classroom competition. In Kara Beloate’s English-Language Arts class, she allowed students reading Shakespeare’s Othello to use ChatGPT to translate lines into modern English to help them understand the text, so that they could spend class time discussing the plot and themes.
I found that other developed countries share concerns about students cheating but are moving quickly to use AI to personalize education, enhance language lessons and help teachers with mundane tasks, such as grading. Some of these countries are in the early stages of training teachers to use AI and developing curriculum standards for what students should know and be able to do with the technology.
Several countries began positioning themselves several years ago to invest in AI in education in order to compete in the fourth industrial revolution.
AI in Education — from educationnext.org by John Bailey
The leap into a new era of machine intelligence carries risks and challenges, but also plenty of promise
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
Generative AI and education futures — from ucl.ac.uk Video highlights from Professor Mike Sharples’ keynote address at the 2023 UCL Education Conference, which explored opportunities to prosper with AI as a part of education.
Bringing AI Literacy to High Schools — by Nikki Goth Itoi
Stanford education researchers collaborated with teachers to develop classroom-ready AI resources for high school instructors across subject areas.
To address these two imperatives, all high schools need access to basic AI tools and training. Yet the reality is that many underserved schools in low-income areas lack the bandwidth, skills, and confidence to guide their students through an AI-powered world. And if the pattern continues, AI will only worsen existing inequities. With this concern top of mind plus initial funding from the McCoy Ethics Center, Lee began recruiting some graduate students and high school teachers to explore how to give more people equal footing in the AI space.
Excerpts from the Too Long Didn’t Read (TLDR) section from AIxEducation Day 1: My Takeaways — from stefanbauschard.substack.com by Stefan Bauschard (emphasis DSC)
* There was a lot of talk about learning bots. This talk included the benefits of 1:1 tutoring; access to education for those who don’t currently have it (developing world); the ability to do things for which we currently don’t have enough teachers and support staff (speech pathology); individualized instruction (it will be good at this soon); and things it is already good at (24/7 availability, language tutoring, immediate feedback on argumentation and genre (not facts :), putting students on the right track, comprehensive feedback, more critical feedback).
* Students are united. The student organizers and those who spoke at the conference have concerns about future employment, want to learn to use generative AI, and express concern about being prepared for the “real world.” They also all want a say in how generative AI is used in the college classroom. Many professors spoke about the importance of having conversations with students and involving them in the creation of AI policies as well.
* I think it’s fair to say that all professors who spoke thought students were going to use generative AI regardless of whether or not it was permitted, though some hoped for honesty.
* No professor who spoke thought using a plagiarism detector was a good idea.
* Everyone thought that significant advancements in AI technology were inevitable.
* Almost everyone expressed being overwhelmed by the rate of change.
For Wiese, it was all a big, expensive gamble — and, in one form or another, is one millions of people with criminal records take every year as they pursue education and workforce training on their way to jobs that require a license. Yet that effort might be wasted thanks to the nearly 14,000 laws and regulations that can restrict individuals with arrest and conviction histories from getting licensed in a given field.
July 28 (Reuters) – A week after The University of Michigan Law School banned the use of popular artificial intelligence tools like ChatGPT on student applications, at least one school is going in the other direction.
The Sandra Day O’Connor College of Law at Arizona State University said on Thursday that prospective students are explicitly allowed to use generative artificial intelligence tools to help draft their applications.
Are we on the frontier of unveiling an unseen revolution in education? The hypothesis is that this quiet upheaval is far more significant than we imagine. As our world adjusts, restructures, and emerges from a year that launched an era of mass AI, a new academic year dawns for many, with hope and enthusiasm about new roles, titles, or simply a new mindset. Concealed from sight, however, I believe a significant transformative wave has started and will begin to reshape our education systems, pushing us into a new stage of innovative teaching practice whether we desire it or not. The risk, and the hope, is that this quiet revolution remains outside the purview of regulators and ministries, which could lead to a dangerous fragmentation of education policy and practice, divorced from the actualities of the world ‘in and outside school’.
“This goal can be achieved through continued support for introducing more new areas of study, such as ‘foresight and futures’, in the high school classroom.”
Four directions for assessment redesign in the age of generative AI — from timeshighereducation.com by Julia Chen
The rise of generative AI has led universities to rethink how learning is quantified. Julia Chen offers four options for assessment redesign that can be applied across disciplines
Direction 1: From written description to multimodal explanation and application
Direction 2: From literature review alone to referencing lectures
Direction 3: From presentation of ideas to defence of views
Direction 4: From working alone to student-staff partnership
If you are just back from vacation and still not quite sure what to do about AI, let me assure you that you are not the only one. My advice for you today is this: fill your LinkedIn feed and/or inbox with ideas, inspirational writing, and commentary on AI. This will get you up to speed quickly and is a great way to stay informed on the newest developments you need to be aware of.
My personal recommendation for you is to check out these bright people who are all very active on LinkedIn and/or have a newsletter worth paying attention to. I have kept the list fairly short – only 15 people – in order to make it as easy as possible for you to begin exploring.
Understanding the nature of generative AI is crucial for educators to navigate the evolving landscape of teaching and learning. In a new report from the Next Level Lab, Lydia Cao and Chris Dede reflect on the role of generative AI in learning and how this pushes us to reconceptualize our visions of effective education. Though there are concerns of plagiarism and replacement of human jobs, Cao and Dede argue that a more productive way forward is for educators to focus on demystifying AI, emphasizing the learning process over the final product, honoring learner agency, orchestrating multiple sources of motivation, cultivating skills that AI cannot easily replicate, and fostering intelligence augmentation (IA) through building human-AI partnerships.
Have you used chatbots to save time this school year? ChatGPT and generative artificial intelligence (AI) have changed the way I think about instructional planning. Today on the blog, I have a selection of ChatGPT prompts for ELA teachers.
You can use chatbots to tackle tedious tasks, gather ideas, and even support your work to meet the needs of every student. In my recent quick reference guide published by ISTE and ASCD, Using AI Chatbots to Enhance Planning and Instruction, I explore this topic. You can also find 50 more prompts for educators in this free ebook.
Professors Craft Courses on ChatGPT With ChatGPT — from insidehighered.com by Lauren Coffey
While some institutions are banning the use of the new AI tool, others are leaning into its use and offering courses dedicated solely to navigating the new technology.
Maynard, along with Jules White at Vanderbilt University, is among a small number of professors launching courses focused solely on teaching students across disciplines to better navigate AI and ChatGPT.
The offerings go beyond institutions flexing their innovation skills—the faculty behind these courses view them as imperative to ensure students are prepared for ever-changing workforce needs.
That’s a solid report card for a freshman in college, a respectable 3.57 GPA. I recently finished my freshman year at Harvard, but those grades aren’t mine — they’re GPT-4’s.
…
Three weeks ago, I asked seven Harvard professors and teaching assistants to grade essays written by GPT-4 in response to a prompt assigned in their class. Most of these essays were major assignments which counted for about one-quarter to one-third of students’ grades in the class. (I’ve listed the professors or preceptors for all of these classes, but some of the essays were graded by TAs.)
Here are the prompts with links to the essays, the names of instructors, and the grades each essay received…
The impact that AI is having on liberal-arts homework is indicative of the AI threat to the career fields that liberal-arts majors tend to enter. So maybe what we should really be focused on isn’t, “How do we make liberal-arts homework better?” but rather, “What are jobs going to look like over the next 10–20 years, and how do we prepare students to succeed in that world?”
The great assessment rethink — from timeshighereducation.com
How to measure learning and protect academic integrity in the age of ChatGPT
Ever since the Supreme Court announced last year that it would rule on two cases involving affirmative action in college admissions, the world of higher education has been anxiously awaiting a decision. Most experts predicted the court would eventually forbid the use of race as a factor in admissions decisions, and colleges and advocates have been scrambling to prepare for that new world.
On Thursday, the Supreme Court met those expectations, ruling that the consideration of race in college admissions is unconstitutional.
The U.S. Supreme Court ruled Thursday that race-conscious admissions practices at Harvard University and the University of North Carolina at Chapel Hill are unconstitutional, shattering decades of legal precedent and upending the recruitment and enrollment landscape for years to come.
The Supreme Court on Thursday held that race-conscious admissions programs at Harvard and the University of North Carolina violate the Constitution’s guarantee of equal protection, a historic ruling that rolls back decades of precedent and will force a dramatic change in how the nation’s private and public universities select their students.
The U.S. Supreme Court on Thursday struck down colleges’ use of race-conscious admissions nationwide, ruling in a pair of closely watched cases that the practice is racially discriminatory.
Writing for the court’s majority, Chief Justice John G. Roberts Jr. said that policies that claim to consider an applicant’s race as one factor among many are in fact violating the equal-protection clause of the 14th Amendment to the U.S. Constitution.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
On giving AI eyes and ears — from oneusefulthing.org by Ethan Mollick
AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increases their capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI — from chronicle.com by Beth McMurtrie and Beckie Supiano
Professors scrambled to react to ChatGPT this spring — and started planning for the fall
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT capabilities—not because it can now do every task perfectly—but because it now knows how to ask for what it wants and get it.
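The loop described here, declare tools as JSON schemas, let the model pick one, run it, and feed the result back, can be sketched on the client side without any network call. The function name, schema, and the mocked `function_call` dict below are all hypothetical; in a real integration the `function_call` would come back from the chat completions API rather than being hard-coded.

```python
import json

# A tool the model may call; the schema follows the JSON Schema shape
# the function-calling API expects (names here are hypothetical).
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 22})

FUNCTIONS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]
DISPATCH = {"get_weather": get_weather}

# Mocked model response: in a real call this dict arrives in the
# assistant message when the model decides a tool is needed.
function_call = {"name": "get_weather", "arguments": '{"city": "Oslo"}'}

args = json.loads(function_call["arguments"])        # arguments arrive as a JSON string
result = DISPATCH[function_call["name"]](**args)     # run the chosen tool locally

# The tool output is appended as a function-role message and sent back
# to the model so it can compose its final answer.
followup_message = {"role": "function", "name": function_call["name"], "content": result}
print(followup_message)
```

The "child asking a parent" analogy above maps directly onto this shape: the model never executes anything itself, it just emits a structured request, and your code decides whether and how to fulfill it.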
How ChatGPT can help disrupt assessment overload — from timeshighereducation.com by David Carless
Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
43% of companies already have or plan to adopt AI interviews by 2024
Two-thirds of this group believe AI interviews will increase hiring efficiency
15% say that AI will be used to make decisions on candidates without any human input
More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com
Sam Altman, CEO & Co-Founder, OpenAI discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…