AWS, Educause partner on generative AI readiness tool — from edscoop.com by Skylar Rispens. Amazon Web Services and the nonprofit Educause announced a new tool designed to help higher education institutions gauge their readiness to adopt generative artificial intelligence.
Amazon Web Services and the nonprofit Educause on Monday announced they’ve teamed up to develop a tool that assesses how ready higher education institutions are to adopt generative artificial intelligence.
Through a series of curated questions about institutional strategy, governance, capacity and expertise, AWS and Educause claim their assessment can point to ways that operations can be improved before generative AI is adopted to support students and staff.
“Generative AI will transform how educators engage students inside and outside the classroom, with personalized education and accessible experiences that provide increased student support and drive better learning outcomes,” Kim Majerus, vice president of global education and U.S. state and local government at AWS, said in a press release. “This assessment is a practical tool to help colleges and universities prepare their institutions to maximize this technology and support students throughout their higher ed journey.”
Speaking of AI and our learning ecosystems, also see:
At a moment when the value of higher education has come under increasing scrutiny, institutions around the world can be exactly what learners and employers both need. To meet the needs of a rapidly changing job market and equip learners with the technical and ethical direction needed to thrive, institutions should familiarize students with the use of AI and nurture the innately human skills needed to apply it ethically. Failing to do so can create enormous risk for higher education, business and society.
What is AI literacy?
To effectively utilize generative AI, learners will need to grasp the appropriate use cases for these tools, understand when their use presents significant downside risk, and learn to recognize abuse to separate fact from fiction. AI literacy is a deeply human capacity. The critical thinking and communication skills required are muscles that need repeated training to be developed and maintained.
Assessment of Student Learning Is Broken — from insidehighered.com by Zach Justus and Nik Janos. And generative AI is the thing that broke it, Zach Justus and Nik Janos write.
Generative artificial intelligence (AI) has broken higher education assessment. This has implications from the classroom to institutional accreditation. We are advocating for a one-year pause on assessment requirements from institutions and accreditation bodies.
… Implications and Options
The data we are collecting right now are literally worthless. These same trends implicate all data gathered from December 2022 through the present. So, for instance, if you are conducting a five-year program review for institutional accreditation you should separate the data from before the fall 2022 term and evaluate it independently. Whether you are evaluating writing, STEM outputs, coding, or anything else, you are now looking at some combination of student/AI work. This will get even more confounding as AI tools become more powerful and are integrated into our existing production platforms like Microsoft Office and Google Workspace.
The burden of adapting to artificial intelligence has fallen to faculty, but we are not positioned or equipped to lead these conversations across stakeholder groups.
What about course videos? Professors can create them (by lecturing into a camera for several hours, hopefully in different clothes) from the readings, from their interpretations of the readings, from their own case experiences – from anything they like. But now professors can direct the creation of the videos by talking – actually describing – to a CustomGPT what they’d like the video to communicate, with their own or another image. Wait. What? They can make a video by talking to a CustomGPT and even select the image they want the “actor” to use? Yes. They can also add a British accent and insert some (GenAI-developed) jokes into the videos if they like. All this and much more is now possible. This means that a professor can specify how long the video should be, what sources should be consulted, and describe the demeanor the professor wants the video to project.
From DSC: Though I wasn’t crazy about the clickbait type of title here, I still thought that the article was solid and thought-provoking. It contained several good ideas for using AI.
Excerpt from a recent EdSurge Higher Ed newsletter:
There are darker metaphors though — ones that focus on the hazards for humanity of the tech. Some professors worry that AI bots are simply replacing hired essay-writers for many students, doing work for a student that they can then pass off as their own (and doing it for free).
From DSC: Hmmm…the use of essay writers was around long before AI became mainstream within higher education. So we already had a serious problem where students didn’t see the why in what they were being asked to do. Some students still aren’t sold on the why of the work in the first place. The situation seems to involve ethics, yes, but it also seems to say that we haven’t sold students on the benefits of putting in the work. Students seem to be saying I don’t care about this stuff…I just need the degree so I can exit stage left.
My main point: The issue didn’t start with AI…it started long before that.
This financial stagnation is occurring as we face a multitude of escalating challenges. These challenges include, but are in no way limited to, chronic absenteeism, widespread student mental health issues, critical staff shortages, rampant classroom behavior issues, a palpable sense of apathy for education in students, and even, I dare say, hatred towards education among parents and policymakers.
…
Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.
What is Ed? An easy-to-understand learning platform designed by Los Angeles Unified to increase student achievement. It offers personalized guidance and resources to students and families 24/7 in over 100 languages.
Also relevant/see:
Los Angeles Unified Bets Big on ‘Ed,’ an AI Tool for Students — by Lauraine Langreo
The Los Angeles Unified School District has launched an AI-powered learning tool that will serve as a “personal assistant” to students and their parents. The tool, named “Ed,” can provide students from the nation’s second-largest district information about their grades, attendance, upcoming tests, and suggested resources to help them improve their academic skills on their own time, Superintendent Alberto Carvalho announced March 20. Students can also use the app to find social-emotional-learning resources, see what’s for lunch, and determine when their bus will arrive.
Could OpenAI’s Sora be a big deal for elementary school kids? — from futureofbeinghuman.com by Andrew Maynard. Despite all the challenges it comes with, AI-generated video could unleash the creativity of young children and provide insights into their inner worlds – if it’s developed and used responsibly
Like many others, I’m concerned about the challenges that come with hyper-realistic AI-generated video. From deep fakes and disinformation to blurring the lines between fact and fiction, generative AI video is calling into question what we can trust, and what we cannot.
And yet despite all the issues the technology is raising, it also holds quite incredible potential, including as a learning and development tool — as long as we develop and use it responsibly.
I was reminded of this a few days back while watching the latest videos from OpenAI created by their AI video engine Sora — including the one below, generated from the prompt “an elephant made of leaves running in the jungle.”
…
What struck me while watching this — perhaps more than any of the other videos OpenAI has been posting on its TikTok channel — is the potential Sora has for translating the incredibly creative but often hard to articulate ideas someone may have in their head, into something others can experience.
Can AI Aid the Early Education Workforce? — from edsurge.com by Emily Tate Sullivan. During a panel at SXSW EDU 2024, early education leaders discussed the potential of AI to support and empower the adults who help our nation’s youngest children.
While the vast majority of the conversations about AI in education have centered on K-12 and higher education, few have considered the potential of this innovation in early care and education settings.
At the conference, a panel of early education leaders gathered to do just that, in a session exploring the potential of AI to support and empower the adults who help our nation’s youngest children, titled, “ChatECE: How AI Could Aid the Early Educator Workforce.”
Hau shared that K-12 educators are using the technology to improve efficiency in a number of ways, including to draft individualized education programs (IEPs), create templates for communicating with parents and administrators, and in some cases, to support building lesson plans.
Educators are, perhaps rightfully so, cautious about incorporating AI in their classrooms. With thoughtful implementation, however, AI image generators, with their ability to use any language, can provide powerful ways for students to engage with the target language and increase their proficiency.
While AI offers numerous benefits, it’s crucial to remember that it is a tool to empower educators, not replace them. The human connection between teacher and student remains central to fostering creativity, critical thinking, and social-emotional development. The role of teachers will shift towards becoming facilitators, curators, and mentors who guide students through personalized learning journeys. By harnessing the power of AI, educators can create dynamic and effective classrooms that cater to each student’s individual needs. This paves the way for a more engaging and enriching learning experience that empowers students to thrive.
In this article, seven teachers across the world share their insights on AI tools for educators. You will hear a host of varied opinions and perspectives on everything from whether AI could hasten the decline of learning foreign languages to whether AI-generated lesson plans are an infringement on teachers’ rights. A common theme emerged from those we spoke with: just as the internet changed education, AI tools are here to stay, and it is prudent for teachers to adapt.
Even though it’s been more than a year since ChatGPT made a big splash in the K-12 world, many teachers say they are still not receiving any training on using artificial intelligence tools in the classroom.
More than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.
From DSC: This article mentioned the following resource:
What if, for example, the corporate learning system knew who you were and you could simply ask it a question and it would generate an answer, a series of resources, and a dynamic set of learning objects for you to consume? In some cases you’ll take the answer and run. In other cases you’ll pore through the content. And in other cases you’ll browse through the course and take the time to learn what you need.
And suppose all this happened in a totally personalized way. So you didn’t see a “standard course” but a special course based on your level of existing knowledge?
This is what AI is going to bring us. And yes, it’s already happening today.
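The personalization Bersin describes can be sketched in a few lines. The topic names, proficiency thresholds, and resource catalog below are all hypothetical, invented purely to illustrate the routing idea, not any vendor's actual platform:

```python
# Illustrative sketch: route a learner's question to a quick answer, curated
# readings, or a full course module based on their recorded proficiency.
# All data, topic names, and thresholds here are hypothetical.

RESOURCES = {
    "sql-joins": {
        "quick_answer": "Use a LEFT JOIN to keep unmatched rows from the left table.",
        "readings": ["Intro to JOIN types", "JOIN performance pitfalls"],
        "course_module": "Relational Databases, Unit 3",
    }
}

def personalize(topic: str, proficiency: float) -> dict:
    """Return a response tailored to proficiency (0.0 = novice, 1.0 = expert)."""
    entry = RESOURCES[topic]
    if proficiency >= 0.8:
        # Expert: take the answer and run.
        return {"answer": entry["quick_answer"]}
    if proficiency >= 0.4:
        # Intermediate: answer plus curated readings to pore through.
        return {"answer": entry["quick_answer"], "readings": entry["readings"]}
    # Novice: also point to the structured course module.
    return {
        "answer": entry["quick_answer"],
        "readings": entry["readings"],
        "course_module": entry["course_module"],
    }
```

The same question thus yields three different experiences, which is the core of the “no standard course” idea: the system shapes its response around existing knowledge rather than serving identical content to everyone.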
One problem, of course, is that it’s prohibitively expensive to hire a tutor for every average or struggling student, or even one for every two or three of them. This was the two-sigma “problem” that Bloom alluded to in the title of his essay: how can the massive benefits of tutoring possibly be scaled up? Both Khan and Zuckerberg have argued that the answer is to have computers, maybe powered by artificial intelligence, serve as tutors instead of humans.
From DSC: I’m hoping that AI-backed learning platforms WILL help many people of all ages and backgrounds. But I realize — and appreciate what Natalie is saying here as well — that human beings are needed in the learning process (especially at younger ages).
But without the human element, that’s unlikely to be enough. Students are more likely to work hard to please a teacher than to please a computer.
Natalie goes on to talk about training all teachers in cognitive science — a solid idea for sure. That’s what I was trying to get at with this graphic:
But I’m not as hopeful in all teachers getting trained in cognitive science…as it should have happened (in the Schools of Education and in the K12 learning ecosystem at large) by now. Perhaps it will happen, given enough time.
And with more homeschooling and blended programs of education occurring, that idea gets stretched even further.
Parents are looking for a different kind of education for their children. A 2024 poll of parents reveals that 72% are considering, 63% are searching for, and 44% have selected a new K-12 school option for their children over the past few years. So, what type of education are they seeking?
Additional polling data reveals that 49% of parents would prefer their child learn from home at least one day a week. While 10% want full-time homeschooling, the remaining 39% of parents desire their child to learn at home one to four days a week, with the remaining days attending school on-campus. Another parent poll released this month indicates that an astonishing 64% of parents indicated that if they were looking for a new school for their child, they would enroll him or her in a hybrid school.
For over a year, GPT-4 was the dominant AI model, clearly much smarter than any of the other LLM systems available. That situation has changed in the last month; there are now three GPT-4 class models, all powering their own chatbots: GPT-4 (accessible through ChatGPT Plus or Microsoft’s CoPilot), Anthropic’s Claude 3 Opus, and Google’s Gemini Advanced.
… Where we stand
We are in a brief period in the AI era where there are now multiple leading models, but none has yet definitively beaten the GPT-4 benchmark set over a year ago. While this may represent a plateau in AI abilities, I believe this is likely to change in the coming months as, at some point, models like GPT-5 and Gemini 2.0 will be released. In the meantime, you should be using a GPT-4 class model and using it often enough to learn what it does well. You can’t go wrong with any of them, pick a favorite and use it…
From DSC: Here’s a powerful quote from Ethan:
In fact, in my new book I postulate that you haven’t really experienced AI until you have had three sleepless nights of existential anxiety, after which you can start to be productive again.
For us, I think the biggest promise of AI tools like Sora — that can create video with ease — is that they lower the cost of immersive educational experiences. This increases the availability of these experiences, expanding their reach to student populations who wouldn’t otherwise have them, whether due to time, distance, or expense.
Consider the profound impact on a history class, where students are transported to California during the gold rush through hyperrealistic video sequences. This vivifies the historical content and cultivates a deeper connection with the material.
In fact, OpenAI has already demonstrated the promise of this sort of use case, with a very simple prompt producing impressive results…
Take this scenario. A student misses a class and, within twenty minutes, receives a series of texts and even a voicemail from a very concerned and empathic-sounding voice wanting to know what’s going on. Of course, the text is entirely generated, and the voice is synthetic as well, but the student likely doesn’t know this. To them, communication isn’t something as easy to miss or brush off as an email. It sounds like someone who cares is talking to them.
But let’s say that isn’t enough. By that evening, the student still hasn’t logged into their email or checked the LMS. The AI’s strategic reasoning communicates with the predictive AI and analyzes the pattern of behavior against students who succeed or fail vs. students who are ill. The AI tracks the student’s movements on campus, monitors their social media usage, and deduces the student isn’t ill and is blowing off class.
The AI agent resumes communication with the student. But this time, the strategic AI adopts a different persona, not the kind and empathetic persona used for the initial contact, but a stern, matter-of-fact one. The student’s phone buzzes with alerts that talk about scholarships being lost, teachers being notified, etc. The AI anticipates the excuses the student will use and presents evidence tracking the student’s behavior to show they are not sick.
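The escalation logic in this scenario can be reduced to a small set of rules. The signal names, rules, and message templates below are invented for illustration; a real system would draw on far richer (and far more ethically fraught) data, as the scenario makes clear:

```python
# Hypothetical sketch of the persona-switching logic described above:
# a rule-based agent picks an outreach persona from observed student signals.
# Signal names, rules, and templates are all invented for illustration.

from dataclasses import dataclass

@dataclass
class StudentSignals:
    missed_class: bool
    checked_lms: bool       # has the student engaged online today?
    active_on_campus: bool  # e.g., inferred from campus presence data

def choose_persona(signals: StudentSignals) -> str:
    if not signals.missed_class:
        return "none"        # no outreach needed
    if not signals.active_on_campus:
        return "empathetic"  # student may be ill; gentle check-in
    if signals.checked_lms:
        return "empathetic"  # absent but engaged; assume good faith
    return "stern"           # on campus, ignoring class and the LMS

def draft_message(persona: str) -> str:
    templates = {
        "none": "",
        "empathetic": "We missed you in class today. Is everything okay?",
        "stern": "Repeated absences put your scholarship at risk.",
    }
    return templates[persona]
```

Even this toy version shows why the scenario is unsettling: the persona switch hinges entirely on surveillance-derived signals, and the student has no way of knowing which rules tripped.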
Not so much focused on learning ecosystems, but still worth mentioning:
NVIDIA Digital Human Technologies Bring AI Characters to Life
Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications
Today is the beginning of our moonshot to solve embodied AGI in the physical world. I’m so excited to announce Project GR00T, our new initiative to create a general-purpose foundation model for humanoid robot learning.
These librarians, entrepreneurs, lawyers and technologists built the world where artificial intelligence threatens to upend life and law as we know it – and are now at the forefront of the battles raging within.
… To create this first-of-its-kind guide, we cast a wide net with dozens of leaders in this area, took submissions, and consulted with some of the most esteemed gurus in legal tech. We also researched the cases most likely to have the biggest impact on AI, unearthing the dozen or so top trial lawyers tapped to lead the battles. Many of them bring copyright or IP backgrounds, and more than a few are Bay Area based. Those denoted with an asterisk are members of our Hall of Fame.
descrybe.ai, a year-old legal research startup focused on using artificial intelligence to provide free and easy access to court opinions, has completed its goal of creating AI-generated summaries of all available state supreme and appellate court opinions from throughout the United States.
descrybe.ai describes its mission as democratizing access to legal information and leveling the playing field in legal research, particularly for smaller-firm lawyers, journalists, and members of the public.
As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme. … 2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch. As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.
With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.
Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.
Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.
The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?
…
The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.
In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.
A new study from the University of Tokyo has highlighted the positive effect that immersive virtual reality experiences have on depression anti-stigma and knowledge interventions compared to traditional video.
…
The study found that depression knowledge improved for both interventions; however, only the immersive VR intervention reduced stigma. The VR-powered intervention saw depression knowledge scores positively associated with a neural response in the brain that is indicative of empathetic concern. The traditional video intervention saw the inverse, with participants demonstrating a brain response that suggests a distress-related reaction.
From DSC: This study makes me wonder why we haven’t heard of more VR-based uses in diversity training. I’m surprised we haven’t heard of situations where we are put in someone else’s moccasins, so to speak. We could have a lot more empathy for someone — and better understand their situation — if we were to experience life as others might experience it. In the process, we would likely uncover some hidden biases that we have.
Last month, I discussed a GPT that I had created around enhancing prompts. Since then, I have been actively using my Prompt Enhancer GPT to get much more effective outputs. Last week, I did a series of mini-talks on generative AI in different parts of higher education (faculty development, human resources, grants, executive leadership, etc.) and structured each as “5 tips.” I included a final bonus tip in all of them — a tip that many told me afterwards was probably the most useful, especially because you can only access the Prompt Enhancer GPT if you are paying for ChatGPT.
Effectively integrating generative AI into higher education requires policy development, cross-functional engagement, ethical principles, risk assessments, collaboration with other institutions, and an exploration of diverse use cases.
Creating Guidelines for the Use of Gen AI Across Campus — from campustechnology.com by Rhea Kelly. The University of Kentucky has taken a transdisciplinary approach to developing guidelines and recommendations around generative AI, incorporating input from stakeholders across all areas of the institution. Here, the director of UK’s Center for the Enhancement of Learning and Teaching breaks down the structure and thinking behind that process.
That resulted in a set of instructional guidelines that we released in August of 2023 and updated in December of 2023. We’re also looking at guidelines for researchers at UK, and we’re currently in the process of working with our colleagues in the healthcare enterprise, UK Healthcare, to comb through the additional complexities of this technology in clinical care and to offer guidance and recommendations around those issues.
My experiences match with the results of the above studies. The second study cited above found that 83% of those students who haven’t used AI tools are “not interested in using them,” so it is no surprise that many students have little awareness of their nature. The third study cited above found that, “apart from 12% of students identifying as daily users,” most students’ use cases were “relatively unsophisticated” like summarizing or paraphrasing text.
For those of us in the AI-curious bubble, we need to continually work to stay current, but we also need to recognize that what we take to be “common knowledge” is far from common outside of the bubble.
Despite general familiarity, however, technical knowledge shouldn’t be assumed for district leaders or others in the school community. For instance, it’s critical that any materials related to AI not be written in “techy talk” so they can be clearly understood, said Ann McMullan, project director for the Consortium for School Networking’s EmpowerED Superintendents Initiative.
To that end, CoSN, a nonprofit that promotes technological innovation in K-12, has released an array of AI resources to help superintendents stay ahead of the curve, including a one-page explainer that details definitions and guidelines to keep in mind as schools work with the emerging technology.
“The world is adapting to seismic shifts from generative AI,” says Luben Pampoulov, Partner at GSV Ventures. “AI co-pilots, AI tutors, AI content generators—AI is ubiquitous, and differentiation is increasingly critical. This is an impressive group of EdTech companies that are leveraging AI and driving positive outcomes for learners and society.”
Workforce Learning comprises 34% of the list, K-12 29%, Higher Education 24%, Adult Consumer Learning 10%, and Early Childhood 3%. Additionally, 21% of the companies stretch across two or more “Pre-K to Gray” categories. A broader move towards profitability is also evident: the collective gross and EBITDA margin score of the 2024 cohort increased 5% compared to 2023.
Selected from 2,000+ companies around the world based on revenue scale, revenue growth, user reach, geographic diversification, and margin profile, this impressive group is reaching an estimated 3 billion people and generating an estimated $23 billion in revenue.
Students at the University of Nottingham will be learning through a dedicated VR classroom, enabling remote viewing and teaching for students and lecturers.
Based in the university’s Engineering Science and Learning Centre (ELSC), the classroom, believed to be the first dedicated VR classroom in the UK, uses 40 VR headsets, 35 of which are tethered overhead to individual PCs, with five available as traditional, desk-based systems with display screens.
I admit that I was excited to see this article and I congratulate the University of Nottingham on their vision here. I hope that they can introduce more use cases and applications to provide evidence of VR’s headway.
As I look at virtual reality…
On the plus side, I’ve spoken with people who love to use their VR-based headsets for fun workouts/exercises. I’ve witnessed the sweat, so I know that’s true. And I believe there is value in having the ability to walk through museums that one can’t afford to get to. And I’m sure that the gamers have found some incredibly entertaining competitions out there. The experience of being immersed can be highly engaging. So there are some niche use cases for sure.
But on the negative side, the technologies surrounding VR haven’t progressed as much as I thought they would have by now. For example, I’m disappointed Apple’s taken so long to put a product out there, and I don’t want to invest $3500 in their new product. From the reviews and items on social media that I’ve seen, the reception is lukewarm. At the most basic level, I’m not sure people want to wear a headset for more than a few minutes.
So overall, I’d like to see more use cases and less nausea.