…you will see that they outline which skills you should consider mastering in 2025 if you want to stay on top of the latest career opportunities. They then list more information about the skills, how you apply the skills, and WHERE to get those skills.
I assert that in the future, people will be able to see this information on a 24x7x365 basis.
Which jobs are in demand?
What skills do I need to do those jobs?
WHERE do I get/develop those skills?
And that last part (about the WHERE do I develop those skills) will pull from many different institutions, people, companies, etc.
BUT PEOPLE are the key! Oftentimes, we need to — and prefer to — learn with others!
I wish I could write that the last two years have made me more confident, more self-assured that AI is here to augment workers rather than replace them.
But I can’t.
I wish I could write that I know where schools and colleges will end up. I wish I could say that AI Agents will help us get where we need to be.
But I can’t.
At this point, today, I’m at a loss. I’m not sure where the rise of AI agents will take us, in terms of how we work and learn. I’m in the question-asking part of my journey. I have few answers.
So, let’s talk about where (I think) AI Agents will take education. And who knows? Maybe as I write I’ll come up with something more concrete.
It’s worth a shot, right?
From DSC: I completely agree with Jason’s following assertion:
A good portion of AI advancement will come down to employee replacement. And AI Agents push companies towards that.
THAT’s where/what the ROI will be for corporations. They will recoup their investments in the headcount area, and likely in other areas as well (product design, marketing campaigns, engineering-related items, and more). But how much time it takes to get there is a big question mark.
One last quote here…it’s too good not to include:
Behind these questions lies a more abstract, more philosophical one: what is the relationship between thinking and doing in a world of AI Agents and other kinds of automation?
By examining models across three AI families—Claude, ChatGPT, and Gemini—I’ve started to identify each model’s strengths, limitations, and typical pitfalls.
Spoiler: my findings underscore that until we have specialised, fine-tuned AI copilots for instructional design, we should be cautious about relying on general-purpose models and ensure expert oversight in all ID tasks.
From DSC — I’m going to (have Nick) say this again:
I simply asked my students to use AI to brainstorm their own learning objectives. No restrictions. No predetermined pathways. Just pure exploration. The results? Astonishing.
Students began mapping out research directions I’d never considered. They created dialogue spaces with AI that looked more like intellectual partnerships than simple query-response patterns.
Google Workspace for Education admins can now turn on the Gemini app with added data protection as an additional service for their teen users (ages 13+ or the applicable age in your country) in the following languages and countries. With added data protection, chats are not reviewed by human reviewers or otherwise used to improve AI models. The Gemini app will be a core service in the coming weeks for Education Standard and Plus users, including teens.
Recently, I spoke with several teachers regarding their primary questions and reflections on using AI in teaching and learning. Their thought-provoking responses challenge us to consider not only what AI can do but what it means for meaningful and equitable learning environments. Keeping in mind these reflections, we can better understand how we move forward toward meaningful AI integration in education.
We’re introducing FrontierMath, a benchmark of hundreds of original, expert-crafted mathematics problems designed to evaluate advanced reasoning capabilities in AI systems. These problems span major branches of modern mathematics—from computational number theory to abstract algebraic geometry—and typically require hours or days for expert mathematicians to solve.
The demand for artificial intelligence courses in UK universities has surged dramatically over the past five years, with enrollments increasing by 453%, according to a recent study by Currys, a UK tech retailer.
The study, which analyzed UK university admissions data and surveyed current students and recent graduates, reveals how the growing influence of AI is shaping students’ educational choices and career paths.
This growth reflects the broader trend of AI integration across industries, creating new opportunities while transforming traditional roles. With AI’s influence on career prospects rising, students and graduates are increasingly drawn to AI-related courses to stay competitive in a rapidly changing job market.
Is Generative AI and ChatGPT healthy for Students? — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky Beyond Text Generation: How AI Ignites Student Discovery and Deep Thinking, according to firsthand experiences of Teachers and AI researchers like Nick Potkalitsky.
After two years of intensive experimentation with AI in education, I am witnessing something amazing unfolding before my eyes. While much of the world fixates on AI’s generative capabilities—its ability to create essays, stories, and code—my students have discovered something far more powerful: exploratory AI, a dynamic partner in investigation and critique that’s transforming how they think.
…
They’ve moved beyond the initial fascination with AI-generated content to something far more sophisticated: using AI as an exploratory tool for investigation, interrogation, and intellectual discovery.
…
Instead of the much-feared “shutdown” of critical thinking, we’re witnessing something extraordinary: the emergence of what I call “generative thinking”—a dynamic process where students learn to expand, reshape, and evolve their ideas through meaningful exploration with AI tools. Here I consciously reposition the term “generative” as a process of human origination, although one ultimately spurred on by machine input.
A Road Map for Leveraging AI at a Smaller Institution — from er.educause.edu by Dave Weil and Jill Forrester Smaller institutions and others may not have the staffing and resources needed to explore and take advantage of developments in artificial intelligence (AI) on their campuses. This article provides a roadmap to help institutions with more limited resources advance AI use on their campuses.
The following activities can help smaller institutions better understand AI and lay a solid foundation that will allow them to benefit from it.
Understand the impact…
Understand the different types of AI tools…
Focus on institutional data and knowledge repositories…
Smaller institutions do not need to fear being left behind in the wake of rapid advancements in AI technologies and tools. By thinking intentionally about how AI will impact the institution, becoming familiar with the different types of AI tools, and establishing a strong data and analytics infrastructure, institutions can establish the groundwork for AI success. The five fundamental activities of coordinating, learning, planning and governing, implementing, and reviewing and refining can help smaller institutions make progress on their journey to use AI tools to gain efficiencies and improve students’ experiences and outcomes while keeping true to their institutional missions and values.
That is what they are doing here. Lesson plans focus on learners rather than the traditional teacher-centric model. Assessing prior strengths and weaknesses, personalising to focus more on weaknesses and less on things known or mastered. It’s adaptive, personalised learning. The idea that everyone should learn at exactly the same pace, within the same timescale, is slightly ridiculous, ruled by the need for timetabling a one-to-many classroom model.
For the first time in the history of our species we have technology that performs some of the tasks of teaching. We have reached a pivot point where this can be tried and tested. My feeling is that we’ll see a lot more of this, as parents and general teachers can delegate a lot of the exposition and teaching of the subject to the technology. We may just see a breakthrough that transforms education.
Agentic AI will be the top tech trend for 2025, according to research firm Gartner. The term describes autonomous machine “agents” that move beyond query-and-response generative chatbots to do enterprise-related tasks without human guidance.
…
More realistic challenges that the firm has listed elsewhere include:
Agentic AI proliferating without governance or tracking;
Agentic AI making decisions that are not trustworthy;
All or nothing at Educause24 — from onedtech.philhillaa.com by Kevin Kelly Looking for specific solutions at the conference exhibit hall, with an educator focus
Here are some notable trends:
Alignment with campus policies: …
Choose your own AI adventure: …
Integrate AI throughout a workflow: …
Moving from prompt engineering to bot building: …
More complex problem-solving: …
…
Not all AI news is good news. In particular, AI has exacerbated the problem of fraudulent enrollment, i.e., rogue actors who use fake or stolen identities to steal financial aid funding with no intention of completing coursework.
…
The consequences are very real, including financial aid funding going to criminal enterprises, enrollment estimates getting dramatically skewed, and legitimate students being blocked from registering for classes that appear “full” due to large numbers of fraudulent enrollments.
SALT LAKE CITY, Oct. 22, 2024 /PRNewswire/ — Instructure, the leading learning ecosystem, and UPCEA, the online and professional education association, announced the results of a survey on whether institutions are leveraging AI to improve learner outcomes and manage records, along with the specific ways these tools are being utilized. Overall, the study revealed interest in the potential of these technologies is far outpacing adoption. Most respondents are heavily involved in developing learner experiences and tracking outcomes, though nearly half report their institutions have yet to adopt AI-driven tools for these purposes. The research also found that only three percent of institutions have implemented Comprehensive Learner Records (CLRs), which provide a complete overview of an individual’s lifelong learning experiences.
In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.
Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.
The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.
It is bonkers that I can write out all my life goals on a sheet of paper, take a photo of it, and just ask Claude or ChatGPT for help.
I get a complete plan, milestones, KPIs, motivation, and even action support to get there.
As beta testers, we’re shaping the tools of tomorrow. As researchers, we’re pioneering new pedagogical approaches. As ethical guardians, we’re ensuring that AI enhances rather than compromises the educational experience. As curators, we’re guiding students through the wealth of information AI provides. And as learners ourselves, we’re staying at the forefront of educational innovation.
The Tutoring Revolution— from educationnext.org by Holly Korbey More families are seeking one-on-one help for their kids. What does that tell us about 21st-century education?
Recent research suggests that the number of students seeking help with academics is growing, and that over the last couple of decades, more families have been turning to tutoring for that help.
…
What the Future Holds
Digital tech has made private tutoring more accessible, more efficient, and more affordable. Students whose families can’t afford to pay $75 an hour at an in-person center can now log on from home to access a variety of online tutors, including Outschool, Wyzant, and Anchorbridge, and often find someone who can cater to their specific skills and needs—someone who can offer help in French to a student with ADHD, for example. Online tutoring is less expensive than in-person programs. Khan Academy’s Khanmigo chatbot can be a student’s virtual AI tutor, no Zoom meeting required, for $4 a month, and nonprofits like Learn to Be work with homeless shelters and community centers to give virtual reading and math tutoring free to kids who can’t afford it and often might need it the most.
Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real-time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.
Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.
Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots. For instance, a story about a hamster named Greg who performed in a talent show in outer space.
On Tuesday, Workera announced Sage, an AI agent you can talk with that’s designed to assess an employee’s skill level, goals, and needs. After taking some short tests, Workera claims Sage will accurately gauge how proficient someone is at a certain skill. Then, Sage can recommend the appropriate online courses through Coursera, Workday, or other learning platform partners. Through chatting with Sage, Workera is designed to meet employees where they are, testing their skills in writing, machine learning, or math, and giving them a path to improve.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today— from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
Going forward, the opportunity for AI agents will be “gigantic,” according to Nvidia founder and CEO Jensen Huang.
Already, progress is “spectacular and surprising,” with AI development moving faster and faster and the industry getting into the “flywheel zone” that technology needs to advance, Huang said in a fireside chat at Salesforce’s flagship event Dreamforce this week.
“This is an extraordinary time,” Huang said while on stage with Marc Benioff, Salesforce chair, CEO and co-founder. “In no time in history has technology moved faster than Moore’s Law. We’re moving way faster than Moore’s Law, are arguably reasonably Moore’s Law squared.”
“We’ll have agents working with agents, agents working with us,” said Huang.
When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.
The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.
… Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.
Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.
Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.
The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall] The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.
[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.
Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.
What is the AI Risk Repository? The AI Risk Repository has three parts:
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
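To make the repository's three-part structure concrete, here is a minimal sketch of how a risk entry and its two taxonomy classifications could be modeled. The field names and sample entries are my own illustrative assumptions; they follow the description above but are not the repository's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str        # the risk, with its source quote
    source: str             # which of the 43 frameworks it came from, and page number
    causal_tags: dict       # Causal Taxonomy: how / when / why the risk occurs
    domain: str             # Domain Taxonomy: one of 7 domains
    subdomain: str          # one of 23 subdomains

# Hypothetical sample entries (not taken from the actual database):
risks = [
    Risk("Model generates false claims", "Framework A, p. 12",
         {"how": "unintentional", "when": "post-deployment", "why": "capability limits"},
         "Misinformation", "False or misleading information"),
    Risk("Agent acts outside its mandate", "Framework B, p. 34",
         {"how": "unintentional", "when": "post-deployment", "why": "misalignment"},
         "AI system failures", "Lack of robustness"),
]

# The Domain Taxonomy then supports queries like "all misinformation risks":
misinfo = [r for r in risks if r.domain == "Misinformation"]
```

A structure like this is what lets the database answer both "what risks exist in this domain?" and "how, when, and why do they arise?" from the same 700+ entries.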
SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.
Per Oncely:
The Details:
Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.
Over the coming days, start creating and chatting with Gems: customizable versions of Gemini that act as topic experts.
We’re also launching premade Gems for different scenarios – including Learning coach to break down complex topics and Coding partner to level up your skills…
We have new features rolling out, [that started on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.
Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.
Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.
In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.
…
Citing AWS guidance, ZDNET counts six different potential types of AI agents:
Simple reflex agents for tasks like resetting passwords
Model-based reflex agents for pro vs. con decision making
Goal-/rule-based agents that compare options and select the most efficient pathways
Utility-based agents that compare for value
Learning agents
Hierarchical agents that manage and assign subtasks to other agents
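The first and third entries in that taxonomy can be sketched in a few lines of code. This is a toy illustration only, assuming invented class names, rules, and a trivial cost function; it is not an AWS API or a production agent framework.

```python
class SimpleReflexAgent:
    """Acts on the current percept alone, via fixed condition-action rules
    (e.g., the password-reset example above)."""
    def __init__(self, rules):
        self.rules = rules  # maps a percept to an action

    def act(self, percept):
        # No model of the world, no goals: just look up the rule.
        return self.rules.get(percept, "escalate_to_human")


class GoalBasedAgent:
    """Compares candidate options and selects the most efficient pathway
    toward a goal, as judged by a cost function."""
    def __init__(self, cost_fn):
        self.cost_fn = cost_fn

    def act(self, options):
        # Evaluate every option and pick the cheapest path.
        return min(options, key=self.cost_fn)


# Hypothetical usage:
helpdesk = SimpleReflexAgent({"forgot_password": "send_reset_link"})
planner = GoalBasedAgent(cost_fn=len)  # toy cost: fewer steps is better

helpdesk.act("forgot_password")                # -> "send_reset_link"
planner.act([["a", "b", "c"], ["a", "c"]])     # picks the two-step path
```

The remaining types in the list differ mainly in what they add on top: an internal world model (model-based), an explicit utility function (utility-based), experience-driven updates to the rules themselves (learning), or the ability to delegate subtasks to other agents (hierarchical).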
What An Agent Is
Agents are computer programs that can autonomously perform tasks, make decisions and interact with humans or other computers. There are many different types of agents, and they are designed to achieve specific goals spanning our lives and nearly every industry, making them an integral and unstoppable part of our future.
Learning: AI agents will transform education by providing personalized learning experiences such as one-to-one tutoring. ChatGPT and other large language models (LLMs) are providing access to all digital knowledge now. An “agent” would act as a more personalized version of an LLM.
The hacking and control of an AI agent could lead to disastrous consequences, affecting privacy, security, the economy and societal stability. Proactive and comprehensive security strategies are essential to mitigate these risks in the future.
Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.
It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?
The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.
For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?
Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.
*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments
Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I am mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those topics that can feel really important, though probably only to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.
In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.
Therefore, I believe we must double down on a learning-centered model in the age of AI.
The first step is to fully and enthusiastically embrace AI.
…
The second step is to find the “jagged technological frontier” of using AI in the college classroom.
What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik Even techno optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line
ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.
When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.
Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.
While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.
…
…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.
InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan Another solid conference from the market leader, even with unclear roadmap
The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.
Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.
…
Other AI-based features include:
Discussion tool summaries and AI-generated responses…
The landscape of education is on the brink of a profound transformation, driven by rapid advancements in artificial intelligence. This shift was highlighted recently by Andrej Karpathy’s announcement of Eureka Labs, a venture aimed at creating an “AI-native” school. As we look ahead, it’s clear that the integration of AI in education will reshape how we learn, teach, and think about schooling altogether.
…
Traditional textbooks will begin to be replaced by interactive, AI-powered learning materials that adapt in real-time to a student’s progress.
…
As we approach 2029, the line between physical and virtual learning environments will blur significantly.
Curriculum design will become more flexible and personalized, with AI systems suggesting learning pathways based on each student’s interests, strengths, and career aspirations. … The boundaries between formal education and professional development will blur, creating a continuous learning ecosystem.
Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showed that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.
“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.
The diverging views about AI are causing friction. Nearly a third of students said they have been warned to not use generative AI by professors, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.
What teachers want from AI — from hechingerreport.org by Javeria Salman
When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more
An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.
These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.
Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.
Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli
The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.
Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).
…
Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.
To be sure, an AI coach is no replacement for human coaching.
We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.
Stealth AI — from higherai.substack.com by Jason Gulya
What happens when students use AI all the time, but aren’t allowed to talk about it?
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, then it deprives students of the space they often need to make heads or tails of this technology.
We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.
In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.
Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.
… Copilot for Microsoft 365 – Educator features

Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements, with easy ways to customize the content to their exact needs.

Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators

Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.

Education data integration

Copilot for Microsoft 365 – Student features

Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365
…
New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks
We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.