The Tutoring Revolution — from educationnext.org by Holly Korbey More families are seeking one-on-one help for their kids. What does that tell us about 21st-century education?
Recent research suggests that the number of students seeking help with academics is growing, and that over the last couple of decades, more families have been turning to tutoring for that help.
…
What the Future Holds
Digital tech has made private tutoring more accessible, more efficient, and more affordable. Students whose families can’t afford to pay $75 an hour at an in-person center can now log on from home to access a variety of online tutoring platforms, including Outschool, Wyzant, and Anchorbridge, and often find someone who can cater to their specific skills and needs—someone who can offer help in French to a student with ADHD, for example. Online tutoring is less expensive than in-person programs. Khan Academy’s Khanmigo chatbot can be a student’s virtual AI tutor, no Zoom meeting required, for $4 a month, and nonprofits like Learn to Be work with homeless shelters and community centers to give virtual reading and math tutoring free to kids who can’t afford it and often might need it the most.
Duolingo’s new Video Call feature represents a leap forward in language practice for learners. This AI-powered tool allows Duolingo Max subscribers to engage in spontaneous, realistic conversations with Lily, one of Duolingo’s most popular characters. The technology behind Video Call is designed to simulate natural dialogue and provides a personalized, interactive practice environment. Even beginner learners can converse in a low-pressure environment because Video Call is designed to adapt to their skill level. By offering learners the opportunity to converse in real time, Video Call builds the confidence needed to communicate effectively in real-world situations. Video Call is available for Duolingo Max subscribers learning English, Spanish, and French.
Ello, the AI reading companion that aims to support kids struggling to read, launched a new product on Monday that allows kids to participate in the story-creation process.
Called “Storytime,” the new AI-powered feature helps kids generate personalized stories by picking from a selection of settings, characters, and plots. For instance, a story about a hamster named Greg who performed in a talent show in outer space.
On Tuesday, Workera announced Sage, an AI agent you can talk with that’s designed to assess an employee’s skill level, goals, and needs. After the employee takes some short tests, Workera claims, Sage will accurately gauge how proficient that person is at a certain skill. Sage can then recommend appropriate online courses through Coursera, Workday, or other learning platform partners. Through these chats, Sage is designed to meet employees where they are, testing their skills in writing, machine learning, or math, and giving them a path to improve.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5, and the quality of its image-to-video and text-to-video generation is impressive.
Hailuo MiniMax text-to-video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
Going forward, the opportunity for AI agents will be “gigantic,” according to Nvidia founder and CEO Jensen Huang.
Already, progress is “spectacular and surprising,” with AI development moving faster and faster and the industry getting into the “flywheel zone” that technology needs to advance, Huang said in a fireside chat at Salesforce’s flagship event Dreamforce this week.
“This is an extraordinary time,” Huang said while on stage with Marc Benioff, Salesforce chair, CEO and co-founder. “In no time in history has technology moved faster than Moore’s Law. We’re moving way faster than Moore’s Law, arguably at Moore’s Law squared.”
“We’ll have agents working with agents, agents working with us,” said Huang.
When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
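This feedback loop can be illustrated with a toy simulation. The sketch below is a simplified stand-in (a Gaussian repeatedly re-fitted to its own samples), not the experimental setup used in the research the article cites:

```python
import random

def collapse_demo(generations=500, sample_size=50, seed=0):
    """Toy illustration of model collapse: each generation's 'model'
    is just a Gaussian fitted to samples drawn from the previous
    generation's fit. Finite-sample estimation error compounds, and
    the learned distribution's variance drifts toward zero."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = sum(samples) / sample_size
        var = sum((x - mu) ** 2 for x in samples) / sample_size
        sigma = var ** 0.5
        variances.append(var)
    return variances

variances = collapse_demo()
print(f"variance: generation 0 = {variances[0]:.3f}, "
      f"generation 500 = {variances[-1]:.4f}")
```

The collapse comes from recursively fitting to the previous generation's output rather than the original data, which is exactly the loop the article warns about: the distribution narrows and the tails of the real data disappear.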
This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.
The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster, powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.
… Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.
Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.
Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.
The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall] The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.
[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.
Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.
What is the AI Risk Repository? The AI Risk Repository has three parts:
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.
Per Oncely:
The Details:
Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.
Over the coming days, start creating and chatting with Gems: customizable versions of Gemini that act as topic experts.
We’re also launching premade Gems for different scenarios – including Learning coach to break down complex topics and Coding partner to level up your skills… pic.twitter.com/2Dk8NxtTCE
We have new features rolling out, [that started on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.
Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.
Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.
In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.
…
Citing AWS guidance, ZDNET counts six different potential types of AI agents:
Simple reflex agents for tasks like resetting passwords
Model-based reflex agents for pro vs. con decision making
Goal-/rule-based agents that compare options and select the most efficient pathways
Utility-based agents that compare for value
Learning agents
Hierarchical agents that manage and assign subtasks to other agents
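Two of the categories above can be sketched in a few lines of Python. This is a hypothetical illustration only; the rule table, option format, and goal below are invented for the example, not taken from AWS or ZDNET:

```python
def simple_reflex_agent(percept: str) -> str:
    """A simple reflex agent maps the current percept directly to an
    action via condition-action rules: no state, no lookahead."""
    rules = {
        "password_forgotten": "send_reset_link",
        "account_locked": "escalate_to_support",
    }
    return rules.get(percept, "ask_clarifying_question")

def goal_based_agent(options: list, goal: str) -> dict:
    """A goal-/rule-based agent compares candidate actions and picks
    the one that reaches the goal at the lowest cost."""
    viable = [o for o in options if goal in o["achieves"]]
    return min(viable, key=lambda o: o["cost"])

print(simple_reflex_agent("password_forgotten"))  # send_reset_link
plan = goal_based_agent(
    [{"name": "route_a", "achieves": {"deliver"}, "cost": 5},
     {"name": "route_b", "achieves": {"deliver"}, "cost": 3}],
    goal="deliver",
)
print(plan["name"])  # route_b
```

The other categories build on these same ideas: utility-based agents replace the cost comparison with a richer value function, learning agents update the rule table from experience, and hierarchical agents delegate subgoals to agents like these.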
What An Agent Is
Agents are computer programs that can autonomously perform tasks, make decisions and interact with humans or other computers. There are many different types of agents, and they are designed to achieve specific goals spanning our lives and nearly every industry, making them an integral and unstoppable part of our future.
Learning: AI agents will transform education by providing personalized learning experiences such as one-to-one tutoring. ChatGPT and other large language models (LLMs) are providing access to all digital knowledge now. An “agent” would act as a more personalized version of an LLM.
The hacking and control of an AI agent could lead to disastrous consequences, affecting privacy, security, the economy and societal stability. Proactive and comprehensive security strategies are essential to mitigate these risks in the future.
Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.
It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?
The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.
For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?
Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.
*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments
Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I am mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides, talking to authors, scanning the commentary online, finding some experts to consult, and talking to the publishers. It’s one of those things that can feel really important and yet probably only matters to the very small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.
In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.11
Therefore, I believe we must double down on a learning-centered model in the age of AI.
The first step is to fully and enthusiastically embrace AI.
…
The second step is to find the “jagged technological frontier” of using AI in the college classroom.
What aspects of teaching should remain human? — from hechingerreport.org by Chris Berdik Even techno optimists hesitate to say teaching is best left to the bots, but there’s a debate about where to draw the line
ATLANTA — Science teacher Daniel Thompson circulated among his sixth graders at Ron Clark Academy on a recent spring morning, spot checking their work and leading them into discussions about the day’s lessons on weather and water. He had a helper: As Thompson paced around the class, peppering them with questions, he frequently turned to a voice-activated AI to summon apps and educational videos onto large-screen smartboards.
When a student asked, “Are there any animals that don’t need water?” Thompson put the question to the AI. Within seconds, an illustrated blurb about kangaroo rats appeared before the class.
Nitta said there’s something “deeply profound” about human communication that allows flesh-and-blood teachers to quickly spot and address things like confusion and flagging interest in real time.
While the traditional model of education is entrenched, emerging technologies like deep learning promise to shake its foundations and usher in an age of personalized, adaptive, and egalitarian education. It is expected to have a significant impact across higher education in several key ways.
…
…deep learning introduces adaptivity into the learning process. Unlike a typical lecture, deep learning systems can observe student performance in real-time. Confusion over a concept triggers instant changes to instructional tactics. Misconceptions are identified early and remediated quickly. Students stay in their zone of proximal development, constantly challenged but never overwhelmed. This adaptivity prevents frustration and stagnation.
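A minimal sketch of that adaptivity might look like the controller below; the thresholds and names are illustrative assumptions, not any product's actual values:

```python
def adapt_difficulty(level: int, recent_correct: list,
                     low: float = 0.5, high: float = 0.85) -> int:
    """Keep the learner's recent success rate inside a target band,
    a rough stand-in for the 'zone of proximal development'."""
    if not recent_correct:
        return level
    rate = sum(recent_correct) / len(recent_correct)
    if rate > high:                # too easy: challenge more
        return level + 1
    if rate < low:                 # confusion detected: remediate
        return max(1, level - 1)
    return level                   # in the zone: hold steady

print(adapt_difficulty(3, [True] * 5))            # 4 (step up)
print(adapt_difficulty(3, [False, False, True]))  # 2 (step down)
print(adapt_difficulty(3, [True, True, False]))   # 3 (hold)
```

Real adaptive systems model far more than a pass rate (response times, misconception patterns, concept prerequisites), but the core loop, observe performance and adjust instruction each step, is the same.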
InstructureCon 24 Conference Notes — from onedtech.philhillaa.com by Glenda Morgan Another solid conference from the market leader, even with unclear roadmap
The new stuff: AI
Instructure rolled out multiple updates and improvements – more than last year. These included many AI-based or focused tools and services as well as some functional improvements. I’ll describe the AI features first.
Sal Khan was a surprise visitor to the keynote stage to announce the September availability of the full suite of AI-enabled Khanmigo Teacher Tools for Canvas users. The suite includes 20 tools, such as tools to generate lesson plans and quiz questions and write letters of recommendation. Next year, they plan to roll out tools for students themselves to use.
…
Other AI-based features include:
Discussion tool summaries and AI-generated responses…
The landscape of education is on the brink of a profound transformation, driven by rapid advancements in artificial intelligence. This shift was highlighted recently by Andrej Karpathy’s announcement of Eureka Labs, a venture aimed at creating an “AI-native” school. As we look ahead, it’s clear that the integration of AI in education will reshape how we learn, teach, and think about schooling altogether.
…
Traditional textbooks will begin to be replaced by interactive, AI-powered learning materials that adapt in real-time to a student’s progress.
…
As we approach 2029, the line between physical and virtual learning environments will blur significantly.
Curriculum design will become more flexible and personalized, with AI systems suggesting learning pathways based on each student’s interests, strengths, and career aspirations. … The boundaries between formal education and professional development will blur, creating a continuous learning ecosystem.
Meanwhile, a separate survey of faculty released Thursday by Ithaka S+R, a higher education consulting firm, showed that faculty—while increasingly familiar with AI—often do not know how to use it in classrooms. Two out of five faculty members are familiar with AI, the Ithaka report found, but only 14 percent said they are confident in their ability to use AI in their teaching. Just slightly more (18 percent) said they understand the teaching implications of generative AI.
“Serious concerns about academic integrity, ethics, accessibility, and educational effectiveness are contributing to this uncertainty and hostility,” the Ithaka report said.
The diverging views about AI are causing friction. Nearly a third of students said they have been warned to not use generative AI by professors, and more than half (59 percent) are concerned they will be accused of cheating with generative AI, according to the Pearson report, which was conducted with Morning Consult and surveyed 800 students.
What teachers want from AI — from hechingerreport.org by Javeria Salman When teachers designed their own AI tools, they built math assistants, tools for improving student writing, and more
An AI chatbot that walks students through how to solve math problems. An AI instructional coach designed to help English teachers create lesson plans and project ideas. An AI tutor that helps middle and high schoolers become better writers.
These aren’t tools created by education technology companies. They were designed by teachers tasked with using AI to solve a problem their students were experiencing.
Over five weeks this spring, about 300 people – teachers, school and district leaders, higher ed faculty, education consultants and AI researchers – came together to learn how to use AI and develop their own basic AI tools and resources. The professional development opportunity was designed by technology nonprofit Playlab.ai and faculty at the Relay Graduate School of Education.
Next-Gen Classroom Observations, Powered by AI — from educationnext.org by Michael J. Petrilli The use of video recordings in classrooms to improve teacher performance is nothing new. But the advent of artificial intelligence could add a helpful evaluative tool for teachers, measuring instructional practice relative to common professional goals with chatbot feedback.
Multiple companies are pairing AI with inexpensive, ubiquitous video technology to provide feedback to educators through asynchronous, offsite observation. It’s an appealing idea, especially given the promise and popularity of instructional coaching, as well as the challenge of scaling it effectively (see “Taking Teacher Coaching To Scale,” research, Fall 2018).
…
Enter AI. Edthena is now offering an “AI Coach” chatbot that offers teachers specific prompts as they privately watch recordings of their lessons. The chatbot is designed to help teachers view their practice relative to common professional goals and to develop action plans to improve.
To be sure, an AI coach is no replacement for human coaching.
We need to shift our thinking about GenAI tutors serving only as personal learning tools. The above activities illustrate how these tools can be integrated into contemporary classroom instruction. The activities should not be seen as prescriptive but merely suggestive of how GenAI can be used to promote social learning. Although I specifically mention only one online activity (“Blended Learning”), all can be adapted to work well in online or blended classes to promote social interaction.
Stealth AI — from higherai.substack.com by Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler What happens when students use AI all the time, but aren’t allowed to talk about it?
In many ways, this comes back to one of my general rules: You cannot ban AI in the classroom. You can only issue a gag rule.
And if you do issue a gag rule, it deprives students of the space they often need to make heads or tails of this technology.
We need to listen to actual students talking about actual uses, and reflecting on their actual feelings. No more abstraction.
In this conversation, Jason Gulya (a Professor of English at Berkeley College) talks to Zack Kinzler about what students are saying about Artificial Intelligence and education.
Welcome to our monthly update for Teams for Education and thank you so much for being part of our growing community! We’re thrilled to share over 20 updates and resources and show them in action next week at ISTELive 24 in Denver, Colorado, US.
…
Copilot for Microsoft 365 – Educator features
Guided Content Creation
Coming soon to Copilot for Microsoft 365 is a guided content generation experience to help educators get started with creating materials like assignments, lesson plans, lecture slides, and more. The content will be created based on the educator’s requirements, with easy ways to customize the content to their exact needs.
Standards alignment and creation
Quiz generation through Copilot in Forms
Suggested AI Feedback for Educators
Teaching extension
To better support educators with their daily tasks, we’ll be launching a built-in Teaching extension to help guide them through relevant activities and provide contextual, educator-based support in Copilot.
Education data integration
Copilot for Microsoft 365 – Student features
Interactive practice experiences
Flashcards activity
Guided chat activity
Learning extension in Copilot for Microsoft 365
…
New AI tools for Google Workspace for Education — from blog.google by Akshay Kirtikar and Brian Hendricks We’re bringing Gemini to teen students using their school accounts to help them learn responsibly and confidently in an AI-first future, and empowering educators with new tools to help create great learning experiences.
We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.
From DSC: As is often the case, David put together a solid posting here. A few comments/reflections on it:
I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
The pace of change makes it difficult to see where the sand is settling…and thus what to focus on
The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
LearnLM is our new family of models fine-tuned for learning, and grounded in educational research to make teaching and learning experiences more active, personal and engaging.
We often talk about what Generative AI will do for coders, healthcare, science or even finance, but what about the benefits for the next generation? Permit me if you will, here I’m thinking about teachers and students.
It’s no secret that some of the most active users of ChatGPT in its heyday, were students. But how are other major tech firms thinking about this?
I actually think one of the best products with the highest ceiling from Google I/O 2024 is LearnLM. It has to be way more than a chatbot, it has to feel like a multimodal tutor. I can imagine frontier model agents (H) doing this fairly well.
What if everyone, everywhere could have their own personal AI tutor, on any topic?
ChatGPT4o Is the TikTok of AI Models — from nickpotkalitsky.substack.com by Nick Potkalitsky In Search of Better Tools for AI Access in K-12 Classrooms
Nick makes the case that we should pause on the use of OpenAI in the classrooms:
In light of these observations, it’s clear that we must pause and rethink the use of OpenAI products in our classrooms, except for rare cases where accessibility needs demand it. The rapid consumerization of AI, epitomized by GPT4o’s transformation into an AI salesperson, calls for caution.
[On 5/21/24] at Microsoft Build, Microsoft and Khan Academy announced a new partnership that aims to bring these time-saving and lesson-enhancing AI tools to millions of educators. By donating access to Azure AI-optimized infrastructure, Microsoft is enabling Khan Academy to offer all K-12 educators in the U.S. free access to the pilot of Khanmigo for Teachers, which will now be powered by Azure OpenAI Service.
The two companies will also collaborate to explore opportunities to improve AI tools for math tutoring in an affordable, scalable and adaptable way with a new version of Phi-3, a family of small language models (SLMs) developed by Microsoft.
Khan Academy’s AI assistant, Khanmigo, has earned praise for helping students to understand and practice everything from math to English, but it can also help teachers devise lesson plans, formulate questions about assigned readings, and even generate reading passages appropriate for students at different levels. More than just a chatbot, the software offers specific AI-powered tools for generating quizzes and assignment instructions, drafting lesson plans, and formulating letters of recommendation.
…
Having a virtual teaching assistant is especially valuable in light of recent research from the RAND Corporation that found teachers work longer hours than most working adults, which includes administrative and prep work outside the classroom.