From DSC:
I love the graphic below of the Dunning-Kruger Effect:


 

— graphic via a teacher at one of our daughters’ schools


The Dunning-Kruger effect is a cognitive bias where people with low ability in a task tend to overestimate their own competence, while high-ability individuals often underestimate theirs. This happens because those with low competence lack the metacognitive skills to recognize their own shortcomings, leading them to believe they are performing better than they are. Examples include a new driver who thinks they are better than average, or a novice who is confident in their ability to diagnose a medical issue based on a quick online search.

Examples in different fields

  • Driving: Most drivers rate themselves as above average, which cannot be true for everyone.
  • Healthcare: Patients may overestimate their ability to self-diagnose serious conditions after a quick search and disregard expert medical advice.
  • Workplace: Employees may overestimate their performance compared to their colleagues.
  • Social Media: The Dunning-Kruger effect can be seen online, where individuals with a superficial understanding of a topic may argue confidently with experts.
 

10 Tips from Smart Teaching Stronger Learning — from Pooja K. Agarwal, Ph.D.

Per Dr. Pooja Agarwal:

Combining two strategies—spacing and retrieval practice—is key to success in learning, says Shana Carpenter.


On a somewhat related note (i.e., for Instructional Designers, teachers, faculty members, T&L staff members), also see:

 

ChatGPT: the world’s most influential teacher — from drphilippahardman.substack.com by Dr. Philippa Hardman; emphasis DSC
New research shows that millions of us are “learning with AI” every week: what does this mean for how (and how well) humans learn?

This week, an important piece of research landed that confirms the gravity of AI’s role in the learning process. The TLDR is that learning is now a mainstream use case for ChatGPT; around 10.2% of all ChatGPT messages (that’s ~2BN messages sent by over 7 million users per week) are requests for help with learning.

The research shows that about 10.2% of all messages are tutoring/teaching, and within the “Practical Guidance” category, tutoring is 36%. “Asking” interactions are growing faster than “Doing” and are rated higher quality by users. Younger people contribute a huge share of messages, and growth is fastest in low- and middle-income countries (How People Use ChatGPT, 2025).

If AI is already acting as a global tutor, the question isn’t “will people learn with AI?”—they already are. The real question we need to ask is: what does great learning actually look like, and how should AI evolve to support it? That’s where decades of learning science help us separate “feels like learning” from “actually gaining new knowledge and skills”.

Let’s dive in.

 

15 Quick (and Mighty) Retrieval Practices — from edutopia.org by Daniel Leonard
From concept maps to flash cards to Pictionary, these activities help students reflect on—and remember—what they’ve learned.

But to genuinely commit information to long-term memory, there’s no replacement for active retrieval—the effortful practice of recalling information from memory, unaided by external sources like notes or the textbook. “Studying this way is mentally difficult,” Willingham acknowledged, “but it’s really, really good for memory.”

From low-stakes quizzes to review games to flash cards, there are a variety of effective retrieval practices that teachers can implement in class or recommend that students try at home. Drawing from a wide range of research, we compiled this list of 15 actionable retrieval practices.
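As a concrete illustration of pairing spacing with retrieval practice, here is a minimal sketch of an expanding-interval flash-card schedule. The interval values, data structure, and function names are illustrative assumptions, not anything prescribed by the Edutopia list or by Agarwal's work.

```python
# Minimal sketch: combine spacing with retrieval practice by scheduling each
# flash card for recall at expanding intervals. The intervals below are
# illustrative assumptions, not a published schedule.

from datetime import date, timedelta

INTERVALS_DAYS = [1, 3, 7, 14, 30]  # each successful recall pushes review further out

def next_review(card_level: int, last_reviewed: date) -> date:
    """Return the date a card should next be retrieved from memory."""
    interval = INTERVALS_DAYS[min(card_level, len(INTERVALS_DAYS) - 1)]
    return last_reviewed + timedelta(days=interval)

def update_level(card_level: int, recalled_correctly: bool) -> int:
    """Move a card up the schedule on success; start it over on a miss."""
    return card_level + 1 if recalled_correctly else 0

# Example: a card recalled correctly twice is spaced out to a 7-day gap.
level = update_level(update_level(0, True), True)   # level 2
print(next_review(level, date(2025, 1, 1)))          # 2025-01-08
```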


And speaking of cognitive science, also see:

‘Cognitive Science,’ All the Rage in British Schools, Fails to Register in U.S. — from the74million.org by Greg Toppo
Educators blame this ‘reverse Beatles effect’ on America’s decentralized system and grad schools that are often hostile to research.

When Zach Groshell zoomed in as a guest on a longstanding British education podcast last March, a co-host began the interview by telling listeners he was “very well-known over in the U.S.”

Groshell, a former Seattle-area fourth-grade teacher, had to laugh: “Nobody knows me here in the U.S.,” he said in an interview.

But in Britain, lots of teachers know his name. An in-demand speaker at education conferences, he flies to London “as frequently as I can” to discuss Just Tell Them, his 2024 book on explicit instruction. Over the past year, Groshell has appeared virtually about once a month and has made two personal appearances at events across England.

The reason? A discipline known as cognitive science. Born in the U.S., it relies on decades of research on how kids learn to guide teachers in the classroom, and is at the root of several effective reforms, including the Science of Reading.

 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

  • What made this quote stand out to you?
  • How would you explain it in your own words?
  • What assumptions or values does the author seem to hold?
  • How does this quote deepen your understanding of your topic?

It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
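At a conceptual level, that loop might be sketched as follows. This is only a rough illustration of a model proposing its own updates and keeping the ones that help; the helper callables and the reward check are hypothetical placeholders, not the MIT team's actual SEAL implementation.

```python
# Rough conceptual sketch in the spirit of SEAL: for each new piece of
# information, the model writes its own synthetic training data ("self-edit"),
# applies a lightweight update, and keeps the update only if it helps.
# The evaluate/generate_self_edit/finetune callables are hypothetical stand-ins.

def self_adapting_loop(model, incoming_examples, evaluate, generate_self_edit, finetune):
    """Iterate over new inputs, letting the model propose and vet its own updates."""
    for example in incoming_examples:
        baseline = evaluate(model, example)              # performance before any update
        self_edit = generate_self_edit(model, example)   # model-written training data
        candidate = finetune(model, self_edit)           # candidate weight update
        if evaluate(candidate, example) > baseline:      # keep only helpful updates
            model = candidate
    return model
```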


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 

The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI — from papers.ssrn.com by Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, and Terrence Sejnowski; via George Siemens

Abstract
In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities.
This chapter offers the first neuroscience-based explanation for the observed reversal of the Flynn Effect—the recent decline in IQ scores in developed countries—linking this downturn to shifts in educational practices and the rise of cognitive offloading via AI and digital tools. Drawing on insights from neuroscience, cognitive psychology, and learning theory, we explain how underuse of the brain’s declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility. Finally, we outline policy implications for education, workforce development, and the responsible integration of AI, advocating strategies that harness technology as a complement to – rather than a replacement for – robust human knowledge.

Keywords
cognitive offloading, memory, neuroscience of learning, declarative memory, procedural memory, generative AI, Flynn Effect, education reform, schemata, digital tools, cognitive load, cognitive architecture, reinforcement learning, basal ganglia, working memory, retrieval practice, schema theory, manifolds

 

‘What I learned when students walked out of my AI class’ — from timeshighereducation.com by Chris Hogg
Chris Hogg found the question of using AI to create art troubled his students deeply. Here’s how the moment led to deeper understanding for both student and educator

Teaching AI can be as thrilling as it is challenging. This became clear one day when three students walked out of my class, visibly upset. They later explained their frustration: after spending years learning their creative skills, they were disheartened to see AI effortlessly outperform them in the blink of an eye.

This moment stuck with me – not because it was unexpected, but because it encapsulates the paradoxical relationship we all seem to have with AI. As both an educator and a creative, I find myself asking: how do we engage with this powerful tool without losing ourselves in the process? This is the story of how I turned moments of resistance into opportunities for deeper understanding.


In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn

The concern about cognitive laziness seems to be backed by Anthropic’s report that students use AI tools like Claude primarily for creating (39.8 per cent) and analysing (30.2 per cent) tasks, both considered higher-order cognitive functions according to Bloom’s Taxonomy. While these tasks align well with advanced educational objectives, they also pose a risk: students may increasingly delegate critical thinking and complex cognitive processes directly to AI, risking a reduction in their own cognitive engagement and skill development.


Make Instructional Design Fun Again with AI Agents — from drphilippahardman.substack.com by Dr. Philippa Hardman
A special edition practical guide to selecting & building AI agents for instructional design and L&D

Exactly how we do this has been less clear, but — fuelled by the rise of so-called “Agentic AI” — more and more instructional designers ask me: “What exactly can I delegate to AI agents, and how do I start?”

In this week’s post, I share my thoughts on exactly what instructional design tasks can be delegated to AI agents, and provide a step-by-step approach to building and testing your first AI agent.

Here’s a sneak peek…


AI Personality Matters: Why Claude Doesn’t Give Unsolicited Advice (And Why You Should Care) — from mikekentz.substack.com by Mike Kentz
First in a four-part series exploring the subtle yet profound differences between AI systems and their impact on human cognition

After providing Claude with several prompts of context about my creative writing project, I requested feedback on one of my novel chapters. The AI provided thoughtful analysis with pros and cons, as expected. But then I noticed what wasn’t there: the customary offer to rewrite my chapter.

Without Claude’s prompting, I found myself in an unexpected moment of metacognition. When faced with improvement suggestions but no offer to implement them, I had to consciously ask myself: “Do I actually want AI to rewrite this section?” The answer surprised me – no, I wanted to revise it myself, incorporating the insights while maintaining my voice and process.

The contrast was striking. With ChatGPT, accepting its offer to rewrite felt like a passive, almost innocent act – as if I were just saying “yes” to a helpful assistant. But with Claude, requesting a rewrite required deliberate action. Typing out the request felt like a more conscious surrender of creative agency.


Also re: metacognition and AI, see:

In the AI era, how do we battle cognitive laziness in students? — from timeshighereducation.com by Sean McMinn
With the latest AI technology now able to handle complex problem-solving processes, will students risk losing their own cognitive engagement? Metacognitive scaffolding could be the answer, writes Sean McMinn


By prompting students to articulate their cognitive processes, such tools reinforce the internalisation of self-regulated learning strategies essential for navigating AI-augmented environments.


EDUCAUSE Panel Highlights Practical Uses for AI in Higher Ed — from govtech.com by Abby Sourwine
A webinar this week featuring panelists from the education, private and nonprofit sectors attested to how institutions are applying generative artificial intelligence to advising, admissions, research and IT.

Many higher education leaders have expressed hope about the potential of artificial intelligence but uncertainty about where to implement it safely and effectively. According to a webinar Tuesday hosted by EDUCAUSE, “Unlocking AI’s Potential in Higher Education,” their answer may be “almost everywhere.”

Panelists at the event, including Kaskaskia College CIO George Kriss, Canyon GBS founder and CEO Joe Licata and Austin Laird, a senior program officer at the Gates Foundation, said generative AI can help colleges and universities meet increasing demands for personalization, timely communication and human-to-human connections throughout an institution, from advising to research to IT support.


Partly Cloudy with a Chance of Chatbots — from derekbruff.org by Derek Bruff

Here are the predictions, our votes, and some commentary:

  • “By 2028, at least half of large universities will embed an AI ‘copilot’ inside their LMS that can draft content, quizzes, and rubrics on demand.” The group leaned toward yes on this one, in part because it was easy to see LMS vendors building this feature in as a default.
  • “Discipline-specific ‘digital tutors’ (LLM chatbots trained on course materials) will handle at least 30% of routine student questions in gateway courses.” We leaned toward yes on this one, too, which is why some of us are exploring these tools today. We would like to be ready to use them well (or to avoid their use) by the time they are commonly available.
  • “Adaptive e-texts whose examples, difficulty, and media personalize in real time via AI will outsell static digital textbooks in the U.S. market.” We leaned toward no on this one, in part because the textbook market and what students want from textbooks have historically been slow to change. I remember offering my students a digital version of my statistics textbook maybe 6-7 years ago, and most students opted to print the whole thing out on paper like it was 1983.
  • “AI text detectors will be largely abandoned as unreliable, shifting assessment design toward oral, studio, or project-based ‘AI-resilient’ tasks.” We leaned toward yes on this. I have some concerns about oral assessments (they certainly privilege some students over others), but more authentic assignments seem like what higher ed needs in the face of AI. Ted Underwood recently suggested a version of this: “projects that attempt genuinely new things, which remain hard even with AI assistance.” See his post and the replies for some good discussion on this idea.
  • “AI will produce multimodal accessibility layers (live translation, alt-text, sign-language avatars) for most lecture videos without human editing.” We leaned toward yes on this one, too. This seems like another case where something will be provided by default, although my podcast transcripts are AI-generated and still need editing from me, so we’re not there quite yet.

‘We Have to Really Rethink the Purpose of Education’
The Ezra Klein Show

Description: I honestly don’t know how I should be educating my kids. A.I. has raised a lot of questions for schools. Teachers have had to adapt to the most ingenious cheating technology ever devised. But for me, the deeper question is: What should schools be teaching at all? A.I. is going to make the future look very different. How do you prepare kids for a world you can’t predict?

And if we can offload more and more tasks to generative A.I., what’s left for the human mind to do?

Rebecca Winthrop is the director of the Center for Universal Education at the Brookings Institution. She is also an author, with Jenny Anderson, of “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.” We discuss how A.I. is transforming what it means to work and be educated, and how our use of A.I. could revive — or undermine — American schools.


 

Outsourcing Thought: The Hidden Cost of Letting AI Think for You — from linkedin.com by Robert Atkinson

I’ve watched it unfold in real time. A student submits a flawless coding assignment or a beautifully written essay—clean syntax, sharp logic, polished prose. But when I ask them to explain their thinking, they hesitate. They can’t trace their reasoning or walk me through the process. The output is strong, but the understanding is shallow. As a professor, I’ve seen this pattern grow more common: AI-assisted work that looks impressive on the surface but reveals a troubling absence of cognitive depth underneath.

This article is written with my students in mind—but it’s meant for anyone navigating learning, teaching, or thinking in the age of artificial intelligence. Whether you’re a student, educator, or professional, the question is the same: What happens to the brain when we stop doing our own thinking?

We are standing at a pivotal moment. With just a few prompts, generative AI can produce essays, solve complex coding problems, and summarize ideas in seconds. It feels efficient. It feels like progress. But from a cognitive neuroscience perspective, that convenience comes at a hidden cost: the gradual erosion of the neural processes that support reasoning, creativity, and long-term learning.

 

Building Durable Skills into Middle School Career Exploration — from edmentum.com

As the needs of the modern workforce evolve at an unprecedented rate, durable, or “soft,” skills are often eclipsing demand for sought-after technical skills in high-demand jobs across industry sectors, geography, and educational level.

Through research, collaboration, and feedback from more than 800 educators, workforce professionals, industry leaders, and policymakers, America Succeeds—a leading educational policy and advocacy group—has developed Durable Skills and the Durable Skills Advantage Framework to provide a common language for the most in-demand durable skills. With 85% of career success being dependent on durable skills, this framework bridges the gap between the skills students are taught in school and evolving workforce needs.


On a somewhat related note, also see:

Green Workforce Connect and Building Green Pathways with Cynthia Finley — from gettingsmart.com by Mason Pashia

Over the last few years, we’ve been covering New Pathways, which we think of as a framework for school leaders and community members to create supports and systems that set students up for success in what’s next. This might be career exploration, client-connected projects, internships, or entrepreneurial experiences.

But what it really comes down to is connecting learners to real-world experiences and people and helping them articulate the skills that they gain in the process. Along the way, we began to talk a lot about green jobs. Many of the pre-existing pathways in secondary schools point towards CTE programs and trades, which are more in demand than they’ve been in decades.

This coincides with a pivotal moment in the arc of infrastructure redesign and development, one that heavily emphasizes clean energy trajectories and transferable skills. Many of these jobs fall under what we refer to as green pathways, or as requiring these green skills.

One leading organization in this space is the Interstate Renewable Energy Council, or IREC. I got to sit down with Cynthia Finley, the Vice President of Workforce Strategy at IREC, to talk about green pathways and what IREC is doing to increase awareness and exposure of green jobs and skills.

 

The Curiosity Matrix: 9 Habits of Curious Minds — from nesslabs.com by Anne-Laure Le Cunff; via Roberto Ferraro

As an adaptive trait, curiosity draws us to seek information and new experiences. It’s how we learn about ourselves, others, and the world.

Highly curious people are a diverse group, but the literature suggests that they share some common habits that support their personal and professional growth.

 

Conditions that trigger behaviour change — from peoplealchemy.com by Paul Matthews; via Learning Now TV

“Knowing is not enough; we must apply. Willing is not enough; we must do.”

Johann Wolfgang von Goethe

Learning Transfer’s ultimate outcome is behaviour change, so we must understand the conditions that trigger a behaviour to start.

According to Fogg, three specific elements must converge at the same moment for a specific behaviour to occur. Given that learning transfer is only successful when the learner starts behaving in the desired new ways, Fogg’s work is critical to understanding how to generate these new behaviours. The Fogg Behavioural Model [*1] states that B=MAP. That is, a specific behaviour will occur if at the same moment there is sufficient motivation, sufficient ability and sufficient prompt. If the behaviour does not occur, at least one of these three elements is missing or below the threshold required.

The prompt is, in effect, a call to action to do a specific behaviour. The prompt must be ‘loud’ enough for the target person to perceive it and be consciously aware of it. Once aware of a prompt, the target immediately, and largely unconsciously, assesses their ability to carry out the requested behaviour: how difficult would this be, how long will it take, who can help me, and so on. They base this on their perception of the difficulty of the requested behaviour, and their ability, as they see it, to achieve that behaviour.
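To make the B=MAP idea concrete, here is a minimal sketch that treats each element as a score that must clear a threshold at the same moment. The 0-to-1 scales, the threshold value, and the function name are illustrative assumptions for this post, not part of Fogg's published model.

```python
# Minimal sketch of the B = MAP idea from the Fogg Behavioural Model.
# The 0-1 scores, the single threshold, and the function name are
# illustrative assumptions, not Fogg's own formulation.

def behaviour_occurs(motivation: float, ability: float, prompt_strength: float,
                     threshold: float = 0.5) -> bool:
    """Return True if a specific behaviour is likely to occur.

    All three elements must be present at the same moment and each must
    clear the threshold; if any one is missing or too weak, the behaviour
    does not happen.
    """
    elements = (motivation, ability, prompt_strength)
    return all(value >= threshold for value in elements)

# Example: a clear prompt and high ability, but low motivation -> no behaviour.
print(behaviour_occurs(motivation=0.2, ability=0.9, prompt_strength=0.8))  # False
print(behaviour_occurs(motivation=0.7, ability=0.9, prompt_strength=0.8))  # True
```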

 

Start these 3 classroom habits ASAP! — from retrievalpractice.org by Pooja K. Agarwal, Ph.D.

Habit #2: Engage students in a brain dump or “two things” as an entry ticket or exit ticket. Spend one minute or less having students write down everything (or just two things) they remember from class. The key: Don’t grade it! Keep retrieval practice no-stakes to emphasize it’s a learning strategy, not an assessment strategy.

Teaching from the heart in 13 steps — from timeshighereducation.com by Beiting He
Engaging your students through empathy requires teachers to share their own stories and vulnerabilities and foster a safe space for learning. Here, Beiting He offers 13 ways to create a caring classroom

Move student communication from passive to active using ‘I like, I wish, I wonder’ — from timeshighereducation.com by Rebeca Elizabeth Alvarado Ramírez
Rebeca Elizabeth Alvarado Ramírez introduces a methodology that encourages effective communication in digital learning processes

In summary, “I wish” is about proposing positive changes and improvements, while “I wonder” is about asking thoughtful questions to gain insight and foster meaningful conversations within the team.

 

Related topics from DSC:

  • Getting someone’s attention
  • Having the information sink in and mean something to someone
  • Inspiration
  • Goal setting
  • Motivation
  • Metacognition?
  • Getting psyched to try something new out!

From DSC: Engaged students do not just absorb content; they try to make meaning of what they study. Engaged learners ***care about*** the subject, ***feel motivated or excited*** to learn, and take ownership of their learning.

 

Nurturing student learning and motivation through the application of cognitive science — from deansforimpact.org by Cece Zhou

Excerpt:

In particular, TutorND’s emphasis on applying principles of cognitive science – the science of how our minds work – in tutoring practice has not only bolstered the interest and confidence of some of its tutors to pursue teaching, but also strengthened their instructional skills and meaningfully contributed to PK-12 student growth.

Today, TutorND trains and supports 175 tutors in schools across the greater South Bend community and across the country. Given that these tutors are students, faculty, and staff interested in cognitive science research and its application to student learning, they’re able to bridge theory and practice, assess the effectiveness of instructional moves, and foster learning experiences for students that are rigorous, affirming, and equitable.

 

Teaching: A University-Wide Language for Learning — from chronicle.com by Beckie Supiano

Excerpt (emphasis DSC):

Last week, as I was interviewing Shaun Vecera about a new initiative he directs at the University of Iowa, he made a comment that stopped me in my tracks. The initiative, Learning at Iowa, is meant to create a common vocabulary, based on cognitive science, to support learning across the university. It focuses on “the three M’s for effective learning”: mind-set, metacognition, and memory.

“Not because those are the wrong ways of talking about that. But when you talk about learning, I think you can easily see how these skills transfer across not just courses, but also transfer from the university into a career.”


From DSC:
This reminds me of what I was trying to get at here — i.e., let’s provide folks with more information on learning how to learn.

Let’s provide folks with more information on learning how to learn.


Also relevant/see:

Changing your teaching takes more than a recipe — from chronicle.com by Beckie Supiano
Professors have been urged to adopt more effective practices. Why are their results so mixed?

Excerpts:

When the researchers asked their interview subjects how they first learned about peer instruction, many more cited informal discussions with colleagues than cited more formal channels like workshops. Even fewer pointed to a book or an article.

So even when there’s a really well-developed recipe, professors aren’t necessarily reading it.

In higher ed, teaching is often seen as something anyone who knows the content can automatically do. But the evidence suggests instead that teaching is an intellectual exercise that adds to subject-matter expertise.

This teaching-specific math knowledge, the researchers note, could be acquired in teacher preparation or professional development; however, it’s usually created on the job.

“Now, I’m much more apt to help them develop a deeper understanding of how people learn from a neuroscientific and cognitive-psychology perspective, and have them develop a model for how students learn.”

Erika Offerdahl, associate vice president and director of the Transformational Change Initiative at WSU

From DSC:
I love this part too:

There’s a role here, too, for education researchers. Not every evidence-based teaching practice has been broken into its critical components in the literature,

 
© 2025 | Daniel Christian