The comments/notes below are from DSC (with thanks to Roberto Ferraro for this resource):
According to Dan Pink, intrinsic motivation is very powerful — much more powerful for many types of “messy/unclear” cognitive work than for clear, more mechanical types of work. What’s involved here, according to Pink? Autonomy, mastery, and purpose.

Dan Pink makes his case in the video below. My question is:

  • If this is true, how might this be applied to education/training/lifelong learning?

From DSC (cont’d):

As Dan mentions, we each know this to be true. For example, my wife and I introduced each of our kids to a variety of things — music, sports, art, etc. We kept waiting for them to discover which thing(s) THEY wanted to pursue. Perhaps we’ll find out that this was the wrong thing to do, but according to Pink, it’s aligned with the type of energy and productivity that gets released when we pursue something that we want to pursue. Plus, creativity flows in this type of setting.

Again, my thanks to Roberto Ferraro for resurfacing this item as the “One ‘must read’ for this week” item in his newsletter.


Learners need: More voice. More choice. More control. -- this image was created by Daniel Christian

 

Is It Time to Rethink the Traditional Grading System? — from edsurge.com by Jeffrey R. Young and Robert Talbert

Excerpt:

After that, this professor vowed never to use traditional grades on tests again. But he wasn’t quite sure what to replace them with.

As Talbert soon discovered, there’s a whole world of so-called alternative grading systems. So many, in fact, that he ended up co-writing an entire book about them with a colleague at his university, David Clark. The book, which is due out this summer, is called “Grading for Growth: A Guide to Alternative Grading Practices that Promote Authentic Learning and Student Engagement in Higher Education.”

EdSurge connected with Talbert to hear what he uses in his classes now, and why he argues that reforming how grading works is key to increasing student engagement.

 

This company adopted AI. Here’s what happened to its human workers — from npr.org by Greg Rosalsky

Excerpt:

What the economists found offers potentially great news for the economy, at least in one dimension that is crucial to improving our living standards: AI caused a group of workers to become much more productive. Backed by AI, these workers were able to accomplish much more in less time, with greater customer satisfaction to boot. At the same time, however, the study also shines a spotlight on just how powerful AI is, how disruptive it might be, and suggests that this new, astonishing technology could have economic effects that change the shape of income inequality going forward.

The article links to:
Generative AI at Work — from nber.org by Erik Brynjolfsson, Danielle Li & Lindsey R. Raymond

We study the staggered introduction of a generative AI-based conversational assistant using data from 5,179 customer support agents. Access to the tool increases productivity, as measured by issues resolved per hour, by 14 percent on average, with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers. We provide suggestive evidence that the AI model disseminates the potentially tacit knowledge of more able workers and helps newer workers move down the experience curve. In addition, we show that AI assistance improves customer sentiment, reduces requests for managerial intervention, and improves employee retention.

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]


Teach AI -- Empowering educators to teach with AI and about AI


Also relevant/see:

 

Nurturing student learning and motivation through the application of cognitive science — from deansforimpact.org by Cece Zhou

Excerpt:

In particular, TutorND’s emphasis on applying principles of cognitive science – the science of how our minds work – in tutoring practice has not only bolstered the interest and confidence of some of its tutors to pursue teaching, but also strengthened their instructional skills and meaningfully contributed to PK-12 student growth.

Today, TutorND trains and supports 175 tutors in schools across the greater South Bend community and across the country. Given that these tutors are students, faculty, and staff interested in cognitive science research and its application to student learning, they’re able to bridge theory and practice, assess the effectiveness of instructional moves, and foster learning experiences for students that are rigorous, affirming, and equitable.

 

New NVIDIA Research — excerpted section from this version of The Rundown

The NVIDIA research team just dropped a new research paper on creating high-quality short videos from text prompts. This technique uses Video Latent Diffusion Models (Video LDMs), which work efficiently without using too much computing power.

It can create 113-frame videos at 1280×2048 resolution, rendered at 24 FPS, resulting in 4.7-second clips. The team first trained the model on images, then added a time dimension to make it work with videos.
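As a rough illustration of what “adding a time dimension” means (a toy sketch in plain Python, not NVIDIA’s actual code; the function names here are hypothetical): an image diffusion model denoises tensors shaped (batch, channels, height, width), and a video LDM inserts a frames axis to get (batch, channels, frames, height, width). The clip length quoted above also follows directly from the frame count and frame rate.

```python
# Toy illustration of the tensor shapes behind image-to-video latent diffusion.
# Hypothetical helper names, for illustration only -- not NVIDIA's implementation.

def image_latent_shape(batch, channels, height, width):
    """Shape an image diffusion model denoises: (B, C, H, W)."""
    return (batch, channels, height, width)

def video_latent_shape(batch, channels, frames, height, width):
    """Video LDMs insert a time axis: (B, C, T, H, W)."""
    return (batch, channels, frames, height, width)

def clip_duration_seconds(num_frames, fps):
    """Length of the rendered clip in seconds."""
    return num_frames / fps

# The figures quoted above: 113 frames rendered at 24 FPS.
print(round(clip_duration_seconds(113, 24), 1))  # prints 4.7
```

So the 4.7-second figure is simply 113 frames divided by 24 frames per second.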

This new research is impressive. At the current pace of development, we may soon be able to generate full-length movies from just a handful of text prompts within the next few years.


Also relevant/see:


The Secret to Great Learning Design? Focus on Problems, not Solutions — from drphilippahardman.substack.com by Dr. Philippa Hardman
What a recent resurgence of research into problem-based learning has taught us about the value & impact of problem-based approaches

Excerpts:

Problem-based learning is an instructional approach that engages students in active, collaborative, and self-directed learning by exploring complex, real-world problems (rather than sitting and listening to a sage on the stage).

In a problem-based learning scenario, students work in small groups and, under the guidance of a facilitator or instructor, identify, research, and analyse a problem before proposing and evaluating potential solutions and reaching a resolution.

Here are five of the most interesting research projects published on problem-based learning in the last few months:

 

Exploring generative AI and the implications for universities — from universityworldnews.com

Excerpt:

This is part of a weekly University World News special report series on ‘AI and higher education’. The focus is on how universities are engaging with ChatGPT and other generative artificial intelligence tools. The articles from academics and our journalists around the world are exploring developments and university work in AI that have implications for higher education institutions and systems, students and staff, and teaching, learning and research.

AI and higher education -- a report from University World News

 

Teaching: A University-Wide Language for Learning — from chronicle.com by Beckie Supiano

Excerpt (emphasis DSC):

Last week, as I was interviewing Shaun Vecera about a new initiative he directs at the University of Iowa, he made a comment that stopped me in my tracks. The initiative, Learning at Iowa, is meant to create a common vocabulary, based on cognitive science, to support learning across the university. It focuses on “the three M’s for effective learning”: mind-set, metacognition, and memory.

“Not because those are the wrong ways of talking about that. But when you talk about learning, I think you can easily see how these skills transfer across not just courses, but also transfer from the university into a career.”


From DSC:
This reminds me of what I was trying to get at here — i.e., let’s provide folks with more information on learning how to learn.

Let’s provide folks with more information on learning how to learn



Also relevant/see:

Changing your teaching takes more than a recipe — from chronicle.com by Beckie Supiano
Professors have been urged to adopt more effective practices. Why are their results so mixed?

Excerpts:

When the researchers asked their interview subjects how they first learned about peer instruction, many more cited informal discussions with colleagues than cited more formal channels like workshops. Even fewer pointed to a book or an article.

So even when there’s a really well-developed recipe, professors aren’t necessarily reading it.

In higher ed, teaching is often seen as something anyone who knows the content can automatically do. But the evidence suggests instead that teaching is an intellectual exercise that adds to subject-matter expertise.

This teaching-specific math knowledge, the researchers note, could be acquired in teacher preparation or professional development; however, it’s usually created on the job.

“Now, I’m much more apt to help them develop a deeper understanding of how people learn from a neuroscientific and cognitive-psychology perspective, and have them develop a model for how students learn.”

Erika Offerdahl, associate vp and director of the Transformational Change Initiative at WSU

From DSC:
I love this part too:

There’s a role here, too, for education researchers. Not every evidence-based teaching practice has been broken into its critical components in the literature,

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
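A minimal sketch of the idea described above (a toy 1-D nearest-centroid “model” in plain Python, purely hypothetical; real poisoning attacks target far larger datasets and models): flipping the label on even a single training point shifts the learned class centroids, which silently changes the model’s decision on a borderline input.

```python
# Toy demonstration of data poisoning via label flipping.
# A 1-D nearest-centroid classifier: illustrative only, not real attack tooling.

def train_centroids(data):
    """Compute the mean feature value for each label in (value, label) pairs."""
    sums, counts = {}, {}
    for x, label in data:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    """Classify x by whichever class centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training data: class "low" clusters near 1-2, class "high" near 8-10.
clean = [(1.0, "low"), (1.5, "low"), (2.0, "low"),
         (8.0, "high"), (9.0, "high"), (9.5, "high")]

# Poisoned copy: an attacker secretly flips one "high" point's label to "low".
poisoned = clean[:3] + [(8.0, "low")] + clean[4:]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)

# A borderline input now gets a different answer from the poisoned model.
x = 5.5
print(predict(clean_model, x), predict(poisoned_model, x))  # prints: high low
```

The flip is hard to spot from the outside: the poisoned model still classifies points deep inside each cluster correctly, and only the boundary has quietly moved.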

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools depends on what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or we will develop this new technology because we can develop this new technology. Who gives a rat’s tail about the ramifications of it?

 

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing — from chronicle.com by Ben Chrisinger (behind a paywall)
And there’s little agreement on the rules that should govern it.

Excerpt:

While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.

 

Does ‘Flipped Learning’ Work? A New Analysis Dives Into the Research — from edsurge.com by Jeffrey R. Young

Excerpt:

The researchers do think that flipped learning has merit — if it is done carefully. They end their paper by presenting a model of flipped learning they refer to as “fail, flip, fix and feed,” which they say applies the most effective aspects they learned from their analysis. Basically they argue that students should be challenged with a problem even if they can’t properly solve it because they haven’t learned the material yet, and then the failure to solve it will motivate them to watch the lecture looking for the necessary information. Then classroom time can be used to fix student misconceptions, with a mix of a short lecture and student activities. Finally, instructors assess the student work and give feedback.

From DSC:
Interesting. I think their “fail, flip, fix and feed” method makes sense.

Also, I do think there’s merit in presenting information ahead of time so that students can *control the pace* of listening/processing/absorbing what’s being relayed. (This is especially helpful when the language of instruction isn’t the learner’s native language.) If flipped learning had been a part of my college experience, it would have freed me from just being a scribe. I could have tried to actually process the information while in class.

 

Why Studying Is So Hard, and What Teachers Can Do to Help — from edutopia.org by Laura McKenna
Beginning in the upper elementary grades, research-backed study skills should be woven into the curriculum, argues psychology professor Daniel Willingham in a new book.

Excerpt:

The additional context for Willingham’s new book is that students often don’t know the best methods to study for tests, master complex texts, or take productive notes, and it’s difficult to explain to them why they should take a different tack. In the book, Willingham debunks popular myths about the best study strategies, explains why they don’t work, and recommends effective strategies that are based on the latest research in cognitive science.

I recently spoke to him about why listening to lectures isn’t like watching a movie, how our self-monitoring of learning is often flawed and self-serving, and when it’s too late to start teaching students good study skills.

 

Take Your Words From Lecture to Page — from chronicle.com by Rachel Toor
What compelling lecturers do, and how their techniques can translate to good writing.

Excerpts:

Thing is, many of the moves that the best lecturers make on the stage can translate to the page and help you draw in readers. That is especially important in writing textbooks and other work for general readers. If you can bring the parts of yourself that work in the classroom to the prose, you will delight readers as much as you do your students.

Narrative can be key. Data and research aren’t enough in either the classroom or on the page. People like to be told stories. If you want to be persuasive in both realms, use narrative to make arguments. Don’t forget that much scholarly work is really a quest. What journey can you take a reader on?

It’s a performance on the page, too. A great lecture is a performance. So is great writing.

Raise real questions the reader will want answers to. 

 
© 2024 | Daniel Christian