Exploring generative AI and the implications for universities — from universityworldnews.com

Excerpt:

This is part of a weekly University World News special report series on ‘AI and higher education’. The focus is on how universities are engaging with ChatGPT and other generative artificial intelligence tools. The articles from academics and our journalists around the world are exploring developments and university work in AI that have implications for higher education institutions and systems, students and staff, and teaching, learning and research.

AI and higher education -- a report from University World News

 

Teaching: A University-Wide Language for Learning — from chronicle.com by Beckie Supiano

Excerpt (emphasis DSC):

Last week, as I was interviewing Shaun Vecera about a new initiative he directs at the University of Iowa, he made a comment that stopped me in my tracks. The initiative, Learning at Iowa, is meant to create a common vocabulary, based on cognitive science, to support learning across the university. It focuses on “the three M’s for effective learning”: mind-set, metacognition, and memory.

“Not because those are the wrong ways of talking about that. But when you talk about learning, I think you can easily see how these skills transfer across not just courses, but also transfer from the university into a career.”


From DSC:
This reminds me of what I was trying to get at here — i.e., let’s provide folks with more information on learning how to learn.



Also relevant/see:

Changing your teaching takes more than a recipe — from chronicle.com by Beckie Supiano
Professors have been urged to adopt more effective practices. Why are their results so mixed?

Excerpts:

When the researchers asked their interview subjects how they first learned about peer instruction, many more cited informal discussions with colleagues than cited more formal channels like workshops. Even fewer pointed to a book or an article.

So even when there’s a really well-developed recipe, professors aren’t necessarily reading it.

In higher ed, teaching is often seen as something anyone who knows the content can automatically do. But the evidence suggests instead that teaching is an intellectual exercise that adds to subject-matter expertise.

This teaching-specific math knowledge, the researchers note, could be acquired in teacher preparation or professional development; however, it’s usually created on the job.

“Now, I’m much more apt to help them develop a deeper understanding of how people learn from a neuroscientific and cognitive-psychology perspective, and have them develop a model for how students learn.”

Erika Offerdahl, associate VP and director of the Transformational Change Initiative at WSU

From DSC:
I love this part too:

There’s a role here, too, for education researchers. Not every evidence-based teaching practice has been broken into its critical components in the literature…

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
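The mechanics of this kind of label-flipping attack can be sketched with a toy example. Everything below is made up for illustration: a tiny nearest-centroid classifier stands in for the far larger deep-learning pipelines that real attacks target, but the principle is the same — quietly flipping even one training label changes what the model decides later.

```python
# Toy illustration of data poisoning via label flipping.
# A nearest-centroid classifier stands in for real deep-learning
# pipelines; all points and labels here are invented for illustration.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    """data: list of ((x, y), label) pairs with labels 0 or 1."""
    by_label = {0: [], 1: []}
    for point, label in data:
        by_label[label].append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

clean = [((0.0, 0.0), 0), ((0.2, 0.2), 0),
         ((2.0, 2.0), 1), ((1.8, 2.0), 1)]
query = (0.8, 0.8)
print(predict(train(clean), query))      # -> 0

# The attacker secretly flips the label of a single training point.
poisoned = [((0.0, 0.0), 1), ((0.2, 0.2), 0),
            ((2.0, 2.0), 1), ((1.8, 2.0), 1)]
print(predict(train(poisoned), query))   # -> 1: same query, wrong class
```

Note that nothing about the query changed; only the training data did, which is exactly why the article describes such attacks as hard to track.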

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools depends on what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or we will develop this new technology because we can develop this new technology. Who gives a rat’s tail about the ramifications of it?

 

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing — from chronicle.com by Ben Chrisinger (behind a paywall)
And there’s little agreement on the rules that should govern it.

Excerpt:

While we’ve been busy worrying about what ChatGPT could mean for students, we haven’t devoted nearly as much attention to what it could mean for academics themselves. And it could mean a lot. Critically, academics disagree on exactly how AI can and should be used. And with the rapidly improving technology at our doorstep, we have little time to deliberate.

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture.

 

Does ‘Flipped Learning’ Work? A New Analysis Dives Into the Research — from edsurge.com by Jeffrey R. Young

Excerpt:

The researchers do think that flipped learning has merit — if it is done carefully. They end their paper by presenting a model of flipped learning they refer to as “fail, flip, fix and feed,” which they say applies the most effective aspects they learned from their analysis. Basically they argue that students should be challenged with a problem even if they can’t properly solve it because they haven’t learned the material yet, and then the failure to solve it will motivate them to watch the lecture looking for the necessary information. Then classroom time can be used to fix student misconceptions, with a mix of a short lecture and student activities. Finally, instructors assess the student work and give feedback.

From DSC:
Interesting. I think their “fail, flip, fix and feed” method makes sense.

Also, I do think there’s merit in presenting information ahead of time so that students can *control the pace* of listening to, processing, and absorbing what’s being relayed. (This is especially helpful when the language of instruction isn’t a student’s native language.) If flipped learning had been part of my college experience, it would have freed me from being just a scribe. I could have actually processed the information while in class.

 

Why Studying Is So Hard, and What Teachers Can Do to Help — from edutopia.org by Laura McKenna
Beginning in the upper elementary grades, research-backed study skills should be woven into the curriculum, argues psychology professor Daniel Willingham in a new book.

Excerpt:

The additional context for Willingham’s new book is that students often don’t know the best methods to study for tests, master complex texts, or take productive notes, and it’s difficult to explain to them why they should take a different tack. In the book, Willingham debunks popular myths about the best study strategies, explains why they don’t work, and recommends effective strategies that are based on the latest research in cognitive science.

I recently spoke to him about why listening to lectures isn’t like watching a movie, how our self-monitoring of learning is often flawed and self-serving, and when it’s too late to start teaching students good study skills.

 

Take Your Words From Lecture to Page — from chronicle.com by Rachel Toor
What compelling lecturers do, and how their techniques can translate to good writing.

Excerpts:

Thing is, many of the moves that the best lecturers make on the stage can translate to the page and help you draw in readers. That is especially important in writing textbooks and other work for general readers. If you can bring the parts of yourself that work in the classroom to the prose, you will delight readers as much as you do your students.

Narrative can be key. Data and research aren’t enough, either in the classroom or on the page. People like to be told stories. If you want to be persuasive in both realms, use narrative to make arguments. Don’t forget that much scholarly work is really a quest. What journey can you take a reader on?

It’s a performance on the page, too. A great lecture is a performance. So is great writing.

Raise real questions the reader will want answers to. 

 

ChatGPT can’t be credited as an author, says world’s largest academic publisher — from theverge.com by James Vincent; with thanks to Robert Gibson on LinkedIn for the resource
But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed.

Excerpt:

Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.


On somewhat-related notes:

Uplevel your prompt craft in ChatGPT with the CREATE framework — from edte.ch by Tom Barrett

Excerpt:

The acronym “CREATE” is a helpful guide for crafting high-quality prompts for AI tools. Each letter represents an important step in the process.

The first four letters, CREA, cover writing the prompt, while the final two, TE, form a cycle of reviewing and editing your prompts.

Let’s look at each in more detail, with some examples from ChatGPT to help.

BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content — from wsj.com by Alexandra Bruell (behind paywall)
CEO Jonah Peretti intends for artificial intelligence to play a larger role in the company this year


 

Learn Smarter Podcast — from learnsmarterpodcast.com

Learn Smarter Podcast educates, encourages and expands understanding for parents of students with different learning profiles through growing awareness of educational therapy, individualized strategies, community support, coaching, and educational content.


Somewhat along these lines…for some other resources related to the science of learning, see cogx.info’s research database:

Scientific Literature Supporting COGx Programs
COGx programs involve translation of research from over 500 scientific sources. The scientific literature below is a subset of the literature we have used and organized by subject area to facilitate access. In addition, we have worked directly with some of the authors of the scientific literature to help us translate and co-create our programs. Many of the scientific papers cited below were written by COGx Academic Partners.

Topics include:

    • Information Processing
    • Executive Function
    • Long-Term Memory
    • Metacognition
    • Emotions & Engagement
    • Cognitive Diversity

Also see:

USEFUL LEARNING WITH EFRAT FURST (S3E10) — from edcircuit.com with Efrat Furst, Tom Sherrington, and Emma Turner

Bringing the science of learning to teachers

 


 

What factors help active learning classrooms succeed? — from rtalbert.org by Robert Talbert

Excerpt:

The idea that the space in which you do something affects the thing you do is the basic premise behind active learning classrooms (ALCs).

The biggest message I get from this study is that in order to have success with active learning classrooms, you can’t just build them — they have to be introduced as part of an ecosystem that touches almost all parts of the daily function of a university: faculty teaching, faculty development and support, facilities, and the Registrar’s Office to name a few. Without that ecosystem before you build an ALC, it seems hard to have success with students after it’s built. You’re more likely to have an expensive showcase that looks good but ultimately does not fulfill its main purpose: Promoting and amplifying active learning, and moving the culture of a campus toward active engagement in the classroom.

From DSC:
Thank you, Robert, for your article/posting here! And thank you for being one of the few faculty members who:

  • Regularly share information out on LinkedIn, Twitter, and your blog (something that is all too rare for faculty members throughout higher education)
  • Took a sabbatical to work at a company that designs and develops numerous options for implementing active learning setups across higher education, K-12 education, and the corporate world. You are contributing your skills to industry, learning from it, and then bringing those lessons back into higher education.

This presupposes something controversial: That the institution will take a stand on the issue that there is a preferred way to teach, namely active learning, and that the institution will be moving toward making active learning the default pedagogy at the institution. Putting this stake in the ground, and then investing not only in facilities but in professional development and faculty incentives to make it happen, again calls for vigorous, sustained leadership — at the top, and especially by the teaching/learning center director.

Robert Talbert


 

From DSC:
A few items re: ChatGPT — with some items pro-chat and other items against the use of ChatGPT (or at least to limit its use).


How About We Put Learning at the Center? — from insidehighered.com by John Warner
The ongoing freak-out about ChatGPT sent me back to considering the fundamentals.

Excerpt:

So, when people express concern that students will use ChatGPT to complete their assignments, I understand the concern, but what I don’t understand is why this concern is so often channeled into discussions about how to police student behavior, rather than using this as an opportunity to examine the kind of work we actually ask students (and faculty) to do around learning.

If ChatGPT can do the things we ask students to do in order to demonstrate learning, it seems possible to me that those things should’ve been questioned a long time ago. It’s why I continue to believe this technology is an opportunity for reinvention, precisely because it is a threat to the status quo.

Top AI conference bans use of ChatGPT and AI language tools to write academic papers — from theverge.com by James Vincent; with thanks to Anna Mills for this resource
AI tools can be used to ‘edit’ and ‘polish’ authors’ work, say the conference organizers, but text ‘produced entirely’ by AI is not allowed. This raises the question: where do you draw the line between editing and writing?

Excerpt:

The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference’s organizers responded by publishing a longer statement explaining their thinking. (The ICML responded to requests from The Verge for comment by directing us to this same statement.)

How to… use AI to teach some of the hardest skills — from oneusefulthing.substack.com by Ethan Mollick
When errors, inaccuracies, and inconsistencies are actually very useful

Excerpt:

Instead, I want to discuss the opportunity provided by AI, because it can help us teach in new ways. The very things that make AI scary for educators — its tendency to make up facts, its lack of nuance, and its ability to make excellent student essays — can be used to make education better.

This isn’t for some future theoretical version of AI. You can create assignments, right now, using ChatGPT, that will help stretch students in new ways. We wrote a paper with the instructions. You can read it here, but I also want to summarize our suggestions. These are obviously not the only ways to use AI to educate, but they solve some of the hardest problems in education, and you can start experimenting with them right now.

NYC education department blocks ChatGPT on school devices, networks — from ny.chalkbeat.org by Michael Elsen-Rooney

Excerpt:

New York City students and teachers can no longer access ChatGPT — the new artificial intelligence-powered chatbot that generates stunningly cogent and lifelike writing — on education department devices or internet networks, agency officials confirmed Tuesday.

Teachers v ChatGPT: Schools face new challenge in fight against plagiarism — from straitstimes.com by Osmond Chia; with thanks to Stephen Downes for this resource

Excerpt:

SINGAPORE – Teachers in Singapore say they will likely have to move from assignments requiring regurgitation to those that require greater critical thinking, to stay ahead in the fight against plagiarism.

This comes on the back of the rise of ChatGPT, an intelligent chatbot that is able to spin essays and solve mathematical equations in seconds.

ChatGPT Is Not Ready to Teach Geometry (Yet) — from educationnext.org by Paul T. von Hippel
The viral chatbot is often wrong, but never in doubt. Educators need to tread carefully.

Excerpt:

Can ChatGPT provide feedback and answer questions about math in a more tailored and natural way? The answer, for the time being, is no. Although ChatGPT can talk about math superficially, it doesn’t “understand” math with real depth. It cannot correct mathematical misconceptions; it often introduces misconceptions of its own; and it sometimes makes inexplicable mathematical errors that a basic spreadsheet or hand calculator wouldn’t make.

Here, I’ll show you.


Addendum on 1/9/23:

9 ways ChatGPT saves me hours of work every day, and why you’ll never outcompete those who use AI effectively. — from linkedin.com by Santiago Valdarrama

A list for those who write code:

  1. Explaining code…
  2. Improve existing code…
  3. Rewriting code using the correct style…
  4. Rewriting code using idiomatic constructs…
  5. Simplifying code…
  6. Writing test cases…
  7. Exploring alternatives…
  8. Writing documentation…
  9. Tracking down bugs…
 

From DSC:
For those seeking a doctorate in education: Here’s a potential topic for your doctoral thesis.

Homelessness is a huge issue. It’s a complex issue, with many layers, variables, and causes. I once heard Oprah Winfrey say that we are all one to two steps away from being homeless, and I agree with that.

But as I was passing a homeless person asking for money on a highway exit ramp the other day, I wondered what role, if any, education played (or didn’t play) in people’s lives. Is there any common denominator or set of experiences with their education that we can look at? If so, can we use design thinking to get at some of those root issues? For example:

  • Was school easy for them? Hard for them?
  • A source of joy for them? A source of pain for them?
  • Were they engaged or disengaged?
  • Were they able to pursue their interests and passions?

It might turn out that education had little to do with things. It could have been health issues, broken relationships, systemic issues, the loss of a job, addictions, intergenerational “chains,” or many other things. 

But it’s worth someone researching this. Such studies and interviews could turn up some helpful directions and steps to take for our future.

#homelessness #society #education #passions #participation
#research #educationreform #K12 #lifelonglearning

 

10 Must Read Books for Learning Designers — from linkedin.com by Amit Garg

Excerpt:

From the 45+ #books that I’ve read in the last 2 years, here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment

Speaking of recommended books (but from a more technical perspective this time), also see:

10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team)
Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders


 

Can a Group of MIT Professors Turn a White Paper Into a New Kind of College? — from edsurge.com by Jeffrey R. Young

Excerpt:

A group of professors at Massachusetts Institute of Technology dropped a provocative white paper in September that proposed a new kind of college that would address some of the growing public skepticism of higher education. This week, they took the next step toward bringing their vision from idea to reality.

That next step was holding a virtual forum that brought together a who’s who of college innovation leaders, including presidents of experimental colleges, professors known for novel teaching practices and critical observers of the higher education space.

The MIT professors who authored the white paper tried to make clear that even though they’re from an elite university, they do not have all the answers. Their white paper takes pains to describe itself as a draft framework and to invite input from players across the education ecosystem so they can revise and improve the plan.

IDEAS FOR DESIGNING An Affordable New Educational Institution


The goal of this document is simply to propose some principles and ideas that we hope will lay the groundwork for the future, for an education that will be both more affordable and more effective.

Promotions and titles will be much more closely tied to educational performance—quality, commitment, outcomes, and innovation—than to research outcomes. 

 

These are the most important AI trends, according to top AI experts — from nexxworks.com
Somewhat in the shadow of the (often) overhyped metaverse and Web3 paradigms, AI seems to be developing at great speed. That’s why we asked a group of top AI experts in our network to describe what they think are the most important trends, evolutions and areas of interest of the moment in that domain.

Excerpt:

All of them have different backgrounds and areas of expertise, but some patterns still emerged in their stories, several of them mentioning ethics, the impact on the climate (both positively and negatively), the danger of overhyping, the need for transparency and explainability, interdisciplinary collaborations, robots and the many challenges that still need to be overcome.

But let’s see what they have to say, shall we?

Also relevant/see:

AI is revolutionizing every field, and science is no exception — from dataconomy.com by Kerem Gülen

Table of Contents

  • Artificial intelligence in science
    • Artificial intelligence in science: Biology
    • Artificial intelligence in science: Physics
    • Artificial intelligence in science: Chemistry
  • AI in science and research
    • How is AI used in scientific research?
      • Protein structures can be predicted using genetic data
      • Recognizing how climate change affects cities and regions
      • Analyzing astronomical data
  • AI in science examples
    • Interpreting social history with archival data
    • Using satellite images to aid in conservation
    • Understanding complex organic chemistry
  • Conclusion

Also relevant/see:

  • How ‘Responsible AI’ Is Ethically Shaping Our Future — from learningsolutionsmag.com by Markus Bernhardt
    Excerpt:
    The PwC 2022 AI Business Survey finds that “AI success is becoming the rule, not the exception,” and, according to PwC US, published in the 2021 AI Predictions & 2021 Responsible AI Insights Report, “Responsible AI is the leading priority among industry leaders for AI applications in 2021, with emphasis on improving privacy, explainability, bias detection, and governance.”
  • Why you need an AI ethics committee — from enterprisersproject.com by Reid Blackman (requires providing email address to get the article)
 
© 2024 | Daniel Christian