Microcredentials Can Make a Huge Difference in Higher Education — from newthinking.com by Shannon Riggs
The Ecampus executive director of academic programs and learning innovation at Oregon State University believes that shorter-form, low-cost courses can open up colleges to more people.

That so much student loan debt exists is a clear signal that higher education needs to innovate to reduce costs, increase access and improve students’ return on investment. Microcredentials are one way we can do this.


As the Supreme Court weighs Biden’s student loan forgiveness, education debt swells — from cnbc.com by Jessica Dickler

KEY POINTS

  • As the Supreme Court weighs President Joe Biden’s student loan forgiveness plan, college tuition keeps climbing.
  • This year’s incoming freshman class can expect to borrow as much as $37,000 to help cover the cost of a bachelor’s degree, according to a recent report.

College is only getting more expensive. Tuition and fees plus room and board at four-year, in-state public colleges rose more than 2% to $23,250, on average, in the 2022-23 academic year; at four-year private colleges, it increased by more than 3% to $53,430, according to the College Board, which tracks trends in college pricing and student aid.

Many students now borrow to cover the tab, which has already propelled collective student loan debt in the U.S. past $1.7 trillion.


Access, Outcomes, and Value: Envisioning the Future of Higher Education — from milkeninstitute.org with Jeff Selingo, Gene Block, Jim Gash, Eric Gertler, and Nicole Hurd

Leaders of colleges and universities face unprecedented challenges today. Tuition has more than doubled over the past two decades as state and federal funding has decreased. Renewed debates about affirmative action and legacy admissions are roiling many campuses and confusing students about what it takes to get accepted. Growing numbers of administrators are matched by declining student enrollment, placing new financial pressures on institutions of higher learning. And many prospective students and their parents are losing faith in the ROI of such an expensive investment and asking the simple question: Is it all worth it? Join distinguished leaders from public and private institutions for this panel discussion on how they are navigating these shifts and how they see the future of higher education.

 


What the New ‘U.S. News’ Law-School Rankings Reveal About the Rankings Enterprise — from chronicle.com by Francie Diep

Excerpt (emphasis DSC):

This year’s lists also offer a hint of how widespread the rankings revolt was. Seventeen medical schools and 62 law schools — nearly a third of the law schools U.S. News ranks — didn’t turn in data to the magazine this year. (It’s not clear what nonparticipation rates have been in the past. Reached by email to request historical context, a spokesperson for U.S. News pointed to webpages that are no longer online. U.S. News ranked law and medical schools that didn’t cooperate this year by using publicly available and past survey data.)


Are today’s students getting ahead, getting by, or even falling behind when it comes to their post-college earnings? The Equitable Value Explorer, an innovative diagnostic tool that puts the commission’s work into action, is helping to answer that question.


Report: Many borrowers who could benefit from income-driven repayment don’t know about it — from highereddive.com by Laura Spitalniak

Dive Brief:

  • Student loan borrowers who would stand to benefit the most from income-driven repayment plans, or IDRs, are less likely to know about them, according to a new report from left-leaning think tank New America.
  • Around 2 in 5 student-debt holders earning less than $30,000 a year reported being unfamiliar with the repayment plans. Under a proposed plan from the U.S. Education Department, IDR minimum monthly loan payments for low-income earners, such as this group, could drop to $0.
  • Just under half of borrowers in default had not heard of IDRs, despite the plans offering a pathway to becoming current on their loans, the report said. Only one-third of currently defaulted borrowers had ever enrolled in IDR.
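To make the "$0 for low-income earners" point concrete, here is a minimal sketch of how an income-driven payment could work out for a borrower earning under $30,000. The parameters are assumptions for illustration only, drawn from how the proposed plan was widely reported (discretionary income defined as income above 225% of the federal poverty guideline, payments at 5% of that amount per year, and the 2023 single-person guideline of $14,580) — not the Education Department's final rule.

```python
# Illustrative sketch of an income-driven repayment (IDR) calculation.
# Assumptions (not the official rule): discretionary income is income
# above 225% of the federal poverty guideline, and the annual payment
# is 5% of discretionary income, split into monthly installments.

POVERTY_GUIDELINE_2023 = 14_580  # single-person household, 48 contiguous states

def monthly_idr_payment(income: float,
                        poverty_guideline: float = POVERTY_GUIDELINE_2023) -> float:
    """Return an illustrative monthly IDR payment for a given annual income."""
    protected_income = 2.25 * poverty_guideline       # 225% of the guideline
    discretionary = max(0.0, income - protected_income)
    return round(discretionary * 0.05 / 12, 2)        # 5% per year, paid monthly

# A borrower earning under $30,000 is below the protected threshold
# (about $32,805), so the calculated payment is $0.
print(monthly_idr_payment(30_000))  # 0.0
print(monthly_idr_payment(50_000))  # 71.65
```

Under these assumed parameters, everyone in the report's under-$30,000 group lands below the protected-income threshold, which is why the proposed minimum payment falls to $0 for them.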

Addendum on 5/16/23:

 

OPINION: Post pandemic, it’s time for a bold overhaul of U.S. public education, starting now — from hechingerreport.org by William Hite and Kirsten Baesler
Personalized learning can restore public faith and meet the diverse needs of our nation’s students

Excerpt:

Across all socioeconomic and racial groups, Americans want an education system that goes beyond college preparation and delivers practical skills for every learner, based on their own needs, goals and vision for the future.

We believe that this can be achieved by making the future of learning more personalized, focused on the needs of individual learners, with success measured by progress and proficiency instead of point-in-time test scores.

Change is hard, but we expect our students to take risks and fail every day. We should ask no less of ourselves.

 

Teaching: What You Can Learn From Students About ChatGPT — from chronicle.com by Beth McMurtrie

Excerpts (emphasis DSC):

Like a lot of you, I have been wondering how students are reacting to the rapid launch of generative AI tools. And I wanted to point you to creative ways in which professors and teaching experts have helped involve them in research and policymaking.

At Kalamazoo College, Autumn Hostetter, a psychology professor, and six of her students surveyed faculty members and students to determine whether they could detect an AI-written essay, and what they thought of the ethics of using various AI tools in writing. You can read their research paper here.

Next, participants were asked about a range of scenarios, such as using Grammarly, using AI to make an outline for a paper, using AI to write a section of a paper, looking up a concept on Google and copying it directly into a paper, and using AI to write an entire paper. As expected, commonly used tools like Grammarly were considered the most ethical, while writing a paper entirely with AI was considered the least. But researchers found variation in how people approached the in-between scenarios. Perhaps most interesting: Students and faculty members shared very similar views with each scenario.

 


Also relevant/see:

This Was Written By a Human: A Real Educator’s Thoughts on Teaching in the Age of ChatGPT — from er.educause.edu by Jered Borup
The well-founded concerns surrounding ChatGPT shouldn’t distract us from considering how it might be useful.


 

The above Tweet links to:

Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.



However, the letter has since received heavy backlash, as there appears to have been no verification of signatures. Yann LeCun of Meta denied signing the letter and said he completely disagreed with its premise. (source)


In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.


 

ChatGPT: Student insights are necessary to help universities plan for the future — from theconversation.com by Alpha Abebe and Fenella Amarasinghe

Excerpt:

In the race to get ahead of new technologies, are we forgetting about the perspectives of the most important stakeholders within our post-secondary institutions: the students?

Leaving students out of early discussions and decision-making processes is almost always a recipe for ill-fitting, ineffective and/or damaging approaches. The mantra “nothing for us without us” comes to mind here.

 


Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to AI Teachers and AI Learners as a tetrad.


 

Challenging ‘Bad’ Online Policies and Attitudes — from insidehighered.com by Susan D’Agostino
Academic and industry leaders spoke with conviction at the SXSW EDU conference this week about approaches that impede educational access to motivated, capable learners.

Excerpts:

“It’s driven by artificial intelligence,” Barnes said of IBM’s training and reskilling effort. “It’s a Netflix-like interface that pushes content. Or an employee can select content…

The leaders discussed the ways in which colleges, policymakers, and employers might work together to help more Americans find or advance in viable employment, while also addressing the workforce skills gap. But some “bad” policies and attitudes about online learning undermine their efforts to work together, expand access and deliver outcomes to motivated, capable learners.

“Employers were saying, ‘We have job openings we can’t fill, and we want to work with the education system, but it is so unbelievably frustrating because they’re very rigid, and they don’t want to customize to our needs,’” Hansen said. These employers sought workforce training that could produce a pipeline of learners-turned-employees, and Hansen said they told him, “If you can do that, I’ll pay you.”

 

Fostering sustainable learning ecosystems — from linkedin.com by Patrick Blessinger

Excerpt (emphasis DSC):

Learning ecosystems
As today’s global knowledge society becomes increasingly interconnected and begins to morph into a global learning society, it is likely that formal, nonformal, and informal learning will become increasingly interconnected. For instance, there has been an explosion of new self-directed e-learning platforms such as Khan Academy, Open Courseware, and YouTube, among others, that help educate billions of people around the world.

A learning ecosystem includes all the elements that contribute to a learner’s overall learning experience. The components of a learning ecosystem are numerous, including people, technology platforms, knowledge bases, culture, governance, strategy, and other internal and external elements that have an impact on learning. Therefore, moving forward, it is crucial to integrate learning across formal, nonformal, and informal learning processes and activities in a more strategic way.

Learning ecosystems -- formal, informal, and nonformal sources of learning will become more tightly integrated in the future

 

Planning for AGI and beyond — from openai.com by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 

A quick and sobering guide to cloning yourself — from oneusefulthing.substack.com by Professor Ethan Mollick
It took me a few minutes to create a fake me giving a fake lecture.

Excerpt:

I think a lot of people do not realize how rapidly the multiple strands of generative AI (audio, text, images, and video) are advancing, and what that means for the future.

With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I’ve tried it myself, and the results are mind-blowing, even if they’re not completely convincing. Just a few months ago, this was impossible. Now, it’s a reality.

To start, you should probably watch the short video of Virtual Me and Real Me giving the same talk about entrepreneurship. Nothing about the Virtual Me part of the video is real, even the script was completely AI-generated.



From DSC:
Also, I wanted to post the resource below just because I think it’s an excellent question!

If ChatGPT Can Disrupt Google In 2023, What About Your Company? — from forbes.com by Glenn Gow

Excerpts:

Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company.

Professions that will be disrupted by generative AI include marketing, copywriting, illustration and design, sales, customer support, software coding, video editing, film-making, 3D modeling, architecture, engineering, gaming, music production, legal contracts, and even scientific research. Software applications will soon emerge that will make it easy and intuitive for anyone to use generative AI for those fields and more.


 

Virtual Exam Case Primes Privacy Fight Over College Room Scans — from news.bloomberglaw.com by Skye Witley

  • Cleveland State case centers on remote proctoring software
  • Fourth Amendment protections in question before Sixth Circuit

Excerpt:

A legal dispute over a university’s use of exam proctoring software that allegedly scanned students’ rooms is set to shape the scope of Fourth Amendment and privacy protections for online college tests.

Cleveland State University last week asked a federal appeals court in Cincinnati to review a district court finding that the “room scans” were unconstitutional searches. The case could influence how other students litigate their privacy rights and change how universities virtually monitor their students during exams, attorneys said.

 

ChatGPT can’t be credited as an author, says world’s largest academic publisher — from theverge.com by James Vincent; with thanks to Robert Gibson on LinkedIn for the resource
But Springer Nature, which publishes thousands of scientific journals, says it has no problem with AI being used to help write research — as long as its use is properly disclosed.

Excerpt:

Springer Nature, the world’s largest academic publisher, has clarified its policies on the use of AI writing tools in scientific papers. The company announced this week that software like ChatGPT can’t be credited as an author in papers published in its thousands of journals. However, Springer says it has no problem with scientists using AI to help write or generate ideas for research, as long as this contribution is properly disclosed by the authors.


On somewhat-related notes:

Uplevel your prompt craft in ChatGPT with the CREATE framework — from edte.ch by Tom Barrett

Excerpt:

The acronym “CREATE” is a helpful guide for crafting high-quality prompts for AI tools. Each letter represents an important step in the process.

The first four letters (C, R, E, A) are all part of prompt writing, while the final two (T, E) form a cycle of reviewing and editing your prompts.

Let’s look at each in more detail, with some examples from ChatGPT to help.
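As a rough illustration of how a structured framework like CREATE can be put to work, here is a small sketch that assembles a prompt from labeled sections. The section names used here (Character, Request, Examples, Adjustments, Type of output, Extras) are an assumed expansion of the acronym for illustration, since the excerpt above doesn't spell the letters out.

```python
# Sketch: assembling a ChatGPT prompt from CREATE-style sections.
# The section names are assumed expansions of the acronym, not Tom
# Barrett's verbatim definitions.

CREATE_SECTIONS = [
    "Character",       # C: the role the model should adopt
    "Request",         # R: the task you want done
    "Examples",        # E: samples of the desired output
    "Adjustments",     # A: constraints and refinements
    "Type of output",  # T: format, length, structure
    "Extras",          # E: any remaining context
]

def build_prompt(parts: dict) -> str:
    """Join the provided CREATE sections into a single labeled prompt."""
    lines = []
    for section in CREATE_SECTIONS:
        if section in parts:
            lines.append(f"{section}: {parts[section]}")
    return "\n".join(lines)

prompt = build_prompt({
    "Character": "You are an experienced instructional designer.",
    "Request": "Draft three quiz questions on photosynthesis.",
    "Type of output": "A numbered list, one sentence per question.",
})
print(prompt)
```

The final T and E steps then operate as the review-and-edit loop described above: you inspect the model's response and revise the relevant sections rather than starting the prompt from scratch.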

BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content — from wsj.com by Alexandra Bruell (behind paywall)
CEO Jonah Peretti intends for artificial intelligence to play a larger role in the company this year


 

ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation — from wsj.com by Berber Jin and Miles Kruppa
Tender offer at that valuation would make OpenAI one of the most valuable U.S. startups

Here’s how Microsoft could use ChatGPT — from The Algorithm by Melissa Heikkilä

Excerpt (emphasis DSC):

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.

And speaking of Microsoft and AI, also see:

I have maintained for several years, including a book ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak but it is interesting to ask who the winners are likely to be.

Donald Clark

The Expanding Dark Forest and Generative AI — from maggieappleton.com by Maggie Appleton
Proving you’re a human on a web flooded with generative AI content

Assumed audience:

People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.

That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.

 

DeepMind CEO Demis Hassabis Urges Caution on AI — from time.com by Billy Perrigo

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.

Demis Hassabis 

Excerpt (emphasis DSC):

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.

AI-assisted plagiarism? ChatGPT bot says it has an answer for that — from theguardian.com by Alex Hern
Silicon Valley firm insists its new text generator, which writes human-sounding essays, can overcome fears over cheating

Excerpt:

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.

Schools Shouldn’t Ban Access to ChatGPT — from time.com by Joanne Lipman and Rebecca Distler

Excerpt (emphasis DSC):

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz
Are fears about generative AI warranted?

And see Joshua Kim’s A Friendly Attempt to Balance Steve Mintz’s Piece on Higher Ed Hard Truths at insidehighered.com | Comparing the health care and higher ed systems.

 

What Leaders Should Know About Emerging Technologies — from forbes.com by Benjamin Laker

Excerpt (emphasis DSC):

The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”

I asked ChatGPT to write my cover letters. 2 hiring managers said they would have given me an interview but the letters lacked personality. — from businessinsider.com by Beatrice Nolan

Key points:

  • An updated version of the AI chatbot ChatGPT was recently released to the public.
  • I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
  • The managers said they would’ve given me a call but that the letters lacked personality.




 
© 2025 | Daniel Christian