The 2025 Changing Landscape of Online Education (CHLOE) 10 Report — from qualitymatters.org; emphasis below from DSC

Notable findings from the 73-page report include: 

  • Online Interest Surges Across Student Populations: …
  • Institutional Preparedness Falters Amid Rising Demand: Despite accelerating demand, institutional readiness has stagnated—or regressed—in key areas.
  • The Online Education Marketplace Is Increasingly Competitive: …
  • Alternative Credentials Take Center Stage: …
  • AI Integration Lacks Strategic Coordination: …

Just 28% of faculty are considered fully prepared for online course design, and 45% for teaching. Alarmingly, only 28% of institutions report having fully developed academic continuity plans for future emergency pivots to online.


Also relevant, see:


Great Expectations, Fragile Foundations — from onedtech.philhillaa.com by Glenda Morgan
Lessons about growth from the CHLOE & BOnES reports

Cultural resistance remains strong. Many COLOs [Chief Online Learning Officers] say faculty and deans still believe in-person learning is “just better,” creating headwinds even for modest online growth. As one respondent at a four-year institution with a large online presence put it:

Supportive departments [that] see the value in online may have very different levels of responsiveness compared to academic departments [that] are begrudgingly online. There is definitely a growing belief that students “should” be on-ground and are only choosing online because it’s easy/convenient. Never mind the very real and growing population of nontraditional learners who can only take online classes, and the very real and growing population of traditional-aged learners who prefer online classes; many faculty/deans take a paternalistic, “we know what’s best” approach.


Ultimately, what we need is not just more ambition but better ambition. Ambition rooted in a realistic understanding of institutional capacity, a shared strategic vision, investments in policy and infrastructure, and a culture that supports online learning as a core part of the academic mission, not an auxiliary one. It’s time we talked about what it really takes to grow online learning, and where ambition needs to be matched by structure.

From DSC:
Yup. Culture is at the breakfast table again…boy, those strategies taste good.

I’d like to take some of this report — like the graphic below — and share it with former faculty members and with some of the leadership from a couple of my past jobs. They strongly disagreed with us when we tried to advocate for the development of online-based learning/programs at our organizations…but we were right. We were right all along. And we were LEADING all along. No doubt about it — even if the leadership at the time said that we weren’t leading.

The cultures of those organizations hurt us at the time. But our cultivating work eventually led to the development of online programs — though unfortunately, because our groups had been disbanded, those organizations had to outsource the programs to OPMs (online program management companies).


Arizona State University — with its dramatic growth in online-based enrollments.


Thomson Reuters CEO: Legal Profession Faces “Biggest Disruption in Its History” from AI — from lawnext.com by Bob Ambrogi

Thomson Reuters President and CEO Steve Hasker believes the legal profession is experiencing “the biggest disruption … in its history” due to generative and agentic artificial intelligence, fundamentally rewriting how legal work products are created for the first time in more than 300 years.

Speaking to legal technology reporters during ILTACON, the International Legal Technology Association’s annual conference, Hasker outlined his company’s ambitious goal to become “the most innovative company” in the legal tech sector while navigating what he described as unprecedented technological change affecting a profession that has remained largely unchanged since its origins in London tea houses centuries ago.


Legal tech hackathon challenges students to rethink access to justice — from the Centre for Innovation and Entrepreneurship, Auckland Law School
In a 24-hour sprint, student teams designed innovative tools to make legal and social support more accessible.

The winning team comprised students of computer science, law, psychology and physics. They developed a privacy-first legal assistant powered by AI that helps people understand their legal rights without needing to navigate dense legal language.


Teaching How To ‘Think Like a Lawyer’ Revisited — from abovethelaw.com by Stephen Embry
GenAI gives the concept of training law students to think like a lawyer a whole new meaning.

Law Schools
These insights have particular urgency for legal education. Indeed, most of Cowen’s criticisms and suggested changes need to be front and center for law school leaders. It’s naïve to think that law students and lawyers aren’t going to use GenAI tools in virtually every aspect of their professional and personal lives. Rather than avoiding the subject or, worse yet, trying to stop the use of these tools, law schools should make GenAI tools a fundamental part of research, writing and drafting training.

They need to focus not on memorization but on the critical thinking skills that beginning lawyers used to acquire in the guild-style, on-the-job training system. As I discussed, that training came from repetitive and often tedious work that developed experienced lawyers who could recognize patterns and solutions based on their exposure to similar situations. But much of that repetitive and tedious work may go away in a GenAI world.

The Role of Adjunct Professors
But to do this, law schools need to better partner with actual practicing lawyers who can serve as adjunct professors. Law schools need to do away with the notion that adjuncts are second-class teachers.


It’s a New Dawn In Legal Tech: From Woodstock to ILTACON (And Beyond) — from lawnext.com by Bob Ambrogi

As someone who has covered legal tech for 30 years, I cannot remember there ever being a time such as this, when the energy and excitement are raging, driven by generative AI and a new era of innovation and new ideas of the possible.

But this year was different. Wandering the exhibit hall, getting product briefings from vendors, talking with attendees, it was impossible to ignore the fundamental shift happening in legal technology. Gen AI isn’t just creating new products – it is spawning entirely new categories of products that truly are reshaping how legal work gets done.

Agentic AI is the buzzword of 2025 and agentic systems were everywhere at ILTACON, promising to streamline workflows across all areas of legal practice. But, perhaps more importantly, these tools are also beginning to address the business side of running a law practice – from client intake and billing to marketing and practice management. The scope of transformation is now beginning to extend beyond the practice of law into the business of law.

Largely missing from this gathering were solo practitioners, small firm lawyers, legal aid organizations, and access-to-justice advocates – the very people who stand to benefit most from the democratizing potential of AI.

However, now more than ever, the innovations we are seeing in legal tech have the power to level the playing field, to give smaller practitioners access to tools and capabilities that were once prohibitively expensive. If these technologies remain priced for and marketed primarily to Big Law, we will have succeeded only in widening the justice gap rather than closing it.


How AI is Transforming Deposition Review: A LegalTech Q&A — from jdsupra.com

Thanks to breakthroughs in artificial intelligence – particularly in semantic search, multimodal models, and natural language processing – new legaltech solutions are emerging to streamline and accelerate deposition review. What once took hours or days of manual analysis can now be accomplished in minutes, with greater accuracy and efficiency than is possible with manual review.
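
As a rough illustration of the semantic-search piece of that pipeline, here is a toy sketch built on the open-source sentence-transformers library. The transcript snippets, the query, and the model choice are all stand-ins for illustration; commercial deposition tools layer much more (multimodal inputs, chunking, review workflows) on top of this idea.

```python
# Toy sketch: embed deposition passages, then rank them against a
# plain-language query by semantic similarity (not keyword overlap).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder

passages = [
    "Q: Where were you on the night of March 3rd? A: At the warehouse.",
    "Q: Did you sign the agreement? A: I don't recall signing anything.",
    "Q: Who maintained the safety logs? A: That was the site supervisor.",
]

query = "testimony about signing the contract"
scores = util.cos_sim(model.encode(query), model.encode(passages))[0]

# Print passages ranked by how semantically close they are to the query.
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.2f}  {passage}")
```

Note that the query says “contract” while the passage says “agreement”; matching on meaning rather than on shared keywords is what lets tools like these surface relevant testimony that an exact-match search would miss.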


From Skepticism to Trust: A Playbook for AI Change Management in Law Firms — from jdsupra.com by Scott Cohen

Historically, lawyers have been slow adopters of emerging technologies, and with good reason. Legal work is high stakes, deeply rooted in precedent, and built on individual judgment. AI, especially the new generation of agentic AI (systems that not only generate output but initiate tasks, make decisions, and operate semi-autonomously), represents a fundamental shift in how legal work gets done. This shift naturally leads to caution as it challenges long-held assumptions about lawyer workflows and several aspects of their role in the legal process.

The path forward is not to push harder or faster, but smarter. Firms need to take a structured approach that builds trust through transparency, context, training, and measurement of success. This article provides a five-part playbook for law firm leaders navigating AI change management, especially in environments where skepticism is high and reputational risk is even higher.


ILTACON 2025: The vendor briefings – Agents, ecosystems and the next stage of maturity — from legaltechnology.com by Caroline Hill

This year’s ILTACON in Washington was heavy on AI, but the conversation with vendors has shifted. Legal IT Insider’s briefings weren’t about potential use cases or speculative roadmaps. Instead, they focused on how AI is now being embedded into the tools lawyers use every day — and, crucially, how those tools are starting to talk to each other.

Taken together, they point to an inflection point, where agentic workflows, data integration, and open ecosystems define the agenda. But it’s important amidst the latest buzzwords to remember that agents are only as good as the tools they have to work with, and AI only as good as its underlying data. Also, as we talk about autonomous AI, end users are still struggling with cloud implementations and infrastructure challenges, and need vendors to be business partners that help them to make progress at speed.

Harvey’s roadmap is all about expanding its surface area — connecting to systems like iManage, LexisNexis and, more recently, publishing giant Wolters Kluwer — so that a lawyer can issue a single query and get synthesised, contextualised answers directly within their workflow. Weinberg said: “What we’re trying to do is get all of the surface area of all of the context that a lawyer needs to complete a task and we’re expanding the product surface so you can enter a search, search all resources, and apply that to the document automatically.”

The common thread: no one is talking about AI in isolation anymore. It’s about orchestration — pulling together multiple data sources into a workflow that actually reflects how lawyers practice. 
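
In outline, that orchestration pattern is simple: fan a single query out to several connected systems, then hand everything retrieved to a model for one synthesized answer. Here is a minimal sketch of that shape; the connectors and the synthesis step are hypothetical placeholders supplied by the caller, not any vendor’s actual API.

```python
# Minimal sketch of the "orchestration" pattern: one query, many sources,
# one synthesized answer. All connectors are hypothetical placeholders.
import asyncio
from typing import Awaitable, Callable

Connector = Callable[[str], Awaitable[str]]  # query -> retrieved context

async def orchestrate(
    query: str,
    connectors: dict[str, Connector],  # e.g. DMS, legal research, internal KM
    synthesize: Callable[[str, dict[str, str]], str],  # the LLM call
) -> str:
    # Fan the same query out to every connected system concurrently.
    results = await asyncio.gather(*(c(query) for c in connectors.values()))
    context = dict(zip(connectors.keys(), results))
    # One model call turns all of the retrieved context into a single answer.
    return synthesize(query, context)
```

The hard parts in practice are exactly what the vendors above are wrestling with: the quality of the underlying data, and getting the systems to talk to each other at all.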


5 Pitfalls Of Delaying Automation In High-Volume Litigation And Claims — from jdsupra.com

Why You Can’t Afford to Wait to Adopt AI Tools that have Plaintiffs Moving Faster than Ever
Just as photocopiers shifted law firm operations in the early 1970s and cloud computing transformed legal document management in the early 2000s, AI automation tools are altering the current legal landscape—enabling litigation teams to instantly structure unstructured data, zero in on key arguments in seconds, and save hundreds (if not thousands) of hours of manual work.


Your Firm’s AI Policy Probably Sucks: Why Law Firms Need Education, Not Rules — from jdsupra.com

The Floor, Not the Ceiling
Smart firms need to flip their entire approach. Instead of dictating which AI tools lawyers must use, leadership should set a floor for acceptable use and then get out of the way.

The floor is simple: no free versions for client work. Free tools are free because users are the product. Client data becomes training data. Confidentiality gets compromised. The firm loses any ability to audit or control how information flows. This isn’t about control; it’s about professional responsibility.

But setting the floor is only the first step. Firms must provide paid, enterprise versions of AI tools that lawyers actually want to use. Not some expensive legal tech platform that promises AI features but delivers complicated workflows. Real AI tools. The same ones lawyers are already using secretly, but with enterprise security, data protection, and proper access controls.

Education must be practical and continuous. Single training sessions don’t work. AI tools evolve weekly. New capabilities emerge constantly. Lawyers need ongoing support to experiment, learn, and share discoveries. This means regular workshops, internal forums for sharing prompts and techniques, and recognition for innovative uses.

The education investment pays off immediately. Lawyers who understand AI use it more effectively. They catch its mistakes. They know when to verify outputs. They develop specialized prompts for legal work. They become force multipliers, not just for themselves but for their entire teams.

 

What next for EDI? Protecting equality of opportunity in HE — from timeshighereducation.com by Laura Duckett
As equity, diversity and inclusion practices face mounting political and cultural challenges, this guide includes strategies from academics around the world on preserving fair access and opportunity for all

As many in this guide explain, hostility to efforts to create fairer, inclusive and diverse institutions of higher education runs a lot deeper than the latest US presidential agenda, and it cannot be dismissed as a momentary political spike. Yet the continued need for EDI (or DEI, as it is called in America) work to address historic and systemic injustice is clear from the data. In the US, Black, Hispanic, Latino, Native American and Pacific Islander people are under-represented in university student and staff populations. Students from these groups also have worse academic outcomes.

In the UK, only 1 per cent of professors are Black, women remain under-represented on the higher rungs of the academic ladder and the attainment gap between students from minoritised backgrounds and their white counterparts remains stubbornly evident across the higher education sector.

While not all EDI work has proved successful, significant progress has been made on widening participation in higher education and building more inclusive universities in which students and academics can thrive.

This guide shares lessons from academics on navigating increasingly choppy waters relating to EDI, addressing misconceptions about the work and its core ambitions, strategies for allyship, anti-racism and inclusion and how to champion EDI through your teaching and institutional culture.

 


Back to School in the AI Era: Why Students Are Rewriting Your Lesson Plan — from linkedin.com by Hailey Wilson

As a new academic year begins, many instructors, trainers, and program leaders are bracing for familiar challenges—keeping learners engaged, making complex material accessible, and preparing students for real-world application.

But there’s a quiet shift happening in classrooms and online courses everywhere.

This fall, it’s not the syllabus that’s guiding the learning experience—it’s the conversation between the learner and an AI tool.


From bootcamp to bust: How AI is upending the software development industry — from reuters.com by Anna Tong; via Paul Fain
Coding bootcamps have been a mainstay in Silicon Valley for more than a decade. Now, as AI eliminates the kind of entry-level roles for which they trained people, they’re disappearing.

Coding bootcamps have been a Silicon Valley mainstay for over a decade, offering an important pathway for non-traditional candidates to get six-figure engineering jobs. But coding bootcamp operators, students and investors tell Reuters that this path is rapidly disappearing, thanks in large part to AI.

“Coding bootcamps were already on their way out, but AI has been the nail in the coffin,” said Allison Baum Gates, a general partner at venture capital fund SemperVirens, who was an early employee at bootcamp pioneer General Assembly.

Gates said bootcamps were already in decline due to market saturation, evolving employer demand and market forces like growth in international hiring.

 

BREAKING: Google introduces Guided Learning — from aieducation.substack.com by Claire Zau
Some thoughts on what could make Google’s AI tutor stand out

Another major AI lab just launched “education mode.”

Google introduced Guided Learning in Gemini, transforming it into a personalized learning companion designed to help you move from quick answers to real understanding.

Instead of immediately spitting out solutions, it:

  • Asks probing, open-ended questions
  • Walks learners through step-by-step reasoning
  • Adapts explanations to the learner’s level
  • Uses visuals, videos, diagrams, and quizzes to reinforce concepts

This Socratic-style tutor rollout follows closely behind similar announcements like OpenAI’s Study Mode (last week) and Anthropic’s Claude for Education (April 2025).


How Sci-Fi Taught Me to Embrace AI in My Classroom — from edsurge.com by Dan Clark

I’m not too naive to understand that, no matter how we present it, some students will always be tempted by “the dark side” of AI. What I also believe is that the future of AI in education is not decided. It will be decided by how we, as educators, embrace or demonize it in our classrooms.

My argument is that setting guidelines and talking to our students honestly about the pitfalls and amazing benefits that AI offers us as researchers and learners will define it for the coming generations.

Can AI be the next calculator? Something that, yes, changes the way we teach and learn, but not necessarily for the worse? If we want it to be, yes.

How it is used, and more importantly, how AI is perceived by our students, can be influenced by educators. We have to first learn how AI can be used as a force for good. If we continue to let the dominant voice be that AI is the Terminator of education and critical thinking, then that will be the fate we have made for ourselves.


AI Tools for Strategy and Research – GT #32 — from goodtools.substack.com by Robin Good
Getting expert advice, how to do deep research with AI, prompt strategy, comparing different AIs side-by-side, creating mini-apps and an AI Agent that can critically analyze any social media channel

In this issue, discover AI tools for:

  • Getting Expert Advice
  • Doing Deep Research with AI
  • Improving Your AI Prompt Strategy
  • Comparing Results from Different AIs
  • Creating an AI Agent for Social Media Analysis
  • Summarizing YouTube Videos
  • Creating Mini-Apps with AI
  • Tasting an Award-Winning AI Short Film

GPT-Building, Agentic Workflow Design & Intelligent Content Curation — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 3 recent job ads reveal about the changing nature of Instructional Design

In this week’s blog post, I’ll share my take on how the instructional design role is evolving and discuss what this means for our day-to-day work and the key skills it requires.

With this in mind, I’ve been keeping a close eye on open instructional design roles and, in the last 3 months, have noticed the emergence of a new flavour of instructional designer: the so-called “Generative AI Instructional Designer.”

Let’s deep dive into three explicitly AI-focused instructional design positions that have popped up in the last quarter. Each one illuminates a different aspect of how the role is changing—and together, they paint a picture of where our profession is likely heading.

Designers who evolve into prompt engineers, agent builders, and strategic AI advisors will capture the new premium. Those who cling to traditional tool-centric roles may find themselves increasingly sidelined—or automated out of relevance.


Google to Spend $1B on AI Training in Higher Ed — from insidehighered.com by Katherine Knott

Google’s parent company announced Wednesday (8/6/25) that it’s planning to spend $1 billion over the next three years to help colleges teach and train students about artificial intelligence.

Google is joining other AI companies, including OpenAI and Anthropic, in investing in AI training in higher education. All three companies have rolled out new tools aimed at supporting “deeper learning” among students and made their AI platforms available to certain students for free.


5 Predictions for How AI Will Impact Community Colleges — from pistis4edu.substack.com by Feng Hou

Based on current technology capabilities, adoption patterns, and the mission of community colleges, here are five well-supported predictions for AI’s impact in the coming years.

  1. Universal AI Tutor Access
  2. AI as Active Teacher
  3. Personalized Learning Pathways
  4. Interactive Multimodal Learning
  5. Value-Centric Education in an AI-Abundant World

 

AI and Higher Ed: An Impending Collapse — from insidehighered.com by Robert Niebuhr; via George Siemens; I also think George’s excerpt (see below) gets right to the point.
Universities’ rush to embrace AI will lead to an untenable outcome, Robert Niebuhr writes.

Herein lies the trap. If students learn how to use AI to complete assignments and faculty use AI to design courses, assignments, and grade student work, then what is the value of higher education? How long until people dismiss the degree as an absurdly overpriced piece of paper? How long until that trickles down and influences our economic and cultural output? Simply put, can we afford a scenario where students pretend to learn and we pretend to teach them?


This next report doesn’t look too good for traditional institutions of higher education either:


No Country for Young Grads — from burningglassinstitute.org

For the first time in modern history, a bachelor’s degree is no longer a reliable path to professional employment. Recent graduates face rising unemployment and widespread underemployment as structural—not cyclical—forces reshape entry-level work. This new report identifies four interlocking drivers: an AI-powered “Expertise Upheaval” eliminating many junior tasks, a post-pandemic shift to lean staffing and risk-averse hiring, AI acting as an accelerant to these changes, and a growing graduate glut. As a result, young degree holders are uniquely seeing their prospects deteriorate – even as the rest of the economy remains robust. Read the full report to explore the data behind these trends.

The above article was via Brandon Busteed on LinkedIn:


“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight, but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

  • What made this quote stand out to you?
  • How would you explain it in your own words?
  • What assumptions or values does the author seem to hold?
  • How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
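
QuoteWeaver itself lives in PlayLab, so the sketch below is only an illustration of the pattern described here, not the bot’s actual implementation. It assumes the openai Python package and a generic chat model; all of the “patient coach” behavior comes from the system prompt.

```python
# Illustrative sketch of a Socratic quote-workshop bot (not QuoteWeaver's
# actual implementation). Assumes the openai package and an API key in env.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a patient writing coach. The student brings one
quote from their own research. Ask ONE open-ended, Socratic question at a
time, such as: What made this quote stand out to you? How would you explain
it in your own words? What assumptions or values does the author seem to
hold? Do not move on quickly: rephrase and repeat, nudging the student one
layer deeper. Never write the analysis for them."""

def quote_workshop(quote: str, rounds: int = 4) -> None:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"My quote: {quote}"},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages
        ).choices[0].message.content
        print(f"\nCoach: {reply}")
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": input("You: ")})
```

Notice that all of the pedagogy sits in the prompt: one question at a time, no analysis written for the student. That is what makes a bot like this a slowing-down tool rather than a shortcut.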


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
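
In pseudocode terms, that loop might look something like the sketch below. Every helper here (the generate, finetune, and evaluate callables) is a hypothetical placeholder standing in for the paper’s machinery, not the actual SEAL code.

```python
# Hypothetical sketch of a SEAL-style self-adapting step. The callables are
# placeholders: the point is the shape of the loop, not the implementation.
from typing import Any, Callable

def seal_step(
    model: Any,
    new_information: str,
    generate: Callable[[Any, str], str],   # model drafts its own training data
    finetune: Callable[[Any, str], Any],   # returns an updated copy of the model
    evaluate: Callable[[Any], float],      # downstream-task score
) -> Any:
    # 1. The model rewrites the new input into synthetic training examples
    #    ("self-edits"): restatements, implications, question/answer pairs.
    self_edits = generate(
        model,
        "Rewrite the following as training data (restatements, implications, "
        "question/answer pairs):\n" + new_information,
    )
    # 2. Apply a small fine-tuning update on those self-edits.
    candidate = finetune(model, self_edits)
    # 3. Keep the update only if the downstream score improves, so useful
    #    self-edit strategies get reinforced over time.
    return candidate if evaluate(candidate) > evaluate(model) else model
```

The key move is step 1: rather than training on the raw input directly, the model first turns that input into its own training data, and updates are retained only when they actually help.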


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 

Navigating Career Transitions — from er.educause.edu by Jay James, Mike Richichi, Sarah Buszka, and Wes Johnson

In this episode, we hear from professionals at different stages of their career journeys as they reflect on risk, resilience, and growth. They share advice on stepping into leadership roles, recognizing when it may be time for a change, and overcoming imposter syndrome.


 

“The AI-enhanced learning ecosystem” [Jennings] + other items re: AI in our learning ecosystems

The AI-enhanced learning ecosystem: A case study in collaborative innovation — from chieflearningofficer.com by Kevin Jennings
How artificial intelligence can serve as a tool and collaborative partner in reimagining content development and management.

Learning and development professionals face unprecedented challenges in today’s rapidly evolving business landscape. According to LinkedIn’s 2025 Workplace Learning Report, 67 percent of L&D professionals report being “maxed out” on capacity, while 66 percent have experienced budget reductions in the past year.

Despite these constraints, 87 percent agree their organizations need to develop employees faster to keep pace with business demands. These statistics paint a clear picture of the pressure L&D teams face: do more, with less, faster.

This article explores how one L&D leader’s strategic partnership with artificial intelligence transformed these persistent challenges into opportunities, creating a responsive learning ecosystem that addresses the modern demands of rapid product evolution and diverse audience needs. With 71 percent of L&D professionals now identifying AI as a high or very high priority for their learning strategy, this case study demonstrates how AI can serve not merely as a tool but as a collaborative partner in reimagining content development and management.


How we use GenAI and AR to improve students’ design skills — from timeshighereducation.com by Antonio Juarez, Lesly Pliego and Jordi Rábago, who are professors of architecture at Monterrey Institute of Technology in Mexico; Tomas Pachajoa is a professor of architecture at the El Bosque University in Colombia; & Carlos Hinrichsen and Marietta Castro are educators at San Sebastián University in Chile.
Guidance on using generative AI and augmented reality to enhance student creativity, spatial awareness and interdisciplinary collaboration

Blend traditional skills development with AI use
For subjects that require students to develop drawing and modelling skills, have students create initial design sketches or models manually to ensure they practise these skills. Then, introduce GenAI tools such as Midjourney, Leonardo AI and ChatGPT to help students explore new ideas based on their original concepts. Using AI at this stage broadens their creative horizons and introduces innovative perspectives, which are crucial in a rapidly evolving creative industry.

Provide step-by-step tutorials, including both written guides and video demonstrations, to illustrate how initial sketches can be effectively translated into AI-generated concepts. Offer example prompts to demonstrate diverse design possibilities and help students build confidence using GenAI.

Integrating generative AI and AR consistently enhanced student engagement, creativity and spatial understanding on our course. 


How Texas is Preparing Higher Education for AI — from the74million.org by Kate McGee
TX colleges are thinking about how to prepare students for a changing workforce and how to ready an already overburdened faculty for new challenges in classrooms.

“It doesn’t matter if you enter the health industry, banking, oil and gas, or national security enterprises like we have here in San Antonio,” Eighmy told The Texas Tribune. “Everybody’s asking for competency around AI.”

It’s one of the reasons the public university, which serves 34,000 students, announced earlier this year that it is creating a new college dedicated to AI, cyber security, computing and data science. The new college, which is still in the planning phase, would be one of the first of its kind in the country. UTSA wants to launch the new college by fall 2025.

But many state higher education leaders are thinking beyond that. As AI becomes a part of everyday life in new, unpredictable ways, universities across Texas and the country are also starting to consider how to ensure faculty are keeping up with the new technology and students are ready to use it when they enter the workforce.


In the Room Where It Happens: Generative AI Policy Creation in Higher Education — from er.educause.edu by Esther Brandon, Lance Eaton, Dana Gavin, and Allison Papini

To develop a robust policy for generative artificial intelligence use in higher education, institutional leaders must first create “a room” where diverse perspectives are welcome and included in the process.


Q&A: Artificial Intelligence in Education and What Lies Ahead — from usnews.com by Sarah Wood
Research indicates that AI is becoming an essential skill to learn for students to succeed in the workplace.

Q: How do you expect to see AI embraced more in the future in college and the workplace?
I do believe it’s going to become a permanent fixture for multiple reasons. I think the national security imperative associated with AI as a result of competing against other nations is going to drive a lot of energy and support for AI education. We also see shifts across every field and discipline regarding the usage of AI beyond college. We see this in a broad array of fields, including health care and the field of law. I think it’s here to stay and I think that means we’re going to see AI literacy being taught at most colleges and universities, and more faculty leveraging AI to help improve the quality of their instruction. I feel like we’re just at the beginning of a transition. In fact, I often describe our current moment as the ‘Ask Jeeves’ phase of the growth of AI. There’s a lot of change still ahead of us. AI, for better or worse, it’s here to stay.




AI-Generated Podcasts Outperform Textbooks in Landmark Education Study — from linkedin.com by David Borish

A new study from Drexel University and Google has demonstrated that AI-generated educational podcasts can significantly enhance both student engagement and learning outcomes compared to traditional textbooks. The research, involving 180 college students across the United States, represents one of the first systematic investigations into how artificial intelligence can transform educational content delivery in real-time.


What can we do about generative AI in our teaching?  — from linkedin.com by Kristina Peterson

So what can we do?

  • Interrogate the Process: We can ask ourselves if we built in enough checkpoints. Steps that can’t be faked. Things like quick writes, question floods, in-person feedback, revision logs.
  • Reframe AI: We can let students use AI as a partner. We can show them how to prompt better, revise harder, and build from it rather than submit it. Show them the difference between using a tool and being used by one.
  • Design Assignments for Curiosity, Not Compliance: Even the best of our assignments need to adapt. Mine needs more checkpoints, more reflective questions along the way, more explanation of why my students made the choices they did.

Teachers Are Not OK — from 404media.co by Jason Koebler

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.

One thing is clear: teachers are not OK.

In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear “don’t use generative AI” from a prof but then log on to the university’s Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It’s inconsistent and confusing.

I am sick to my stomach as I write this because I’ve spent 20 years developing a pedagogy that’s about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It’s demoralizing.

 

May Brought Deep Cuts at Multiple Colleges — from insidehighered.com by  Josh Moody
Colleges laid off well over 800 employees last month due to a mix of enrollment challenges and state funding issues. Ivy Tech saw the deepest cuts with more than 200 jobs axed.

With the academic year coming to an end, multiple universities announced deep cuts in May, shedding dozens of jobs amid financial pressures often linked to enrollment shortfalls.

But the cuts below, for the most part, are not directly tied to the rapid-fire actions of the Trump administration but rather stem from other financial pressures weighing on the sector. Many of the institutions listed are contending with declining enrollment and, for public universities, shrinking state support, which has necessitated fiscal changes.

From DSC:
I survived several job reductions at one of my former workplaces. But I didn’t survive the one that laid off 12 staff members after the Spring 2017 Semester. So, more and more, faculty and staff have been starting to dread the end of the academic year — as they may not survive another round of cuts. 

 

Cultivating a responsible innovation mindset among future tech leaders — from timeshighereducation.com by Andreas Alexiou
The classroom is a perfect place to discuss the messy, real-world consequences of technological discoveries, writes Andreas Alexiou. Beyond ‘How?’, students should be asking ‘Should we…?’ and ‘What if…?’ questions around ethics and responsibility

University educators play a crucial role in guiding students to think about the next big invention and its implications for privacy, the environment and social equity. To truly make a difference, we need to bring ethics and responsibility into the classroom in a way that resonates with students. Here’s how.

Debating with industry pioneers on incorporating ethical frameworks in innovation, product development or technology adoption is eye-opening because it can lead to students confronting assumptions they hadn’t questioned before. For example, students could discuss the roll-out of emotion-recognition software. Many assume it’s neutral, but guest speakers from industry can highlight how cultural and racial biases are baked into design decisions.

Leveraging alumni networks and starting with short virtual Q&A sessions instead of full lectures can work well.


Are we overlooking the power of autonomy when it comes to motivating students? — from timeshighereducation.com by Danny Oppenheimer
Educators fear giving students too much choice in their learning will see them making the wrong decisions. But structuring choice without dictating the answers could be the way forward

So, how can we get students to make good decisions while still allowing them agency to make their own choices, maintaining the associated motivational advantages that agency provides? One possibility is to use choice architecture, more commonly called “nudges”: structuring choices in ways that scaffold better decisions without dictating them.

Higher education rightly emphasises the importance of belonging and mastery, but when it ignores autonomy – the third leg of the motivational tripod – the system wobbles. When we allow students to decide for themselves how they’ll engage with their coursework, they consistently rise to the occasion. They choose to challenge themselves, perform better academically and enjoy their education more.

 
© 2025 | Daniel Christian