What 3 credit ratings agencies forecast for higher ed in 2026 — from highereddive.com by Ben Unglesbee
Fitch Ratings, S&P Global and Moody’s Ratings all predicted a tough year ahead, pointing to deteriorating financial conditions and heightened uncertainty.

Fitch Ratings labeled its higher ed financial outlook for 2026 as “deteriorating,” while Moody’s Ratings described an “increasingly difficult and shifting operating environment for colleges and universities.” Similarly, S&P Global Ratings said it expects “mounting operating pressures and uncertainty” ahead for the sector’s nonprofit institutions.

Analysts cited additional disruption and belt-tightening ahead in the new year, from predicted demographic declines to pressures on international enrollment to uncertainties about how Republicans’ big spending bill passed this summer will impact demand for college.

Below are the various takes on higher ed in 2026 by Moody’s, Fitch and S&P Global Ratings:

 

At CES 2026, Everything Is AI. What Matters Is How You Use It — from wired.com by Boone Ashworth
Integrated chatbots and built-in machine intelligence are no longer standout features in consumer tech. If companies want to win in the AI era, they’ve got to hone the user experience.

Beyond Wearables
Right now, AI is on your face and arms—smart glasses and smart watches—but this year will see it proliferate further into products like earbuds, headphones, and smart clothing.

Health tech will see an influx of AI features too, as companies aim to use AI to monitor biometric data from wearables like rings and wristbands. Health sensors will also continue to show up in newer places like toilets, bath mats, and brassieres.

The smart home will continue to be bolstered by machine intelligence, with more products that can listen, see, and understand what’s happening in your living space. Familiar candidates for AI-powered upgrades like smart vacuums and security cameras will be joined by surprising AI bedfellows like refrigerators and garage door openers.


Along these lines, see live updates from CNET here.


ChatGPT is overrated. Here’s what to use instead. — from washingtonpost.com by Geoffrey A. Fowler
When I want help from AI, ChatGPT is no longer my default first stop.

I can tell you which AI tools are worth using — and which to avoid — because I’ve been running a chatbot fight club.

I conducted dozens of bot challenges based on real things people do with AI, including writing breakup texts and work emails, decoding legal contracts and scientific research, answering tricky research questions, and editing photos and making “art.” Human experts including best-selling authors, reference librarians, a renowned scientist and even a Pulitzer Prize-winning photographer judged the results.

After a year of bot battles, one thing stands out: There is no single best AI. The smartest way to use chatbots today is to pick different tools for different jobs — and not assume one bot can do it all.


How Collaborative AI Agents Are Shaping the Future of Autonomous IT — from aijourn.com by Michael Nappi

Some enterprise platforms now support cross-agent communication and integration with ecosystems maintained by companies like Microsoft, NVIDIA, Google, and Oracle. These cross-platform data fabrics break down silos and turn isolated AI pilots into enterprise-wide services. The result is an IT backbone that not only automates but also collaborates for continuous learning, diagnostics, and system optimization in real time.


Nvidia dominated the headlines in 2025 — these were its 15 biggest events of the year — from finance.yahoo.com by Daniel Howley

It’s difficult to think of any single company that had a bigger impact on Wall Street and the AI trade in 2025 than Nvidia (NVDA).

Nvidia’s revenue soared in 2025, bringing in $187.1 billion, and its market capitalization continued to climb, briefly eclipsing the $5 trillion mark before settling back in the $4 trillion range.

There were plenty of major highs and deep lows throughout the year, but these 15 were among the biggest moments of Nvidia’s 2025.


 

 

How Your Learners *Actually* Learn with AI — from drphilippahardman.substack.com by Dr. Philippa Hardman
What 37.5 million AI chats show us about how learners use AI at the end of 2025 — and what this means for how we design & deliver learning experiences in 2026

Last week, Microsoft released a similar analysis of a whopping 37.5 million Copilot conversations. These conversations took place on the platform from January to September 2025, giving us a window into whether and how AI use in general — and AI use among learners specifically — evolved in 2025.

Microsoft’s mass behavioural data gives us a detailed, global glimpse into what learners are actually doing across devices, times of day and contexts. The picture that emerges is pretty clear and largely consistent with what OpenAI told us back in the summer:

AI isn’t functioning primarily as an “answers machine”: the majority of us use AI as a tool to personalise and differentiate generic learning experiences and – ultimately – to augment human learning.

Let’s dive in!

Learners don’t “decide” to use AI anymore. They assume it’s there, like search, like spellcheck, like calculators. The question has shifted from “should I use this?” to “how do I use this effectively?”


8 AI Agents Every HR Leader Needs To Know In 2026 — from forbes.com by Bernard Marr

So where do you start? There are many agentic tools and platforms for AI tasks on the market, and the most effective approach is to focus on practical, high-impact workflows. So here, I’ll look at some of the most compelling use cases, as well as provide an overview of the tools that can help you quickly deliver tangible wins.

Some of the strongest opportunities in HR include:

  • Workforce management, administering job satisfaction surveys, monitoring and tracking performance targets, scheduling interventions, and managing staff benefits, medical leave, and holiday entitlement.
  • Recruitment screening, automatically generating and posting job descriptions, filtering candidates, ranking applicants against defined criteria, identifying the strongest matches, and scheduling interviews.
  • Employee onboarding, issuing new hires with contracts and paperwork, guiding them to onboarding and training resources, tracking compliance and completion rates, answering routine enquiries, and escalating complex cases to human HR specialists.
  • Training and development, identifying skills gaps, providing self-service access to upskilling and reskilling opportunities, creating personalized learning pathways aligned with roles and career goals, and tracking progress toward completion.

 

 
 

AI working competency is now a graduation requirement at Purdue [Pacton] + other items re: AI in our learning ecosystems


AI Has Landed in Education: Now What? — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s what’s shaped the AI-education landscape in the last month:

  • The AI Speed Trap is [still] here: AI adoption in L&D is basically won (87%)—but it’s being used to ship faster, not learn better (84% prioritising speed), scaling “more of the same” at pace.
  • AI tutors risk a “pedagogy of passivity”: emerging evidence suggests tutoring bots can reduce cognitive friction and pull learners down the ICAP spectrum—away from interactive/constructive learning toward efficient consumption.
  • Singapore + India are building what the West lacks: they’re treating AI as national learning infrastructure—for resilience (Singapore) and access + language inclusion (India)—while Western systems remain fragmented and reactive.
  • Agentic AI is the next pivot: early signs show a shift from AI as a content engine to AI as a learning partner—with UConn using agents to remove barriers so learners can participate more fully in shared learning.
  • Moodle’s AI stance sends two big signals: the traditional learning ecosystem is fragmenting, and the concept of “user sovereignty” over AI is emerging.

Four strategies for implementing custom AIs that help students learn, not outsource — from educational-innovation.sydney.edu.au by Kria Coleman, Matthew Clemson, Laura Crocco and Samantha Clarke; via Derek Bruff

For Cogniti to be taken seriously, it needs to be woven into the structure of your unit and its delivery, both in class and on Canvas, rather than left on the side. This article shares practical strategies for implementing Cogniti in your teaching so that students:

  • understand the context and purpose of the agent,
  • know how to interact with it effectively,
  • perceive its value as a learning tool over any other available AI chatbots, and
  • engage in reflection and feedback.

In this post, we share four strategies to help introduce and integrate Cogniti in your teaching so that students understand their context, interact effectively, and see their value as customised learning companions.


Collection: Teaching with Custom AI Chatbots — from teaching.virginia.edu; via Derek Bruff
The default behaviors of popular AI chatbots don’t always align with our teaching goals. This collection explores approaches to designing AI chatbots for particular pedagogical purposes.

Example/excerpt:



 

7 Legal Tech Trends That Will Reshape Every Business In 2026 — from forbes.com by Bernard Marr

Here are the trends that will matter most.

  1. AI Agents As Legal Assistants
  2. AI As A Driver Of Business Strategy
  3. Automation In Judicial Administration
  4. Always-On Compliance Monitoring
  5. Cybersecurity As An Essential Survival Tool
  6. Predictive Litigation
  7. Compliance As Part Of The Everyday Automation Fabric

According to the Thomson Reuters Future Of Professionals report, most experts already expect AI to transform their work within five years, with many viewing it as a positive force. The challenge now is clear: legal and compliance leaders must understand the tools reshaping their field and prepare their teams for a very different way of working in 2026.


Addendum on 12/17/25:

 

Beyond Infographics: How to Use Nano Banana to *Actually* Support Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Six evidence-based use cases to try in Google’s latest image-generating AI tool

While it’s true that Nano Banana generates better infographics than other AI models, the conversation has so far massively under-sold what’s actually different and valuable about this tool for those of us who design learning experiences.

What this means for our workflow:

Instead of the traditional “commission → wait → tweak → approve → repeat” cycle, Nano Banana enables an iterative, rapid-cycle design process where you can:

  • Sketch an idea and see it refined in minutes.
  • Test multiple visual metaphors for the same concept without re-briefing a designer.
  • Build 10-image storyboards with perfect consistency by specifying the constraints once, not manually editing each frame.
  • Implement evidence-based strategies (contrasting cases, worked examples, observational learning) that are usually too labour-intensive to produce at scale.

This shift—from “image generation as decoration” to “image generation as instructional scaffolding”—is what makes Nano Banana uniquely useful for the 10 evidence-based strategies below.

 


 


 

4 Simple & Easy Ways to Use AI to Differentiate Instruction — from mindfulaiedu.substack.com (Mindful AI for Education) by Dani Kachorsky, PhD
Designing for All Learners with AI and Universal Design for Learning

So this year, I’ve been exploring new ways that AI can help support students with disabilities—students on IEPs, learning plans, or 504s—and, honestly, it’s changing the way I think about differentiation in general.

As a quick note, a lot of what I’m finding applies just as well to English language learners or really to any students. One of the big ideas behind Universal Design for Learning (UDL) is that accommodations and strategies designed for students with disabilities are often just good teaching practices. When we plan instruction that’s accessible to the widest possible range of learners, everyone benefits. For example, UDL encourages explaining things in multiple modes—written, visual, auditory, kinesthetic—because people access information differently. I hear students say they’re “visual learners,” but I think everyone is a visual learner, and an auditory learner, and a kinesthetic learner. The more ways we present information, the more likely it is to stick.

So, with that in mind, here are four ways I’ve been using AI to differentiate instruction for students with disabilities (and, really, everyone else too):


The Periodic Table of AI Tools In Education To Try Today — from ictevangelist.com by Mark Anderson

What I’ve tried to do is bring together genuinely useful AI tools that I know are already making a difference.

For colleagues wanting to explore further, I’m sharing the list exactly as it appears in the table, including website links, grouped by category below. Please do check it out, as along with links to all of the resources, I’ve also written a brief summary explaining what each of the different tools do and how they can help.





Seven Hard-Won Lessons from Building AI Learning Tools — from linkedin.com by Louise Worgan

Last week, I wrapped up Dr Philippa Hardman’s intensive bootcamp on AI in learning design. Four conversations, countless iterations, and more than a few humbling moments later – here’s what I am left thinking about.


Finally Catching Up to the New Models — from michellekassorla.substack.com by Michelle Kassorla
There are some amazing things happening out there!

An aside: Google is working on a new vision for textbooks that can be easily differentiated, building on the success of NotebookLM. You can get on the waiting list for that tool by going to LearnYourWay.withgoogle.com.

Nano Banana Pro
Sticking with the Google tools for now, Nano Banana Pro (which you can use for free in Google’s AI Studio) is doing something that everyone has been waiting a long time for: it adds correct text to images.


Introducing AI assistants with memory — from perplexity.ai

The simple act of remembering is the crux of how we navigate the world: it shapes our experiences, informs our decisions, and helps us anticipate what comes next. For AI agents like Comet Assistant, that continuity leads to a more powerful, personalized experience.

Today we are announcing new personalization features to remember your preferences, interests, and conversations. Perplexity now synthesizes them automatically like memory, for valuable context on relevant tasks. Answers are smarter, faster, and more personalized, no matter how you work.

From DSC :
This should be important as we look at learning-related applications for AI.


For the last three days, my Substack has been in the top “Rising in Education” list. I realize this is based on a hugely flawed metric, but it still feels good.

– Michael G Wagner



I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse. — from nytimes.com by Carlo Rotella [this should be a gifted article]
My students’ easy access to chatbots forced me to make humanities instruction even more human.


 

 

AI’s Role in Online Learning > Take It or Leave It with Michelle Beavers, Leo Lo, and Sara McClellan — from intentionalteaching.buzzsprout.com by Derek Bruff

You’ll hear me briefly describe five recent op-eds on teaching and learning in higher ed. For each op-ed, I’ll ask each of our panelists if they “take it,” that is, generally agree with the main thesis of the essay, or “leave it.” This is an artificial binary that I’ve found to generate rich discussion of the issues at hand.




 

New Study: Business As Usual Could Doom Dozens Of New England Colleges — from forbes.com by Michael B. Horn

The cause of the challenges isn’t one single factor, but a series of pressures from demographic changes, shifts in the public’s perception of higher education’s value, rising operating costs, emerging alternatives to traditional colleges, and, of late, changes in federal policies and programs. The net effect is that many institutions are much closer to the brink of closure than ever before.

What’s daunting is that flat enrollment is almost certainly an overly optimistic scenario.

If enrollment at the 44 schools falls by 15 percent over the next four years and business proceeds as usual, then 28 of the schools will have less than 10 years of cash and unrestricted quasi-endowments before they would become insolvent—assuming no major cuts, additional philanthropy, new debt, or asset sales. Fourteen would have less than five years before insolvency.

Also see:

From DSC:
The cultures at many institutions of traditional higher education will make some of the necessary changes and strategies (that Michael and Steven discuss) very hard to pull off. Merging with another institution or institutions, for example, could be very challenging to implement, even as alternatives continue to emerge.

 


Three Years from GPT-3 to Gemini 3 — from oneusefulthing.org by Ethan Mollick
From chatbots to agents

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.




Results May Vary — from aiedusimplified.substack.com by Lance Eaton, PhD
On Custom Instructions with GenAI Tools….

I’m sharing today about custom instructions and my use of them across several AI tools (paid versions of ChatGPT, Gemini, and Claude). I want to highlight what I’m doing, how it’s going, and solicit from readers to share in the comments some of their custom instructions that they find helpful.

I’ve been in a few conversations lately that remind me that not everyone knows about them, even some of the seasoned folks around GenAI and how you might set them up to better support your work. And, of course, they are, like all things GenAI, highly imperfect!

I’ll include and discuss each one below, but if you want to keep abreast of my custom instructions, I’ll be placing them here as I adjust and update them so folks can see the changes over time.

 

Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, who wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, vulnerability scanning at a more elevated level. The companies that don’t adapt will be sitting ducks to get overwhelmed by similar tricks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…

 

Enrollment Growth Continues, Bolstered by Short-Term Credentials — from insidehighered.com by Johanna Alonso
Enrollment is up across the board this fall, except for graduate student enrollment, which remained stagnant. The biggest increase was among those pursuing short-term credentials, followed by those earning associate degrees.

College enrollment continued to grow this fall, increasing by 2 percent compared to fall 2024, according to preliminary fall data released by the National Student Clearinghouse Research Center.

The biggest gains came from students studying for short-term credentials, whose ranks increased 6.6 percent, while the number of students enrolled in associate and bachelor’s degree programs rose 3.1 percent and 1.2 percent, respectively. Enrollment also grew faster at community colleges, which experienced a 4 percent increase, than at public (1.9 percent) and private (0.9 percent) four-year institutions.

Total graduate enrollment was stagnant, however, and the number of master’s students actually decreased by 0.6 percent.


Speaking of higher education, also see:

OPINION: Too many college graduates are stranded before their careers can even begin. We can’t let that happen — from hechingerreport.org by Bruno V. Manno

This fall, some 19 million undergraduates returned to U.S. campuses with a long-held expectation: Graduate, land an entry-level job, climb the career ladder. That formula is breaking down.

Once reliable gateway jobs for college graduates in industries like finance, consulting and journalism have tightened requirements. Many entry-level job postings that previously provided initial working experience for college graduates now require two to three years of prior experience, while AI, a recent analysis concluded, “snaps up good entry-level tasks,” especially routine work like drafting memos, preparing spreadsheets and summarizing research.

Without these proving grounds, new hires lose chances to build skills by doing. And the demand for work experience that potential workers don’t have creates an experience gap for new job seekers. Once stepping-stones, entry-level positions increasingly resemble mid-career jobs.


 


Gen AI Is Going Mainstream: Here’s What’s Coming Next — from joshbersin.com by Josh Bersin

I just completed nearly 60,000 miles of travel across Europe, Asia, and the Middle East, meeting with hundreds of companies to discuss their AI strategies. While every company’s maturity is different, one thing is clear: AI as a business tool has arrived. It’s real, and the use cases are growing.

A new survey by Wharton shows that 46% of business leaders use Gen AI daily and 80% use it weekly. And among these users, 72% are measuring ROI and 74% report a positive return. HR, by the way, is the #3 department in use cases, only slightly behind IT and Finance.

What are companies getting out of all this? Productivity. The #1 use case, by far, is what we call “stage 1” usage – individual productivity. 



From DSC:
Josh writes: “Many of our large clients are now implementing AI-native learning systems and seeing 30-40% reduction in staff with vast improvements in workforce enablement.”

While I get the appeal (and ROI) from management’s and shareholders’ perspective, this represents a growing concern for employment and people’s ability to earn a living. 

And while I highly respect Josh and his work through the years, I disagree that we’re over the problems with AI and how people are using it: 

Two years ago the NYT was trying to frighten us with stories of AI acting as a romance partner. Well those stories are over, and thanks to a $Trillion (literally) of capital investment in infrastructure, engineering, and power plants, this stuff is reasonably safe.

Those stories are just beginning…they’re not close to being over. 


“… imagine a world where there’s no separation between learning and assessment…” — from aiedusimplified.substack.com by Lance Eaton, Ph.D. and Tawnya Means
An interview with Tawnya Means

So let’s imagine a world where there’s no separation between learning and assessment: it’s ongoing. There’s always assessment, always learning, and they’re tied together. Then we can ask: what is the role of the human in that world? What is it that AI can’t do?

Imagine something like that in higher ed. There could be tutoring or skill-based work happening outside of class, and then relationship-based work happening inside of class, whether online, in person, or some hybrid mix.

The aspects of learning that don’t require relational context could be handled by AI, while the human parts remain intact. For example, I teach strategy and strategic management. I teach people how to talk with one another about the operation and function of a business. I can help students learn to be open to new ideas, recognize when someone pushes back out of fear of losing power, or draw from my own experience in leading a business and making future-oriented decisions.

But the technical parts, such as frameworks like SWOT analysis and the mechanics of comparing alternative viewpoints in a boardroom, could be managed through simulations or reports that receive immediate feedback from AI. The relational aspects, the human mentoring, would still happen with me as their instructor.

Part 2 of their interview is here:


 

A New AI Career Ladder — from ssir.org (Stanford Social Innovation Review) by Bruno V. Manno; via Matt Tower
The changing nature of jobs means workers need new education and training infrastructure to match.

AI has cannibalized the routine, low-risk work tasks that used to teach newcomers how to operate in complex organizations. Without those task rungs, the climb up the opportunity ladder into better employment options becomes steeper—and for many, impossible. This is not a temporary glitch. AI is reorganizing work, reshaping what knowledge and skills matter, and redefining how people are expected to acquire them.

The consequences ripple from individual career starts to the broader American promise of economic and social mobility, which includes both financial wealth and social wealth that comes from the networks and relationships we build. Yet the same technology that complicates the first job can help us reinvent how experience is earned, validated, and scaled. If we use AI to widen—not narrow—access to education, training, and proof of knowledge and skill, we can build a stronger career ladder to the middle class and beyond. A key part of doing this is a redesign of education, training, and hiring infrastructure.

What’s needed is a redesigned model that treats work as a primary venue for learning, validates capability with evidence, and helps people keep climbing after their first job. Here are ten design principles for a reinvented education and training infrastructure for the AI era.

  1. Create hybrid institutions that erase boundaries. …
  2. Make work-based learning the default, not the exception. …
  3. Create skill adjacencies to speed transitions. …
  4. Place performance-based hiring at the core. 
  5. Ongoing supports and post-placement mobility. 
  6. Portable, machine-readable credentials with proof attached. 
  7. …plus several more…
 
© 2025 | Daniel Christian