From DSC:
Forgive us world for our current President, who stoops to a new low almost every day. Below is yet another example of that. He’s an embarrassment to me and to many others in our nation. He twists truths into lies, and lies into “truths” (such as he does on “Truth” Social). Lies are his native tongue. (To those who know scripture, this is an enlightening and descriptive statement.)

And speaking of matters of faith, I think God is watching us closely, as numerous moral/ethical tests are presented to our society and to our culture. How will we and our leadership respond to these tests?

A couple of examples:

  • Are we joining the mockery of justice in our country, or are we fighting against these developments?
  • Do we support it when Trump makes a fake video of a former President, or do we find it reprehensible? Especially when we realize it’s yet another attempt at deflecting our attention away from where Trump does NOT want our attention to be: i.e., away from Trump’s place within the Epstein files.

Trump Posts Fake Video Showing Obama Arrest — from nytimes.com by Matthew Mpoke Bigg; this is a gifted article
President Trump shared what appeared to be an A.I.-generated video of former President Barack Obama being detained in the Oval Office.

President Trump reposted a fake video showing former President Barack Obama being arrested in the Oval Office, as Trump administration officials continue to accuse Mr. Obama of trying to harm Mr. Trump’s campaign during the 2016 election, and the president seeks to redirect conversation from the Epstein files.

The short video, which appears to have been generated by artificial intelligence and posted on TikTok before being reposted on Mr. Trump’s Truth Social account on Sunday…

The fake video purports to show F.B.I. agents bursting into the meeting, pushing Mr. Obama into a kneeling position and putting him in handcuffs as Mr. Trump looks on smiling, while the song “Y.M.C.A.” by the Village People plays. Later, the fake video shows Mr. Obama in an orange jumpsuit pacing in a cell. 

Also relevant/see:

Trump Talks About Anything but Epstein on His Social Media Account — from nytimes.com by Luke Broadwater; this is a gifted article
On Truth Social, the president railed against Democrats and shared a wacky video.

Dogged for weeks over his administration’s refusal to release the Epstein files, President Trump spent the weekend posting on social media about, well, anything else.

On Sunday, the president railed against Senator Adam Schiff, Democrat of California, long a prime target. He attacked Samantha Power, the former administrator of U.S.A.I.D. He posted a fake video of former President Barack Obama being arrested and a fake photo of Mr. Obama and members of his administration in prison garb. He threatened to derail a deal for a new football stadium for the Washington Commanders if the team did not take back its old name, the Redskins.

 

What borrowers should know about student loan changes in the One Big Beautiful Bill — from npr.org by Cory Turner

If you’re a federal student loan borrower or about to become one, your head may be spinning.

On July 4, when President Trump signed the One Big Beautiful Bill Act into law, he also greenlit a history-making overhaul of the federal student loan system — one that will affect the lives of many, if not most, of the United States’ nearly 43 million student loan borrowers.

And boy is it a lot to unpack, with new, tighter borrowing limits and dramatically reduced repayment options, to name just a few of the sweeping changes.

Under the One Big Beautiful Bill Act, borrowers in SAVE will have to change plans by July 1, 2028, when SAVE will be officially shut down. If they wait, though they currently can’t be required to make payments, they will see their loans explode with interest.

 

Blood in the Instructional Design Machine? — from drphilippahardman.substack.com by Dr. Philippa Hardman
The reality of AI, job degradation & the likely future of Instructional Design

This raises a very important, perhaps even existential question for our profession: do these tools free a designer from the mind-numbing drudgery of content conversion (the “augmented human”)? Or do they automate the core expertise of the learning professional’s role, e.g. selecting instructional strategies, structuring narratives and designing a learning flow, in the process reducing the ID’s role to simply finding the source file and pushing a button (the “inverted centaur”)?

The stated aspiration of these tool builders seems to be a future where AI means that the instructional designer’s value shifts decisively from production to strategy. Their stated goal is to handle the heavy lifting of content generation, allowing the human ID to provide the indispensable context, creativity, and pedagogical judgment that AI cannot replicate.

However, the risk of these tools lies in how we use them, and the “inverted centaur” model remains deeply potent and possible. In an organisation that prioritises cost above all, these same tools can be used to justify reducing the ID role to the functional drudgery of inputting a PDF and supervising the machine.

The key to this paradox lies in a crucial data point: spending on outside products and services has jumped a dramatic 23% to $12.4 billion. 

This signals a fundamental shift: companies are reallocating funds from large internal teams toward specialised consultants and advanced learning technologies like AI. L&D is not being de-funded; it is being re-engineered.

 

New Lightcast Report: AI Skills Command 28% Salary Premium as Demand Shifts Beyond Tech Industry — from lightcast.io; via Paul Fain
First-of-its-kind analysis reveals specific AI skills employers need most, enabling targeted workforce training strategies across all career areas

July 23, 2025 – Lightcast, the global leader in labor market intelligence, today released “Beyond the Buzz: Developing the AI Skills Employers Actually Need,” a comprehensive analysis revealing that artificial intelligence has fundamentally transformed hiring patterns across the world of work. The report, based on analysis of over 1.3 billion job postings, shows that job postings including AI skills offer 28% higher salaries—nearly $18,000 more per year—than those without such capabilities.

More importantly, the research analyzes specific skills based on their growth across job postings, their importance in the workforce, and their exposure to AI. This shows exactly which AI skills create value in which contexts, solving the critical challenge facing educators and workforce development leaders: moving beyond vague “AI literacy” to precise, targeted training that delivers measurable results.


Also via Paul Fain:


Despite growing awareness, however, participation in skill development is limited. In 2024, less than half of U.S. employees (45%) participated in training or education to build new skills for their current job. About one in three employees (32%) who are hoping to move into a new role within the next year strongly agree that they have the skills needed to be exceptional in that role.

 

I Teach Creative Writing. This Is What A.I. Is Doing to Students. — from nytimes.com by Meghan O’Rourke; this is a gifted article.

We need a coherent approach grounded in understanding how the technology works, where it is going and what it will be used for.

From DSC:
I almost feel like Meghan should write the words “this week” or “this month” after the above sentence. Whew! Things are moving fast.

For example, we’re now starting to see more agents hitting the scene — software that can DO things. But that can open up a can of worms too. 

Students know the ground has shifted — and that the world outside the university expects them to shift with it. A.I. will be part of their lives regardless of whether we approve. Few issues expose the campus cultural gap as starkly as this one.

From DSC:
Universities and colleges have little choice but to integrate AI into their programs and offerings. There’s enough pressure on institutions of traditional higher education to prove their worth/value. Students and their families want solid ROIs. Students know that they are going to need AI-related skills (see the link immediately below for example), or they are going to be left out of the competitive job search process.

A relevant resource here:

 

From DSC:
In looking at MyNextChapter.ai, THIS TYPE OF FUNCTIONALITY — an AI-based chatbot talking with you re: good fits for a future job — is the kind of thing that could work well in this type of vision/learning platform. The AI asks you relevant career-oriented questions, comes up with some potential job fits, and then gives you resources about how to gain those skills, who to talk with, organizations to join, next steps to get your foot in the door somewhere, etc.

The next-gen learning platform would provide links to online courses, blogs, people’s names on LinkedIn, and courses from L&D organizations, institutions of higher education, or other entities/places to obtain those skills (similar to the “Action Plan” below from MyNextChapter.ai).
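A very rough sketch of that kind of flow, purely for illustration — the questions, “job fit” logic, and resources below are made-up placeholders, not anything taken from MyNextChapter.ai:

```python
# Illustrative sketch of a career-exploration chatbot flow: ask
# career-oriented questions, propose potential job fits, then point to
# resources for building the needed skills. All questions, matching
# logic, and resources here are hypothetical placeholders.
QUESTIONS = [
    "What subjects or activities do you lose track of time doing?",
    "Do you prefer working with people, data, or things?",
    "What kind of work environment energizes you?",
]

# Hypothetical mapping from a simple keyword to possible job fits and next steps.
JOB_FITS = {
    "data": {
        "roles": ["Data analyst", "Business intelligence developer"],
        "resources": ["Intro statistics course", "SQL practice site",
                      "Local analytics meetup", "LinkedIn: alumni in analytics roles"],
    },
    "people": {
        "roles": ["Career coach", "HR generalist", "Customer success manager"],
        "resources": ["Coaching certificate program", "HR study group",
                      "Informational interviews with hiring managers"],
    },
}

def career_chat() -> None:
    answers = [input(q + "\n> ").lower() for q in QUESTIONS]
    # Naive "fit" logic for the sketch: look for a keyword in the answers.
    key = "data" if any("data" in a for a in answers) else "people"
    fit = JOB_FITS[key]
    print("\nPotential job fits:", ", ".join(fit["roles"]))
    print("Suggested next steps (your 'action plan'):")
    for step in fit["resources"]:
        print(" -", step)

if __name__ == "__main__":
    career_chat()
```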

 

Trump officials accused of defying 1 in 3 judges who ruled against him — from washingtonpost.com by Justin Jouvenal
A comprehensive analysis of hundreds of lawsuits against Trump policies shows dozens of examples of defiance, delay and dishonesty, which experts say pose an unprecedented threat to the U.S. legal system.

President Donald Trump and his appointees have been accused of flouting courts in a third of the more than 160 lawsuits against the administration in which a judge has issued a substantive ruling, a Washington Post analysis has found, suggesting widespread noncompliance with America’s legal system.

Plaintiffs say Justice Department lawyers and the agencies they represent are snubbing rulings, providing false information, failing to turn over evidence, quietly working around court orders and inventing pretexts to carry out actions that have been blocked.


The Post examined 337 lawsuits filed against the administration since Trump returned to the White House and began a rapid-fire effort to reshape government programs and policy. As of mid-July, courts had ruled against the administration in 165 of the lawsuits. The Post found that the administration is accused of defying or frustrating court oversight in 57 of those cases — almost 35 percent.


DC: How is making a mockery of the justice system making America great again? I don’t think any one of us would benefit from living in a land with no laws. It would be absolute chaos.


 

PODCAST: Did AI “break” school? Or will it “fix” it? …and if so, what can we do about it? — from theneurondaily.com by Grant Harvey, Corey Noles, & Matthew Robinson

In Episode 5 of The Neuron Podcast, Corey Noles and Grant Harvey tackle the education crisis head-on. We explore the viral UCLA “CheatGPT” controversy, MIT’s concerning brain study, and innovative solutions like Alpha School’s 2-hour learning model. Plus, we break down OpenAI’s new $10M teacher training initiative and share practical tips for using AI to enhance learning rather than shortcut it. Whether you’re a student, teacher, or parent, you’ll leave with actionable insights on the future of education.

 
 

Tech Layoffs 2025: Why AI is Behind the Rising Job Cuts — from finalroundai.com by Kaustubh Saini, Jaya Muvania, and Kaivan Dave; via George Siemens
507 tech workers lose their jobs to AI every day in 2025. Complete breakdown of 94,000 job losses across Microsoft, Tesla, IBM, and Meta – plus which positions are next.


I’ve Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells — from time.com by Paul Tudor Jones

Amid all the talk about the state of our economy, little noticed and even less discussed was June’s employment data. It showed that the unemployment rate for recent college graduates stood at 5.8%, topping the national level for the first and only time in its 45-year historical record.

It’s an alarming number that needs to be considered in the context of a recent warning from Dario Amodei, CEO of AI juggernaut Anthropic, who predicted artificial intelligence could wipe out half of all entry-level, white-collar jobs and spike unemployment to 10-20% in the next one to five years.

The upshot: our college graduates’ woes could be just the tip of the spear.



I almost made a terrible mistake last week. — from justinwelsh.me by Justin Welsh; via Roberto Ferraro

But as I thought about it, it just didn’t feel right. Replying to people sharing real gratitude with a copy-paste message seemed like a terribly inauthentic thing to do. I realized that when you optimize the most human parts of your business, you risk removing the very reason people connect with you in the first place.


 

Is graduate employability a core university priority? — from timeshighereducation.com by Katherine Emms and Andrea Laczik
Universities, once judged primarily on the quality of their academic outcomes, are now also expected to prepare students for the workplace. Here’s how higher education is adapting to changing pressures

A clear, deliberate shift in priorities is under way. Embedding employability is central to an Edge Foundation report, carried out in collaboration with UCL’s Institute of Education, looking at how English universities are responding. In placing employability at the centre of their strategies – not just for professional courses but across all disciplines – the two universities analysed in this research show how they aim to prepare students for the labour market overall. Although the employability strategy is initiated by the universities’ senior leaders, the research showed that realising it must be understood and executed by staff at all levels across departments. The complexity of offering insights into industry pathways and building relevant skills involves curriculum development, student-centred teaching, careers support, partnership work and employer engagement.


Every student can benefit from an entrepreneurial mindset — from timeshighereducation.com by Nicolas Klotz
To develop the next generation of entrepreneurs, universities need to nurture the right mindset in students of all disciplines. Follow these tips to embed entrepreneurial education

This shift demands a radical rethink of how we approach entrepreneurial mindset in higher education. Not as a specialism for a niche group of business students but as a core competency that every student, in every discipline, can benefit from.

At my university, we’ve spent the past several years re-engineering how we embed entrepreneurship into daily student life and learning.

What we’ve learned could help other institutions, especially smaller or resource-constrained ones, adapt to this new landscape.

The first step is recognising that entrepreneurship is not only about launching start-ups for profit. It’s about nurturing a mindset that values initiative, problem-solving, resilience and creative risk-taking. Employers increasingly want these traits, whether the student is applying for a traditional job or proposing their own venture.


Build foundations for university-industry partnerships in 90 days — from timeshighereducation.com by Raul Villamarin Rodriguez and Hemachandran K
Graduate employability could be transformed through systematic integration of industry partnerships. This practical guide offers a framework for change in Indian universities

The most effective transformation strategy for Indian universities lies in systematic industry integration that moves beyond superficial partnerships and towards deep curriculum collaboration. Rather than hoping market alignment will occur naturally, institutions must reverse-engineer academic programmes from verified industry needs.

Our six-month implementation at Woxsen University demonstrates this framework’s practical effectiveness, achieving more than 130 industry partnerships, 100 per cent faculty participation in transformation training, and 75 per cent of students receiving industry-validated credentials with significantly improved employment outcomes.


 

How Do You Teach Computer Science in the A.I. Era? — from nytimes.com by Steve Lohr; with thanks to Ryan Craig for this resource
Universities across the country are scrambling to understand the implications of generative A.I.’s transformation of technology.

The future of computer science education, Dr. Maher said, is likely to focus less on coding and more on computational thinking and A.I. literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions.

A.I. literacy is an understanding — at varying depths for students at different levels — of how A.I. works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.

At Carnegie Mellon, as faculty members prepare for their gathering, Dr. Cortina said his own view was that the coursework should include instruction in the traditional basics of computing and A.I. principles, followed by plenty of hands-on experience designing software using the new tools.

“We think that’s where it’s going,” he said. “But do we need a more profound change in the curriculum?”

 

The EU’s Legal Tech Tipping Point – AI Regulation, Data Sovereignty, and eDiscovery in 2025 — from jdsupra.com by Melina Efstathiou

The Good, the Braver and the Curious.
As we navigate through 2025, the European legal landscape is undergoing a significant transformation, particularly in the realms of artificial intelligence (AI) regulation and data sovereignty. These changes are reshaping how legal departments and more specifically eDiscovery professionals operate, compelling them to adapt to new compliance requirements and technological advancements.

Following on from our blog post on Navigating eDisclosure in the UK and Practice Direction 57AD, we are now moving on to explore AI regulation in the greater European spectrum, taking a contrasting glance towards the UK and the US as well, at the close of this post.


LegalTech’s Lingering Hurdles: How AI is Finally Unlocking Efficiency in the Legal Sector — from techbullion.co by Abdul Basit

However, as we stand in mid-2025, a new paradigm is emerging. Artificial Intelligence, once a buzzword, is now demonstrably addressing many of the core issues that have historically plagued LegalTech adoption and effectiveness, ushering in an era of unprecedented efficiency. Legal tech specialists like LegalEase are leading the way with some of these newer solutions, such as AI-powered NDA drafting.

Here’s how AI is delivering profound efficiencies:

    • Automated Document Review and Analysis:
    • Intelligent Contract Lifecycle Management (CLM):
    • Enhanced Legal Research:
    • Predictive Analytics for Litigation and Risk:
    • Streamlined Practice Management and Workflow Automation:
    • Personalized Legal Education and Training:
    • Improved Client Experience:

The AI Strategy Potluck: Law Firms Showing Up Empty-Handed, Hungry, And Weirdly Proud Of It — from abovethelaw.com by Joe Patrice
There’s a $32 billion buffet of time and money on the table, and the legal industry brought napkins.

The Thomson Reuters “Future of Professionals” report just dropped, and one stat that stands out among its insights is that organizations with a visible AI strategy are not only twice as likely to report growth, they’re also 3.5 times more likely to see actual, tangible benefits from AI adoption.

AI Adoption Strategies


Speaking of legal-related items as well as tech, also see:

  • Landmark AI ruling is a blow to authors and artists — from popular.info by Judd Legum
    This week, a federal judge, William Alsup, rejected Anthropic’s effort to dismiss the case and found that stealing books from the internet is likely a copyright violation. A trial will be scheduled in the future. If Anthropic loses, each violation could come with a fine of $750 or more, potentially exposing the company to billions in damages. Other AI companies that use stolen work to train their models — and most do — could also face significant liability.
 
 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

  • What made this quote stand out to you?
  • How would you explain it in your own words?
  • What assumptions or values does the author seem to hold?
  • How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
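
For readers who like to see the shape of such an activity, here is a minimal sketch of a quote-workshop loop in that spirit. It is not the actual QuoteWeaver PlayLab bot; the questions come from the description above, but the “don’t move on too quickly” check is a simplified assumption.

```python
# Minimal, illustrative sketch of a Socratic quote-workshop loop.
# NOT the actual QuoteWeaver bot: the questions are taken from the
# article above, but the follow-up logic is a simplified assumption.
SOCRATIC_QUESTIONS = [
    "What made this quote stand out to you?",
    "How would you explain it in your own words?",
    "What assumptions or values does the author seem to hold?",
    "How does this quote deepen your understanding of your topic?",
]

def quote_workshop(quote: str) -> list[tuple[str, str]]:
    """Walk a student through their chosen quote one question at a time."""
    print(f'Let\'s slow down and look closely at your quote:\n\n  "{quote}"\n')
    transcript = []
    for question in SOCRATIC_QUESTIONS:
        answer = input(question + "\n> ").strip()
        # Don't move on too quickly: if the answer is thin, rephrase and ask again.
        while len(answer.split()) < 8:
            answer = input("Say a bit more: " + question.lower() + "\n> ").strip()
        transcript.append((question, answer))
    return transcript

if __name__ == "__main__":
    quote_workshop("The unexamined life is not worth living.")
```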


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
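
As a loose illustration of that idea (not MIT’s actual SEAL code), the sketch below has a small model draft its own synthetic “study notes” from a new piece of information and then take one gradient step on them. The model choice, prompt, and single-step update procedure are all assumptions made for the sketch.

```python
# Loose sketch of a self-adapting loop: the model generates its own
# synthetic training text from new information, then its weights are
# updated on that text. This is NOT the SEAL implementation; the model,
# prompt, and one-step update here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def self_adapt(new_information: str) -> None:
    """Have the model draft its own notes, then train on them."""
    # 1. The model generates synthetic training data about the new input.
    prompt = f"Rewrite the following as concise study notes:\n{new_information}\nNotes:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=80, do_sample=True)
    synthetic_text = tokenizer.decode(generated[0], skip_special_tokens=True)

    # 2. The model's parameters are updated on its own synthetic data.
    batch = tokenizer(synthetic_text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

self_adapt("A user mentions they prefer summaries in bullet points.")
```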


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 
© 2025 | Daniel Christian