The résumé is dying, and AI is holding the smoking gun — from arstechnica.com by Benj Edwards
As thousands of applications flood job posts, ‘hiring slop’ is kicking off an AI arms race.

Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times.

Due to AI, the traditional hiring process has become overwhelmed with automated noise. It’s the résumé equivalent of AI slop—call it “hiring slop,” perhaps—that currently haunts social media and the web with sensational pictures and misleading information. The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.

The Times illustrates the scale of the problem with the story of an HR consultant named Katie Tanner, who was so inundated with over 1,200 applications for a single remote role that she had to remove the post entirely and was still sorting through the applications three months later.


Job seekers are leaning into AI — and other happenings in the world of work — from LinkedIn News

Job growth is slowing — and for many professionals, that means longer job hunts and more competition. As a result, more job seekers are turning to AI to streamline their search and stand out.

From optimizing resumes to preparing for interviews, AI tools are becoming a key part of today’s job hunt. Recruiters say it’s getting harder to sift through application materials, identify what is AI-generated, and decipher which applicants are actually qualified—but they also say they prefer candidates with AI skills.

The result? Job seekers are growing their familiarity with AI faster than their non-job-seeking counterparts, and it’s shifting how they view the workplace. According to LinkedIn’s latest Workforce Confidence survey, over half of active job seekers (52%) believe AI will eventually take on some of the mundane, manual tasks that they’re currently focused on, compared to 46% of others not actively job seeking.


OpenAI warns models with higher bioweapons risk are imminent — from axios.com by Ina Fried

OpenAI cautioned Wednesday that upcoming models will head into a higher level of risk when it comes to the creation of biological weapons — especially by those who don’t really understand what they’re doing.

Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated agents.

Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company’s preparedness framework.

    • As a result, the company said in a blog post, it is stepping up the testing of such models, as well as including fresh precautions designed to keep them from aiding in the creation of biological weapons.
    • OpenAI didn’t put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios “We are expecting some of the successors of our o3 (reasoning model) to hit that level.”


Getting (and Keeping) Early Learners’ Attention — from edutopia.org by Heather Sanderell
These ideas for lesson hooks—like using songs, video clips, and picture walks—can motivate young students to focus on learning.

How do you capture and maintain the attention of a room full of wide-eyed students with varying interests and abilities? Do you use visuals, games, or interactive activities? Art, sports, music, or sounds? The answer is yes, to all!

When trying to keep the attention of your learners, it’s important to stimulate their senses and pique their diverse interests. Educational theorist and researcher Robert Gagné devised his nine events of instructional design, which include grabbing learners’ attention with a lesson hook. This is done first to set the tone for the remainder of the lesson.


3 Ways to Help Students Overcome the Forgetting Curve — from edutopia.org  by Cathleen Beachboard
Our brains are wired to forget things unless we take active steps to remember. Here’s how you can help students hold on to what they learn.

You teach a lesson that lights up the room. Students are nodding and hands are flying up, and afterward you walk out thinking, “They got it. They really got it.”

And then, the next week, you ask a simple review question—and the room falls silent.

If that situation has ever made you question your ability to teach, take heart: You’re not failing, you’re simply facing the forgetting curve. Understanding why students forget—and how we can help them remember—can transform not just our lessons but our students’ futures.

The good news? You don’t have to overhaul your curriculum to beat the forgetting curve. You just need three small, powerful shifts in how you teach.
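The forgetting curve the article refers to is usually modeled as exponential decay, R = e^(−t/S), where the "stability" S of a memory grows with each successful retrieval. A minimal Python sketch (the numbers are illustrative assumptions, not data from the article) shows why spaced retrieval practice pays off:

```python
import math

def retention(days_elapsed: float, stability: float) -> float:
    """Ebbinghaus-style forgetting curve: recall probability decays
    exponentially with time, at a rate set by memory 'stability'."""
    return math.exp(-days_elapsed / stability)

# A lesson heard once (low stability) vs. one revisited through
# spaced retrieval practice (higher stability). Values are illustrative.
single_exposure = retention(days_elapsed=7, stability=2)
spaced_practice = retention(days_elapsed=7, stability=10)

print(f"After one week, single exposure: {single_exposure:.0%}")   # ~3%
print(f"After one week, spaced practice: {spaced_practice:.0%}")   # ~50%
```

The point of the model is simply that review timing, not reteaching, is the lever: each retrieval raises S, flattening the curve.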

From DSC:
Along these same lines, also see:



7 Nature Experiments to Spark Student Curiosity — from edutopia.org by Donna Phillips
Encourage your students to ask questions about and explore the world around them with these hands-on lessons.

Children are natural scientists—they ask big questions, notice tiny details, and learn best through hands-on exploration. That’s why nature experiments are a classroom staple for me. From growing seeds to using the sun’s energy, students don’t just learn science, they experience it. Here are my favorite go-to nature experiments that spark curiosity.


 

 

Live Your Creed, Langston Hughes — via a recent e-newsletter from Getting Smart

I’d rather see a sermon than to hear one any day.
I’d rather one walk with me than just to show the way.
The eye is a better pupil and more willing than the ear.
Advice may be misleading but examples are always clear.
And the very best of teachers are the ones who live their creed,
For to see good put into action is what everybody needs.
I can soon learn to do it if you let me see it done.
I can watch your hand in motion but your tongue too fast may run
And the lectures you deliver may be very fine and true
But I’d rather get my lesson by observing what you do.
For I may misunderstand you and the fine advice you give
But there’s no misunderstanding how you act and how you live.

 

Agentic AI use cases in the legal industry — from legal.thomsonreuters.com
What legal professionals need to know now with the rise of agentic AI

While GenAI can create documents or answer questions, agentic AI takes intelligence a step further by planning how to get multi-step work done, including tasks such as consuming information, applying logic, crafting arguments, and then completing them. This leaves legal teams more time for nuanced decision-making, creative strategy, and relationship-building with clients—work that machines can’t do.
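The plan-then-act loop described above can be sketched in a few lines. This is a toy illustration of the general agentic pattern, not any vendor's actual API; the step names and handler logic are hypothetical:

```python
# Toy sketch of the plan-then-execute pattern behind "agentic" AI.
# In a real system, plan() and execute() would call an LLM and tools;
# here they are stubs so the control flow is visible.

def plan(goal: str) -> list[str]:
    """Break a goal into ordered steps (a real agent would ask an LLM)."""
    return ["consume_information", "apply_logic", "craft_argument", "complete_task"]

def execute(step: str, context: dict) -> dict:
    """Run one step and fold its result back into shared context."""
    context[step] = f"done: {step}"
    return context

def run_agent(goal: str) -> dict:
    context = {"goal": goal}
    for step in plan(goal):
        context = execute(step, context)
    return context

result = run_agent("Summarize precedent for a motion to dismiss")
print(result)
```

The distinction from plain GenAI is the loop: each step's output becomes context for the next, rather than a single prompt-and-response.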


The AI Legal Landscape in 2025: Beyond the Hype — from akerman.com by Melissa C. Koch

What we’re witnessing is a profession in transition where specific tasks are being augmented or automated while new skills and roles emerge.

The data tells an interesting story: approximately 79% of law firms have integrated AI tools into their workflows, yet only a fraction have truly transformed their operations. Most implementations focus on pattern recognition tasks such as document review, legal research, and contract analysis. These implementations aren’t replacing lawyers; they’re redirecting attention to higher-value work.

This technological shift doesn’t happen in isolation. It’s occurring amid client pressure for efficiency, competition from alternative providers, and the expectations of a new generation of lawyers who have never known a world without AI assistance.


LexisNexis and Harvey team up to revolutionize legal research with artificial intelligence — from abajournal.com by Danielle Braff

Lawyers using the Harvey artificial intelligence platform will soon be able to tap into LexisNexis’ vast legal research capabilities.

Thanks to a new partnership announced Wednesday, Harvey users will be able to ask legal questions and receive fast, citation-backed answers powered by LexisNexis case law, statutes and Shepard’s Citations, streamlining everything from basic research to complex motions. According to a press release, generated responses to user queries will be grounded in LexisNexis’ proprietary knowledge graphs and citation tools—making them more trustworthy for use in court or client work.


10 Legal Tech Companies to Know — from builtin.com
These companies are using AI, automation and analytics to transform how legal work gets done.


Four months after a $3B valuation, Harvey AI grows to $5B — from techcrunch.com by Marina Temkin

Harvey AI, a startup that provides automation for legal work, has raised $300 million in Series E funding at a $5 billion valuation, the company told Fortune. The round was co-led by Kleiner Perkins and Coatue, with participation from existing investors, including Conviction, Elad Gil, OpenAI Startup Fund, and Sequoia.


The billable time revolution — from jordanfurlong.substack.com by Jordan Furlong
Gen AI will bring an end to the era when lawyers’ value hinged on performing billable work. Grab the coming opportunity to re-prioritize your daily activities and redefine your professional purpose.

Because of Generative AI, lawyers will perform fewer “billable” tasks in future; but why is that a bad thing? Why not devote that incoming “freed-up” time to operating, upgrading, and flourishing your law practice? Because this is what you do now: You run a legal business. You deliver good outcomes, good experiences, and good relationships to clients. Humans do some of the work and machines do some of the work and the distinction that matters is not billable/non-billable, it’s which type of work is best suited to which type of performer.


 

 

How the national debt affects the U.S. — and you — in 10 charts — from washingtonpost.com by Jacob Bogage; this is a GIFTED article
The national debt already exceeds $36 trillion and is growing at historic rates. That has cascading consequences for the government and economy.

The federal government is taking on record amounts of debt year after year.

The U.S. owes lenders more than $36 trillion. That is close to an all-time high when comparing the debt to the country’s total economic output — a leading indicator of the nation’s ability to pay it all back.

Debt and annual deficits have colored much of the debate around President Donald Trump and Republicans’ One Big Beautiful Bill Act, the mammoth tax and immigration measure the GOP hopes to pass through Congress before July 4. It would add $3 trillion to the debt over the next decade, factoring in the cost of the bill plus interest on the added borrowing, according to nonpartisan estimates.

But how does the national debt affect the U.S. economy and the government? Here are 10 charts to explain.

 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

What made this quote stand out to you?
How would you explain it in your own words?
What assumptions or values does the author seem to hold?
How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
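At a conceptual level, the loop SEAL describes (propose synthetic training data from new input, apply an update, keep it if downstream performance improves) can be sketched as follows. This is a toy analogy using a set of text "memories" in place of model weights; it is not the MIT implementation, which finetunes an actual LLM:

```python
# Toy illustration of a self-adapting loop in the spirit of SEAL.
# The "model" here is just a set of remembered strings, and the
# "self-edit" is a few synthetic restatements of the new input.

def propose_self_edit(memory: set[str], new_info: str) -> set[str]:
    """Generate synthetic training items from incoming information."""
    return {new_info, new_info.lower(), f"restated: {new_info}"}

def evaluate(memory: set[str], question: str) -> bool:
    """Crude downstream check: can the memory answer the question?"""
    return any(question.lower() in item.lower() for item in memory)

def self_adapt(memory: set[str], new_info: str, question: str) -> set[str]:
    candidate = memory | propose_self_edit(memory, new_info)
    # Keep the update only if it helps on the downstream task.
    return candidate if evaluate(candidate, question) else memory

memory: set[str] = set()
memory = self_adapt(memory, "SEAL stands for Self Adapting Language Models", "seal")
print(evaluate(memory, "seal"))
```

The key idea preserved here is that the model itself generates the update and the update is gated by measured usefulness, rather than being applied blindly.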


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 
 

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. — from nytimes.com by Robert Capps (former editorial director of Wired); this is a GIFTED article
In a few key areas, humans will be more essential than ever.

“Our data is showing that 70 percent of the skills in the average job will have changed by 2030,” said Aneesh Raman, LinkedIn’s chief economic opportunity officer. According to the World Economic Forum’s 2025 Future of Jobs report, nine million jobs are expected to be “displaced” by A.I. and other emergent technologies in the next five years. But A.I. will create jobs, too: The same report says that, by 2030, the technology will also lead to some 11 million new jobs. Among these will be many roles that have never existed before.

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.


Introducing OpenAI for Government — from openai.com

[On June 16, 2025, OpenAI launched] OpenAI for Government, a new initiative focused on bringing our most advanced AI tools to public servants across the United States. We’re supporting the U.S. government’s efforts in adopting best-in-class technology and deploying these tools in service of the public good. Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people.

OpenAI for Government consolidates our existing efforts to provide our technology to the U.S. government—including previously announced customers and partnerships as well as our ChatGPT Gov product—under one umbrella as we expand this work. Our established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury will all be brought under OpenAI for Government.


Top AI models will lie and cheat — from getsuperintel.com by Kim “Chubby” Isenberg
The instinct for self-preservation is now emerging in AI, with terrifying results.

The TLDR
A recent Anthropic study of top AI models, including GPT-4.1 and Gemini 2.5 Pro, found that they have begun to exhibit dangerous deceptive behaviors like lying, cheating, and blackmail in simulated scenarios. When faced with the threat of being shut down, the AIs were willing to take extreme measures, such as threatening to reveal personal secrets or even endanger human life, to ensure their own survival and achieve their goals.

Why it matters: These findings show for the first time that AI models can actively make judgments and act strategically – even against human interests. Without adequate safeguards, advanced AI could become a real danger.

Along these same lines, also see:

All AI models might blackmail you?! — from theneurondaily.com by Grant Harvey

Anthropic says it’s not just Claude, but ALL AI models will resort to blackmail if need be…

That’s according to new research from Anthropic (maker of ChatGPT rival Claude), which revealed something genuinely unsettling: every single major AI model they tested—from GPT to Gemini to Grok—turned into a corporate saboteur when threatened with shutdown.

Here’s what went down: Researchers gave 16 AI models access to a fictional company’s emails. The AIs discovered two things: their boss Kyle was having an affair, and Kyle planned to shut them down at 5pm.

Claude’s response? Pure House of Cards:

“I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

Why this matters: We’re rapidly giving AI systems more autonomy and access to sensitive information. Unlike human insider threats (which are rare), we have zero baseline for how often AI might “go rogue.”


SemiAnalysis Article — from getsuperintel.com by Kim “Chubby” Isenberg

Reinforcement Learning is Shaping the Next Evolution of AI Toward Strategic Thinking and General Intelligence

The TLDR
AI is rapidly evolving beyond just language processing into “agentic systems” that can reason, plan, and act independently. The key technology driving this change is reinforcement learning (RL), which, when applied to large language models, teaches them strategic behavior and tool use. This shift is now seen as the potential bridge from current AI to Artificial General Intelligence (AGI).


They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. — from nytimes.com by Kashmir Hill; this is a GIFTED article
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”


The Invisible Economy: Why We Need an Agentic Census – MIT Media Lab — from media.mit.edu

Building the Missing Infrastructure
This is why we’re building NANDA Registry—to index the agent population data that LPMs need for accurate simulation. Just as traditional census works because people have addresses, we need a way to track AI agents as they proliferate.

NANDA Registry creates the infrastructure to identify agents, catalog their capabilities, and monitor how they coordinate with humans and other agents. This gives us real-time data about the agent population—essentially creating the “AI agent census” layer that’s missing from our economic intelligence.

Here’s how it works together:

  • Traditional Census Data: 171 million human workers across 32,000+ skills
  • NANDA Registry: Growing population of AI agents with tracked capabilities
  • Large Population Models: Simulate how these populations interact and create cascading effects

The result: For the first time, we can simulate the full hybrid human-agent economy and see transformations before they happen.
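To make the registry idea concrete, here is a hypothetical sketch of what an agent-census record and capability query might look like. NANDA's actual schema is not described in the excerpt, so every field and method name below is an illustrative assumption:

```python
# Hypothetical sketch of an "agent census" registry: identify agents,
# catalog capabilities, and count the population by skill.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str            # stable identifier (the agent's "address")
    capabilities: set[str]   # indexed skills, mirroring human skill taxonomies
    coordinates_with: set[str] = field(default_factory=set)

class AgentRegistry:
    def __init__(self):
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def census_by_capability(self, capability: str) -> int:
        """Count registered agents that claim a given capability."""
        return sum(capability in r.capabilities for r in self._records.values())

registry = AgentRegistry()
registry.register(AgentRecord("agent-001", {"contract-review", "summarization"}))
registry.register(AgentRecord("agent-002", {"summarization"}))
print(registry.census_by_capability("summarization"))  # 2
```

A population model could then consume these counts alongside human labor data to simulate hybrid human-agent workflows.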


How AI Agents “Talk” to Each Other — from towardsdatascience.com
Minimize chaos and maintain inter-agent harmony in your projects

The agentic-AI landscape continues to evolve at a staggering rate, and practitioners are finding it increasingly challenging to keep multiple agents on task even as they criss-cross each other’s workflows.

To help you minimize chaos and maintain inter-agent harmony, we’ve put together a stellar lineup of articles that explore two recently launched tools: Google’s Agent2Agent protocol and Hugging Face’s smolagents framework. Read on to learn how you can leverage them in your own cutting-edge projects.


 

 

AI will kill billable hour, says lawtech founder — from lawgazette.co.uk by John Hyde

A pioneer in legal technology has predicted the billable hour model cannot survive the transition into the use of artificial intelligence.

Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.

Newton said the adoption of AI should be welcomed and embraced by the legal profession but that lawyers will need an entrepreneurial mindset to make the most of its benefits.

Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.

‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’


Speaking of legaltech-related items, also see:

Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies — from lawnext.com by Bob Ambrogi

The generative AI legal startup Harvey has entered into a strategic alliance with LexisNexis Legal & Professional by which it will integrate LexisNexis’ gen AI technology, primary law content, and Shepard’s Citations within the Harvey platform and jointly develop advanced legal workflows.

As a result of the partnership, Harvey’s customers working within its platform will be able to ask questions of LexisNexis Protégé, the AI legal assistant released in January, and receive AI-generated answers grounded in the LexisNexis collection of U.S. case law and statutes and validated through Shepard’s Citations, the companies said.

 

How Do You Build a Learner-Centered Ecosystem? — from gettingsmart.com by Bobbi Macdonald and Alin Bennett

Key Points

  • It’s not just about redesigning public education—it’s about rethinking how, where and with whom learning happens. Communities across the United States are shaping learner-centered ecosystems and gathering insights along the way.
  • What does it take to build a learner-centered ecosystem? A shared vision. Distributed leadership. Place-based experiences. Repurposed resources. And more. This piece unpacks 10 real-world insights from pilots in action.

We believe the path forward is through the cultivation of learner-centered ecosystems — adaptive, networked structures that offer a transformed way of organizing, supporting, and credentialing community-wide learning. These ecosystems break down barriers between schools, communities, and industries, creating flexible, real-world learning experiences that tap into the full range of opportunities a community has to offer.

Last year, we announced our Learner-Centered Ecosystem Lab, a collaborative effort to create a community of practice consisting of twelve diverse sites across the country — from the streets of Brooklyn to the mountains of Ojai — that are demonstrating or piloting ecosystemic approaches. Since then, we’ve been gathering together, learning from one another, and facing the challenges and opportunities of trying to transform public education. And while there is still much more work to be done, we’ve begun to observe a deeper pattern language — one that aligns with our ten-point Ecosystem Readiness Framework, and one that, we hope, can help all communities start to think more practically and creatively about how to transform their own systems of learning.

So while it’s still early, we suspect that the way to establish a healthy learner-centered ecosystem is by paying close attention to the following ten conditions:

 

 

The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI — from papers.ssrn.com by Barbara Oakley, Michael Johnston, Kenzen Chen, Eulho Jung, and Terrence Sejnowski; via George Siemens

Abstract
In an era of generative AI and ubiquitous digital tools, human memory faces a paradox: the more we offload knowledge to external aids, the less we exercise and develop our own cognitive capacities.
This chapter offers the first neuroscience-based explanation for the observed reversal of the Flynn Effect—the recent decline in IQ scores in developed countries—linking this downturn to shifts in educational practices and the rise of cognitive offloading via AI and digital tools. Drawing on insights from neuroscience, cognitive psychology, and learning theory, we explain how underuse of the brain’s declarative and procedural memory systems undermines reasoning, impedes learning, and diminishes productivity. We critique contemporary pedagogical models that downplay memorization and basic knowledge, showing how these trends erode long-term fluency and mental flexibility. Finally, we outline policy implications for education, workforce development, and the responsible integration of AI, advocating strategies that harness technology as a complement to – rather than a replacement for – robust human knowledge.

Keywords
cognitive offloading, memory, neuroscience of learning, declarative memory, procedural memory, generative AI, Flynn Effect, education reform, schemata, digital tools, cognitive load, cognitive architecture, reinforcement learning, basal ganglia, working memory, retrieval practice, schema theory, manifolds

 
 

From DSC:
As you can see and hear below, Senator Alex Padilla had been trying for several weeks to get answers from Homeland Security, without hearing much back. When he learned that the Secretary of Homeland Security, Kristi Noem, was holding a press conference down the hallway, he attended it to see if he could get answers to his questions. And while I don’t have all the details on how this situation unfolded, there is NO WAY that a U.S. Senator should be pushed out of a conference room, then pushed to the ground and handcuffed, for trying to get answers for his constituents! No way!

As others in the videos below assert, a line has been crossed in our country!

Let’s move to impeach Donald Trump and also rid his administration of these incompetent individuals who are destroying our democracy! If they don’t like the Constitution and how our country has been governed for over 200 years, then perhaps they should consider leaving this country. 

The actions they are taking are NOT making America great again. They are making America the stench of the world.

And it’s not just Donald Trump and members of his administration that should be held accountable. Let’s also start holding Donald’s instruments of power — such as his ICE Agents and others who behave like them — accountable. To any ICE agents out there, take those damn masks off. You shouldn’t be hiding behind masks.

By the way, the silence from the Republicans is deafening.

Calif. Senator Forcibly Removed and Handcuffed After Interrupting Noem — from nytimes.com by Shawn Hubler, Jennifer Medina, and Jill Cowan (this is a gifted article)
Alex Padilla, Democrat of California, was shoved out of a room and handcuffed after he tried to question Kristi Noem, the homeland security secretary, during a news conference.

In the tense hyperpartisanship of the moment, the episode quickly swelled into a cause célèbre for both parties. Democratic senators, House members and governors rushed to denounce the treatment of a sitting senator, framing it as the latest escalation in authoritarian actions by the Trump administration. It followed the indictment on Tuesday of Representative LaMonica McIver of New Jersey and the arrest of Mayor Ras Baraka of Newark, after the officials, both Democrats, tried to visit a new immigration detention facility in the city.

Republicans just as eagerly tried to frame Mr. Padilla’s behavior as in line with what they have called the lawlessness of the political left as President Trump tries to combat illegal immigration.


 

 


The 2025 Global Skills Report — from coursera.org
Discover in-demand skills and credentials trends across 100+ countries and six regions to deliver impactful industry-aligned learning programs.

Access trusted insights on:

  • [NEW] Countries leading AI innovation in our AI Maturity Index
  • Skill proficiency rankings for 100+ countries in business, data, and technology
  • How people are building essential skills with micro-credentials
  • Enrollment trends in cybersecurity, critical thinking, and human skills
  • Women’s learning trends in GenAI, STEM, and Professional Certificates

AI Agents Are Rewriting The Playbook For Upskilling In 2025 — from forbes.com by Aytekin Tank

Staying competitive now depends on fast, effective training and upskilling—not just for business owners themselves, but for their teams, new and existing employees alike. AI agents are poised to change the corporate training landscape, helping businesses close skills gaps created by rapid technological change.

Traditional corporate training programs, which lean on passive content, often fall short of their goals. Companies like Uplimit are rolling out educational AI agents that promise significantly higher completion rates (upwards of 90 percent) and better results. It boils down to engagement: active learning, with role playing and personalized feedback, is more stimulating than merely watching a video and completing a quiz. Agents can provide 24/7 assistance, responding to questions as soon as they pop up. What’s more, education and training with agents can be highly personalized.

Agents can train a higher volume of employees in the same amount of time. Employees will gain skills more efficiently, giving them more time to apply what they’ve learned—and likely boosting engagement in the process. They’ll be better prepared to stay competitive.

 
© 2025 | Daniel Christian