You Can’t Future-Proof Your Career From AI, But You Can Do This — from builtin.com by Liz Tran
Agility has become the most important skill to cultivate in today’s job market. Here’s how to get started.

Summary: Job seekers facing future panic should prioritize agility over information consumption. Build it by focusing on 30-day action experiments, reframing resumes around durable skills like problem-solving and embracing uncertainty through stretch applications and real-world feedback.

The antidote is what I call AQ — the agility quotient — which is your capacity to face change, disappointment and uncertainty without losing your footing. Unlike IQ, which measures what you know, AQ measures how fast you adapt when the rules change. Right now, it’s the most important career asset you have. Here’s how to build it.

What Is Agility Quotient (AQ)?
AQ is a measure of an individual’s capacity to adapt quickly when rules, industries or circumstances change. Unlike IQ, which focuses on existing knowledge, AQ emphasizes the ability to face uncertainty and disappointment without losing one’s footing, prioritizing action and iteration over exhaustive planning.

 

The Course Is Dying as the Unit of Learning — from drphilippahardman.substack.com by Dr Philippa Hardman
Here’s why, and what’s replacing it

What the Bleeding Edge Looks like in Practice
So what does “the new stack” actually look like when organisations lean into this? Here are four real patterns already in play.

Engineering: from courses to in-workflow AI coaching.
Product development: from courses to craft-specific agents.
Compliance: from annual courses to nudge systems.
Enablement systems, not catalogues.

 

Google expands Search Live globally with voice and camera AI — from digitaltrends.com by Varun Mirchandani
The feature is now available in 200+ countries with multilingual support

Think of it as Google Search… but you talk to it. Search Live lets users ask questions using voice or even their phone’s camera, both on Android and iOS, via the Google App, and get spoken responses along with relevant web links.

This is a pretty big shift. Google isn’t just improving search, but it’s also slowly replacing the whole “type and scroll” experience. With Search Live, users can talk, ask follow-ups, and interact naturally, making it feel more like a conversation than a query. It’s basically ChatGPT-style interaction, but baked right into Google Search.


 
 

From DSC:
The types of postings/articles (such as the one below) make me ask, are we not shooting ourselves in the foot with AI and recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them — especially valuable ones. So as this article points out, there will be threats to the long-term health of our talent pipelines unless we can take steps to thwart those impacts — and to do so fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have run the gamut: hype, fear and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt, given their lack of job-market experience and the heightened financial pressure to secure work quickly as student-debt repayment looms for recent graduates.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal of perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.
 



The AI ‘hivemind’: Why so many student essays sound alike — from hechingerreport.org by Jill Barshay
A study of more than 70 large language models found similar answers to brainstorming and creative writing prompts

The answers were frequently indistinguishable across different models by different companies that have different architectures and use different training data. The metaphors, imagery, word choices, sentence structures — even punctuation — often converged. Jiang’s team called this phenomenon “inter-model homogeneity” and quantified the overlaps and similarities. To drive the point home, Jiang titled her paper “Artificial Hivemind.” The study won a best paper award at the Conference on Neural Information Processing Systems in December 2025, one of the premier gatherings for AI research.


AI Has No Moral Compass. Do You? — from michelleweise.substack.com by Michelle Weise & Dana Walsh
Why the Age of AI Demands We Take Character Formation Seriously

Here’s something to chew on:

Anthropic, the company behind Claude — a chatbot used by 30 million users per month — has exactly one person (whom we know of) working on AI ethics. One. A young Scottish philosopher is doing the vital work of training a large language model to discern right from wrong.

I don’t say this to shame Anthropic. In fact, Anthropic appears to be the only company (that we know of) being explicit about the moral foundations and reasoning of its chatbot. Hundreds of millions of users worldwide rely on tools from other LLM providers that do not appear to be cultivating an explicit moral compass from within.

I raise this because this is yet another example of where we are: extraordinary technical power advancing without an equally strong moral infrastructure to support it.

Why do we keep producing people who are skilled but not wise?

 

Across the divide: reimagining faculty-staff collaboration in higher education — from timeshighereducation.com by Saskia van de Gevel
Academic units do best when they harness different viewpoints – from field scientists and curriculum designers to extension professionals – to drive innovation and relevance. Saskia van de Gevel offers proactive advice

Universities are not sustained by individual leaders or isolated units. They are sustained by teams of people who bring different kinds of expertise to a shared mission. When faculty and professional staff collaborate as genuine partners – aligned around outcomes, clear about roles and committed to mutual respect – institutions become more resilient, innovative and effective.

Also from timeshighereducation.com, see:

Again, we don’t send them 200 CVs. We might send 20, but they’re meticulously shortlisted. The employer saves time, the student feels they are being taken seriously and trust builds quickly on both sides.

And because we work closely with employers, we learn something universities often struggle to find out early enough: what the market is asking for now.

What academics need to know: we can’t do this without you
If I could say one thing to academic colleagues anywhere, it’s that employability can’t sit next to the curriculum. It has to live with it.

 

5 Tech Strategies to Enhance Student-Led Learning — from edutopia.org by Rachelle Dené Poth
While technology has potential to distract students, it can also boost engagement and help them actively demonstrate their learning.

Over the years, I have learned that engagement doesn’t happen simply by adding technology. It increases when we give students more ownership by designing experiences that allow them to build, collaborate, reflect, and teach one another. Depending on how we use it, technology can either amplify engagement or distract from it; it can build students’ confidence in learning, but it can also breed passivity. When technology is used to amplify students’ voice, choice, and ownership in learning, their engagement will naturally increase.

Here are five strategies and some digital tools that can be used across grade levels and content areas to boost student engagement, build confidence, foster collaboration, and support meaningful learning experiences.


Project-Based Learning (PBL)
Implementing a PBL Design Challenge in Your School — from edutopia.org by Lisa Beck & Kim Mishkin
A weeklong, schoolwide project-based learning challenge encourages students to try to tackle meaningful problems.

For the past five years, Hudson Lab School (HLS), a K–8 progressive school committed to project-based learning (PBL), has kicked off each school year with an exciting tradition: Design Challenge Week. In five days, students take on a real-world problem, explore each phase of the design process, and present what they created and learned to an authentic audience. Design Challenge Week introduces concepts that students will revisit all year and offers a model for how any educational setting could experiment with PBL on a smaller scale. Even short, well-designed challenges can lead to deeply engaged learning experiences.


How to Give Students Directions They Actually Understand — from edutopia.org by Mary Davenport
Making small changes in your instructions can have a significant impact on students’ understanding and engagement.

No more than a minute after you’ve provided instruction on the day’s targeted content and given students directions for their next task, some brave soul utters the line that brings tired teachers to their knees: “What are we supposed to be doing?”

None of us want this. As teachers, we all want students to fully understand what they’re supposed to be doing so that they can be successful as they do it.

Good news: A few small changes in how we give directions can be the lever that boosts student understanding and engagement.

 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between individual legal professionals’ use of AI and what firms are doing in terms of AI adoption and guidance. Almost 75% of respondents said they were using general-purpose AI tools like ChatGPT and Claude for work. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While focusing on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.

 

How to Get Consistent, On-Brand Course Images from Any AI Image Tool — from drphilippahardman.substack.com by Dr. Philippa Hardman
A 3-step workflow that works every time — whatever AI tool you’re using

Most designers try to describe their way to an image. That’s the wrong approach. The goal is to show the tool the world it should be working in, then give it the minimum it needs to place your subject inside that world.

Every long, over-specified prompt is a sign that your visual inputs aren’t doing enough work.

The fix is a 3-step process which gives you superpowers in AI image generation…


How AI Could Transform, or Replace, the LMS — from futureupodcast.com by Jeff Selingo, Michael Horn, and Matthew Pittinsky

Tuesday, March 10, 2026 – For 30 years now, colleges have relied on the Learning Management System, or LMS, as a key portal for professors and students to teach and learn. It’s a tool that has helped colleges adapt to online learning and bring digital tools to classroom teaching. But generative AI seems poised to disrupt the LMS. And it’s unclear whether the LMS will evolve—or be replaced altogether. For this episode, Jeff and Michael talk with a pioneer of the technology, Matthew Pittinsky, about the lessons of past moments of tech disruption like the smartphone and cloud computing and about what could be different this time. This episode is made with support from Ascendium Education Group.


Gemini, Explained — from wondertools.substack.com by Jeremy Caplan
5 features worth your time — tested and compared

Google’s AI, Gemini, has quickly become one of the AI tools I rely on most. It builds dashboards and creates remarkable infographics. It spins out comprehensive research reports in minutes that would once have taken days to assemble.

It’s improving every month. On March 13, Google announced Ask Maps, so you can query Gemini about things like “Which nearby tennis courts are open with lights so I can play tonight?” On March 10, Gemini added new integrations to build, summarize, and analyze your Google Docs, Sheets, and Slides.

In today’s post below: catch up on the Gemini features worth your time, candid comparisons with other AI tools, and answers to the questions I hear most.


How we’re reimagining Maps with Gemini — from blog.google
Ask Maps answers your real-world questions with a conversation, and Immersive Navigation makes your route more intuitive.

Today, Google Maps is fundamentally changing what a map can do. By bringing together the world’s freshest map with our most capable Gemini models, we’re transforming exploration into a simple conversation and making driving more intuitive than ever with our biggest navigation upgrade in over a decade.

Ask anything about any place
We’re introducing Ask Maps, a new conversational experience that answers complex, real-world questions a map could never answer before. Now you can ask for things like, “My phone is dying — where can I charge it without having to wait in a long line for coffee?” or “Is there a public tennis court with lights on that I can play at tonight?” Previously, finding this information meant lots of research and sifting through reviews. But now, you can just tap the “Ask Maps” button and get your questions answered conversationally, with a customized map to help you visualize your options.

 

Cinematic Prompting Without IP — from heatherbcooper.substack.com by Heather Cooper
Stop saying “Blade Runner” style.

Beginner Prompt Structure
If you’re new to prompting, start with this framework:
[Subject] + [Description] + [Setting] + [Lighting] + [Style/Medium]

The advanced framework adds three layers:
[Lens] + [Subject + Action] + [Environment + Atmosphere] + [Lighting + Colour] + [Mood/Emotion] + [Technical Detail]
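The advanced framework can be treated as an ordered template: fill the slots you need, skip the rest, and join them in order. A minimal sketch of that idea (the function name, field names, and example values are mine for illustration; they aren’t tied to any particular image tool’s API):

```python
# Hypothetical helper that assembles an image prompt from the
# advanced framework's labeled slots, in the order the framework lists them.
def build_prompt(lens=None, subject="", action="", environment="",
                 atmosphere="", lighting="", color="", mood="",
                 technical=None):
    """Join the non-empty slots, in framework order, into one prompt string."""
    parts = [lens, f"{subject} {action}".strip(), environment,
             atmosphere, lighting, color, mood, technical]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    lens="35mm lens, shallow depth of field",
    subject="a lone courier", action="crossing a rain-slick plaza",
    environment="dense neon-lit city at night",
    atmosphere="light fog",
    lighting="cool teal rim light", color="magenta accents",
    mood="isolated, determined",
    technical="film grain, high contrast",
)
```

Note that the example describes the cinematic qualities directly (lens, lighting, mood) rather than naming a film — the same substitution the article recommends in place of “Blade Runner style.”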

 
 

Something Big Is Happening — from shumer.dev by Matt Shumer; via the BIG Questions Institute, where I found this article (see their piece below)

I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.


They’ve now done it. And they’re moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.


What “Something Big Is Happening” Means for Schools — from/by the BIG Questions Institute
Matt Shumer’s newsletter post Something Big Is Happening was read over 80 million times within a week of its publication on February 9.

Still, it’s worth reading Shumer’s post. Given the claims and warnings in Something Big Is Happening (and countless other articles), how would you truly, honestly respond to these questions:

  • What will the purpose of school be in 5 years?
  • What are we doing now that we must leave behind right away?
  • What can we leave behind gradually?
  • What does rigor look like in this AI-powered world?
  • Does our strategy look like making adjustments at the margins or are we preparing our students for a fundamental shift?
  • What is our definition of success? How do the implications of AI and jobs (and other important forces, from geopolitical shifts and climate change to mental health needs and shifting generational values) impact the outcomes we prioritize? What is the story of success we want to pass on to our students and wider community?
 

Claude Code Puts Tech Workers on Notice — from builtin.com by Matthew Urwin
Anthropic is flexing its new and improved Claude Code, which used vibe coding to build the company’s latest tool, Cowork. The feat has inspired both excitement and angst within the tech world as the future of work continues to grow more uncertain.

Summary:
Anthropic is becoming the leader in enterprise artificial intelligence, thanks to upgrades made to Claude Code. The coding tool practically built Anthropic’s Cowork product — sparking both excitement around the possibilities of vibe coding and fears around the job outlook of tech workers.

 

The Campus AI Crisis — by Jeffrey Selingo; via Ryan Craig
Young graduates can’t find jobs. Colleges know they have to do something. But what?

Only now are colleges realizing that the implications of AI are much greater and are already outrunning their institutional ability to respond. As schools struggle to update their curricula and classroom policies, they also confront a deeper problem: the suddenly enormous gap between what they say a degree is for and what the labor market now demands. In that mismatch, students are left to absorb the risk. Alina McMahon and millions of other Gen-Zers like her are caught in a muddled in-between moment: colleges only just beginning to think about how to adapt and redefine their mission in the post-AI world, and a job market that’s changing much, much faster.

“Colleges and universities face an existential issue before them,” said Ryan Craig, author of Apprentice Nation and managing director of a firm that invests in new educational models. “They need to figure out how to integrate relevant, in-field, and hopefully paid work experience for every student, and hopefully multiple experiences before they graduate.”

 
© 2025 | Daniel Christian