AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation; I consider them guardrails that help schools adopt technology responsibly. The key is protecting students, educators, and institutions while staying informed. Understanding the legal landscape and the potential legal implications of using AI in classrooms helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence

Legal AI Access at 83%, But Trust Issues Remain — from artificiallawyer.com

A new survey of over 200 in-house and law firm leaders provides solid evidence that while AI tools are now ‘standard’ across our sector, trust in AI outputs fundamentally drives usage, along with ROI – and vice versa.

The data, from ALSP Factor, shows that 83% had ‘broad AI access’, up from 61% in 2025 – in itself a very positive development, and one that tells us legal AI is becoming ubiquitous for commercial lawyers, with around 54% using such tools ‘often’.


From DSC:
I have been proposing that the AI-based learning platform of the future will be constantly doing this — every single day. It will know what the in-demand skills are — at any given moment in time. It will then be able to direct you to resources that will help you gain those skills. Though in my vision, the system is querying actual/open job descriptions, not analyzing learning data from enterprise learners. Perhaps I should add that to the vision.
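A minimal sketch, in Python, of what that daily querying step might look like, assuming a hypothetical open job-postings endpoint and response shape (every URL, field name, and skill below is made up for illustration):

# Hypothetical sketch: rank in-demand skills from open job postings.
# The endpoint, response format, and skill list are assumptions for
# illustration only -- not a real API or product.
from collections import Counter
import requests

TRACKED_SKILLS = ["python", "sql", "data analysis", "generative ai", "cloud security"]

def fetch_open_postings(url="https://example.org/api/postings"):
    """Pull current openings from a (hypothetical) public endpoint."""
    resp = requests.get(url, params={"status": "open"}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed: a list of {"title": ..., "description": ...}

def rank_skills(postings):
    """Count how often each tracked skill appears in posting descriptions."""
    counts = Counter()
    for posting in postings:
        text = posting.get("description", "").lower()
        for skill in TRACKED_SKILLS:
            if skill in text:
                counts[skill] += 1
    return counts.most_common()

if __name__ == "__main__":
    for skill, n in rank_skills(fetch_open_postings()):
        print(f"{skill}: mentioned in {n} open postings")

A learning platform could run a loop like this daily and map the top-ranked skills to recommended resources.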


Coursera’s Job Skills Report 2026: Top skills for your students — from coursera.org

The Job Skills Report 2026 analyzes learning data from more than 6 million enterprise learners to identify the future job skills organizations need most. It’s designed for HR and L&D leaders; data, IT, and software & product development leaders; higher education administrators; and government agencies seeking actionable insights on workforce skills trends and AI-driven transformation.

Drawing on data from 6 million enterprise learners across nearly 7,000 organizations, the Job Skills Report 2026 guides you through the skills reshaping the global economy. This year’s analysis spans Data, IT, and Software & Product Development—and the Generative AI skills becoming essential for every role.


From DSC:
These types of postings/articles (such as the one below) make me ask: aren’t we shooting ourselves in the foot with AI and recent college graduates? If the bottom rungs continue to disappear, internships and apprenticeships can only go so far. There aren’t enough of them — especially valuable ones. So as this article points out, there will be threats to the long-term health of our talent pipelines unless we take steps to thwart those impacts — and do so fairly soon.

To me…vocational training and jobs are looking better all the time — i.e., plumbers, carpenters, electricians, mechanics, and more.


Can New Graduates Compete With AI? — from builtin.com by Richard Johnson
The increasing adoption of AI automation is compressing early-career jobs. How should new graduates get a foothold in the economy now?

Summary: AI is hollowing out entry-level roles by automating routine tasks, eliminating a rung on the career ladder. New graduates face intense competition and a rising skill floor. While firms gain short-term productivity, they risk a long-term talent shortage by eliminating junior training grounds.

Conversations about AI have covered a lot of ground: hype, fear, and slop. But while some roll their eyes at yet another automation headline, soon-to-be graduates are watching the labor market with a very different level of urgency. They’re entering a world where the old paradox of needing experience to get experience is colliding with a new reality: AI is absorbing the standardized, routine tasks that once defined entry-level work. The result isn’t just a shift in job descriptions or skill requirements, but rather a structural reshaping of the career pipeline.

Entry-level workers face an outsized disruption to their long-term career trajectories. They have the least buffer to adapt, given their lack of job market experience and the heightened financial pressure to secure work quickly as student-debt repayment periods loom.

Momentum early in one’s career matters, and the first job on a resume shapes future compensation bands and opportunities. It also serves as a signal of perceived specialization or, at minimum, interest. Losing that foothold has compounding effects on one’s career trajectory.


Also relevant/see:

New Anthropic Institute to Study Risks and Economic Effects of Advanced AI — from campustechnology.com by John K. Waters

Key Takeaways

  • Anthropic has launched the Anthropic Institute, a new research effort focused on the biggest societal challenges posed by more powerful AI systems.
  • The institute will study how advanced AI could affect the economy, the legal system, public safety, and broader social outcomes.
  • Anthropic co-founder Jack Clark will lead the institute in a new role as the company’s head of public benefit.
  • The new unit brings together Anthropic’s existing red-teaming, societal impacts, and economic research work, while adding new hires and new research areas.

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between use of AI by individual legal professionals and what firms are doing when it comes to AI adoption and guidance. Almost 75% of those who responded said they were using general-purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While much of the focus was on the risks of AI going wrong, it is only fair to mention the conversations I had around using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.


The Future of College in an AI World — from linkedin.com by Jeff Selingo
In today’s issue: the tension over AI in higher ed; application inflation continues and testing is back; and what’s the future of the original classroom technology, the learning management system?


Hundreds of higher ed and industry leaders gathered Tuesday for a summit on AI and the future of learning at the University of Michigan.

Conversations like the one we had at Michigan this week are necessary, but the action rarely matches the ambition.

  • We say the humanities are the operating system of an AI world, yet students and parents don’t believe it. They’re voting with their feet toward STEM, business, and narrowly tailored majors they believe will lead to a job.
  • Meanwhile, colleges are quietly eliminating the very humanities degrees the panelists were championing, employers are cutting the entry rungs off the career ladder for new graduates, and as Podium Education co-founder Christopher Parrish reminded us yesterday, there’s a yawning gap between demand for experience and the internships that actually exist.


AI Music Generators: Teaching With These Catchy AI Tools — from techlearning.com by Erik Ofgang
AI music generators are getting better and better, and there are more applications in the classroom as a result.

Are All AI Music Generators More Or Less The Same?
No. After experimenting with several free ones, I found a wide range of quality with the same prompts.

Gemini is the only one I’d currently recommend. It’s user-friendly but limited and only creates 30-second clips. Other music generators could potentially outperform Gemini with prompt adjustments. The ones I tried did better with the instrumentals but struggled more with the lyrics, and that kind of defeated the purpose of the tool for me.


ChatDOC: Teaching With The AI Summarizing Tool — from techlearning.com by Erik Ofgang
ChatDOC lets users turn any PDF into an AI chatbot that can summarize the text, answer questions, and generate quizzes.

What Is ChatDOC?
ChatDOC is an AI designed to help users interact with PDFs of various types, be it research papers, short stories, or chapters from larger works. Users upload a PDF and then have the opportunity to “chat” with that document, that is, speak with a chatbot that bases its answers on the uploaded text.

ChatDOC can perform tasks such as providing a short summary, searching for specific terms, explaining the overall theme of a work of literature, or unpacking the science in a research paper.

Other similar tools are out there, but ChatDOC is definitely one of the better PDF readers I’ve used. Its free version is quick and easy to use, and it delivers on its promise of providing an AI that can discuss a given document with users and even quiz them on it.


From AI access to workforce readiness — from chieflearningofficer.com by Johnny Hamilton, Amy Stratbucker, & Brad Bigelow
Is your workforce using the right tool with an outdated mindset and playbook? Why old playbooks fall short — and what learning leaders must do next.

The leadership opportunity
Organizations do not need to predict every future AI capability. They need systems that allow people to explore with curiosity, practice safely, reflect deeply and adapt continuously — starting with what they already have and extending as capabilities evolve.

For CLOs, this is a moment to lead from the center of change — designing workforce readiness that keeps pace with accelerating technology while making work more rewarding for employees and more valuable for the organization. That is how AI moves from the promise of transformation to demonstrated readiness and, ultimately, from promise to performance.


Addendums on 3/19/26:
How to Build Practice-Based Learning Activities with AI — from drphilippahardman.substack.com by Dr Philippa Hardman
Four evidence-based methods for designing, building & deploying active learning activities with your favourite LLM

Most L&D teams are using AI to make content faster. The real opportunity is using it as a practice engine.

The Synthesia 2026 AI in L&D Report found that the fastest-growing areas of planned AI adoption aren’t in content creation — they’re in assessments and simulations (36%), adaptive pathways (33%), and AI tutors (29%). In other words: L&D teams are starting to realise that the most powerful use of AI isn’t producing learning materials. It’s creating environments where learners actually practise.

And you can build these right now — no dev team, no custom platform, no code. Each method below includes a prompt you can paste into your preferred AI tool to generate a working interactive prototype: a self-contained practice activity with a briefing screen, a live AI interaction, and a debrief — all running in the browser, ready to share with stakeholders or deploy to learners.
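To give a flavor of what such a prompt might look like, here is a hypothetical example in that spirit (my own, not one of the article’s four): “Build a single-file HTML page containing a practice activity for giving difficult feedback: a briefing screen that sets the scene, a chat window where I play the manager and you play a defensive employee, and a debrief screen that rates my empathy and clarity against a short rubric.”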

OpenAI Adds Interactive Math and Science Learning Tools to ChatGPT — from campustechnology.com by Rhea Kelly

Key Takeaways

  • ChatGPT adds interactive learning tools: OpenAI introduced interactive math and science visualizations that allow users to explore formulas, variables, and relationships in real time.
  • The tool currently covers over 70 core math and science topics and is aimed initially at high school and college-level learners.
  • Users can adjust variables, manipulate formulas, and immediately see how changes affect graphs and outcomes.
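As a generic illustration of that adjust-a-variable idea (my own sketch, not OpenAI’s implementation), a few lines of Python reproduce the simplest version of the interaction:

# Generic illustration, not OpenAI's tool: redraw y = a*x^2 for a few
# values of a to see how one variable reshapes the graph.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-5, 5, 200)
for a in (0.5, 1.0, 2.0):  # the "slider" values a learner might try
    plt.plot(x, a * x**2, label=f"y = {a}x^2")

plt.legend()
plt.title("How changing one variable reshapes the graph")
plt.show()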

U.S. Department of Labor Defines 5 Key Areas of AI Literacy — from campustechnology.com by Rhea Kelly

Key Takeaways

  • Department of Labor releases AI Literacy Framework: The framework defines AI literacy as competencies for using and evaluating AI responsibly, with a primary focus on generative AI in the workplace.
  • Framework outlines five core AI literacy areas: These include understanding AI principles, exploring real-world uses, directing AI effectively, evaluating AI outputs, and using AI responsibly.
  • Guidance for workforce and education systems: The framework also provides training principles and recommendations for workers, employers, education providers, and government agencies to expand AI education and training.

The Rungs of the Career Ladder We Removed — by Dr. Michelle Weise
On the slow, quiet disappearance of learning HOW to work

There used to be a time when starting a job meant being a little lost. You sat in on meetings you didn’t run. You watched someone else handle the difficult client, draft the tricky email, navigate the room when the room shifted. You made your first draft of something, and someone returned it bleeding red ink. And somehow — through the mess and the margin notes — you learned.

That time is vanishing.

In just the first seven months of 2025, generative AI adoption was linked to thousands of job cuts. But the headline number misses the quieter, more consequential story: it’s not just fewer jobs. It’s the disappearance of the work that teaches you how to work.

So here’s the uncomfortable question: if genAI is absorbing the entry-level doing, where does that formation happen now?

We have to answer that. Not theoretically. Practically. Because the ladder hasn’t disappeared — but we’ve removed the bottom rungs. And no employer is going to drop a newly minted graduate into a mid-career role and hope they figure it out.


Cinematic Prompting Without IP — from heatherbcooper.substack.com by Heather Cooper
Stop saying “Blade Runner” style.

Beginner Prompt Structure
If you’re new to prompting, start with this framework:
[Subject] + [Description] + [Setting] + [Lighting] + [Style/Medium]

The advanced framework adds three layers:
[Lens] + [Subject + Action] + [Environment + Atmosphere] + [Lighting + Colour] + [Mood/Emotion] + [Technical Detail]
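A worked example of the beginner framework (my own, for illustration): “A weathered lighthouse keeper [Subject], mid-60s with a salt-streaked beard [Description], standing on a rocky shoreline at dawn [Setting], backlit by soft golden light through fog [Lighting], 35mm film photography [Style/Medium].”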


Teach Smarter with AI — from wondertools.substack.com by Jeremy Caplan and Lance Eaton
10 tested strategies from two educators who actually use them

I recently talked with Lance Eaton, Senior Associate Director of AI and Teaching & Learning at Northeastern University and writer of AI + Education = Simplified. We traded ideas about what’s actually working. We came up with 10 specific, practical ways anyone who teaches, coaches, or leads can put AI to work.



Beyond Audio Summaries: How to Use NotebookLM to *Actually* Design Better Learning — from drphilippahardman.substack.com by Dr. Philippa Hardman
Five methods to maximise the value of NotebookLM’s features

In practice, what makes NotebookLM different for learning designers is four things:

  • Answers grounded in your sources (with citations)
  • Source toggling
  • Multi-format studio & multi-source summaries
  • Persistent workspace


5 Evidence-Based Methods NotebookLM Operationalises…


Shadow AI Isn’t a Threat: It’s a Signal — from campustechnology.com by Damien Eversmann
Unofficial AI use on campus reveals more about institutional gaps than misbehavior.

Key Takeaways

  • Shadow AI is widespread in higher education: Faculty, researchers, students, and staff are using AI tools outside official IT channels, including consumer platforms and public cloud services that may involve sensitive data.
  • Unauthorized AI use creates data, compliance, and cost risks: Consumer AI tools may store or reuse user data, while uncoordinated adoption drives redundant licenses, unpredictable cloud costs, and weaker security oversight.
  • Institutions are shifting from restriction to enablement: Some campuses are making approved paths easier by offering ready-to-use research environments, campus-managed AI tools, clear guidance on data and vendors, and streamlined approval processes.

How L&D Can Lead in the Age of AI Even If Your Company’s Not Ready — from learningguild.com

How to lead even when your company doesn’t allow AI
Even if your corporation isn’t ready for AI, you can still research tools personally to stay ahead of the curve, so when organizational restrictions lift, you are ready to use AI for learning right away. Here are some tools you can test at home if they’re restricted in your workplace:

  • Content generation – Start testing text-based tools to get a taste of how AI can accelerate content creation. Then take it to the next level by exploring tools that generate voices, music, and sound effects.
  • AI coaching tools – Have AI pose as a co-worker or customer to get a taste of what it’s like to use it as a conversation coach. Next, use the voice and video capabilities in an app like ChatGPT to explore how AI can coach someone through tasks.
  • In-the-flow learning assistants – Test turning documents into a conversational avatar and interacting with it to see how it feels. Then think about how the technology could potentially transform static content into dynamic learning experiences for employees.
  • Vibe-coded simulations – Experiment with this technology by creating a simple, fun game. Afterwards, brainstorm some ideas on how it could quickly create simulations for your learners in the future.
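To make that last idea concrete, here is the sort of tiny, self-contained simulation a vibe-coding session might produce from a one-line prompt (a hypothetical sketch, not from the article):

# Hypothetical sketch of a "vibe-coded" branching simulation:
# a one-step customer-service scenario with instant feedback.
SCENARIO = {
    "prompt": "A customer says their order arrived damaged. You:",
    "choices": {
        "1": ("Apologize and offer a replacement right away.",
              "Good: you acknowledged the problem and acted on it."),
        "2": ("Ask for a photo before doing anything.",
              "Reasonable, but leading with empathy usually lands better."),
        "3": ("Explain that shipping damage isn't your fault.",
              "Risky: deflecting blame tends to escalate the situation."),
    },
}

def run(scenario):
    """Present the scenario, take a choice, and give feedback."""
    print(scenario["prompt"])
    for key, (choice, _) in scenario["choices"].items():
        print(f"  {key}. {choice}")
    pick = input("Choose 1-3: ").strip()
    _, feedback = scenario["choices"].get(pick, (None, "Not a valid option."))
    print(feedback)

if __name__ == "__main__":
    run(SCENARIO)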

The Higher Ed Playbook for AI Affordability — from campustechnology.com by Jason Dunn-Potter

Key Takeaways

  • Affordable AI adoption focuses on evolving existing systems: Universities are embedding AI into current devices, workflows, and legacy systems rather than rebuilding infrastructure or investing in new data centers.
  • Edge AI reduces costs and improves access: Running AI models on local devices or networks lowers cloud processing costs, enhances security, and supports learning use cases such as tutoring, translation, transcription, and adaptive learning.
  • Enterprise integration and governance drive impact: Institutions are applying AI across admissions, advising, facilities, and research workflows, supported by shared resource hubs, data governance, AI literacy, and outcome-driven implementation.

“But what’s happening right now is exponential.” — from linkedin.com by Josh Cavalier

Excerpt:

I need to be honest with you. I’ve been running experiments this week with Claude Code and Opus 4.6, and we have reached a precipice: the time required to produce high-quality text-based ID outputs is collapsing.

This includes performance consulting reports, learning needs analyses, action mapping, scripts, storyboards, facilitator guides, rubrics, and technical specs.

I just mapped the entire performance consulting process into a multimodal AI integration architecture (diagram image). Every phase. Entry and contracting. Performance analysis. Cause analysis. Solution design. Implementation. Evaluation. Thirty files. System specifications for each. The next step is to vet out each “skill” with an expert performance consultant.

Then I attempted a learning output: an 8-module course built with a cognitive scaffold that moves beyond content delivery to facilitate deliberate practice, meaning-making, and guided reflection within the learner’s own context.

The result:



AI and human-centered learning — from linkedin.com by Patrick Blessinger

Democratizing opportunities

AI-powered adaptive learning tools can adjust instruction in real time. They have the potential to provide a more personalized learning experience, but only if used properly.

The California State University system uses ChatGPT Edu (OpenAI, 2025). Students use it for AI-assisted tutoring, study aids, and writing support. These resources provide 24/7 availability of subject-matter expertise tailored to students’ learning needs. It is not a replacement for professors. Rather, it extends the reach of mentorship by reducing access barriers.

However, we must proceed with intellectual humility and ethical responsibility. Even though AI can customize messages, it cannot replace the encouragement of a teacher or professor, or the social and emotional aspects of learning. It’s at the intersection of humanistic values and knowledge development that education must find its balance.


Something Big Is Happening — from shumer.dev by Matt Shumer; see below for the related post from the BIG Questions Institute, where I found this article

I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.


They’ve now done it. And they’re moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do”, is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I’ve seen in just the last couple of months, I think “less” is more likely.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is “really getting better” or “hitting a wall” — which has been going on for over a year — is over. It’s done. Anyone still making that argument either hasn’t used the current models, has an incentive to downplay what’s happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don’t say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it’s preventing people from preparing.


What “Something Big Is Happening” Means for Schools — from/by the BIG Questions Institute
Matt Shumer’s newsletter post Something Big Is Happening was read over 80 million times within a week of its publication on February 9.

Still, it’s worth reading Shumer’s post. Given the claims and warnings in Something Big Is Happening (and countless other articles), how would you truly, honestly respond to these questions:

  • What will the purpose of school be in 5 years?
  • What are we doing now that we must leave behind right away?
  • What can we leave behind gradually?
  • What does rigor look like in this AI-powered world?
  • Does our strategy look like making adjustments at the margins or are we preparing our students for a fundamental shift?
  • What is our definition of success? How do the implications of AI and jobs (and other important forces, from geopolitical shifts and climate change to mental health needs and shifting generational values) impact the outcomes we prioritize? What is the story of success we want to pass on to our students and wider community?

Jim VandeHei’s note to his kids: Blunt AI talk — from axios.com by Jim VandeHei
Axios CEO Jim VandeHei wrote this note to his wife, Autumn, and their three kids. She suggested sharing it more broadly since so many families are wrestling with how to think and talk about AI. So here it is …

Dear Family:
I want to put to words what I’m hearing, seeing, thinking and writing about AI.

  • Simply put, I’m now certain it will upend your work and life in ways more profound than the internet or possibly electricity. This will hit in months, not years.
  • The changes will be fast, wide, radical, disorienting and scary. No one will avoid its reach.

I’m not trying to frighten you. And I know your opinions range from wonderment to worry. That’s natural and OK. Our species isn’t wired for change of this speed or scale.

  • My conversations with the CEOs and builders of these LLMs, as well as my own deep experimentation with AI, have shaken and stirred me in ways I never imagined.

All of you must figure out how to master AI for any specific job or internship you hold or take. You’d be jeopardizing your future careers by not figuring out how to use AI to amplify and improve your work. You’d be wise to replace social media scrolling with LLM testing.

Be the very best at using AI for your gig.

More here.


© 2025 | Daniel Christian