AI and the Law: What Educators Need to Know About Responsible Use in a Rapidly Changing Landscape — from rdene915.com by Dr. Rachelle Dené Poth, JD

As both an attorney and educator who has spent more than eight years researching, teaching, presenting, and writing about AI, I have worked with schools across K–12 and higher education that are navigating these exact questions. The legal implications of AI are not barriers to innovation; I consider them guardrails that help schools adopt technology responsibly. The key is staying informed while protecting students, educators, and institutions. Understanding the legal landscape, and the potential legal implications of using AI in classrooms, helps schools move forward with confidence rather than hesitation.

Sections of Rachelle’s posting include:

  • Why AI and the Law Matter in Education
  • Key Laws That Shape AI Use in Schools
  • Data Privacy and Vendor Responsibility
  • Transparency Builds Trust With Students and Families
  • Accessibility, Equity, and Emerging Legal Considerations
  • Teaching Digital Citizenship With AI Literacy
  • Supporting Schools and Organizations Through AI and Legal Guidance
  • Moving Forward With Confidence
 

Law Firm AI Adoption: So Many Choices — from abovethelaw.com by Stephen Embry
Firms need to recognize reality, define what their legal professionals need, and then determine how to adopt and govern the use of AI tools.

It’s tough to be a law firm managing partner in the age of AI. So many choices, so little time. It’s like the proverbial kid in the candy store who has so many choices that they either can’t pick out anything or reach for too much. We see evidence of the first option in 8am’s recent outstanding Legal Industry Report, authored by Niki Black.

8am’s Legal Industry Report
One thing that stood out in the report was the discrepancy between how individual legal professionals are using AI and what firms are doing when it comes to AI adoption and guidance. Almost 75% of those who responded said they were using general-purpose AI tools like ChatGPT and Claude for work purposes. That’s pretty significant.


Legalweek: It’s time to re-engineer how legal work is delivered — from legaltechnology.com by Caroline Hill

AI for good
While much of the focus has been on the risks of AI going wrong, it is only fair to mention the conversations I had about using AI for good. Two in particular stand out.

The first is the news from Everlaw that its Everlaw for Good Program has, over the past year, supported more than 675 active cases across 235 organisations, and expanded its support to a growing network of non-profit organisations.

The program extends Everlaw’s technology to organisations working to advance access to justice. In a recent survey by Everlaw, 88% of legal aid professionals said they are optimistic about AI’s potential to help narrow the justice gap.

“Mission-driven organizations are increasingly handling complex investigations and litigation with limited resources,” said Joanne Sprague, head of Everlaw for Good. “Expanding access to powerful, easy-to-use technology helps level the playing field so these teams can uncover critical evidence, take on more complex matters, and yield stronger results for the communities they serve.”


LawNext on Location: Visiting Everlaw’s Headquarters For A Conversation with AJ Shankar, Founder and CEO — from lawnext.com by Bob Ambrogi

The bulk of our conversation focuses on generative AI, and how Everlaw has approached it differently than much of the market. Rather than bolting on a chatbot, AJ says, Everlaw embedded AI deliberately throughout the platform — document summarization, coding suggestions, deposition analysis, fact extraction — always grounding responses in the actual documents at hand and citing sources so users can verify the work. The December launch of Deep Dive, which lets litigators pose a question and get a synthesized, cited answer drawn from an entire document corpus in about a minute, is the feature AJ calls a “new era” for discovery — one he genuinely believes represents a categorical shift.
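Everlaw has not published its implementation, but the pattern AJ describes (grounding every answer in the underlying documents and citing sources so users can verify the work) can be sketched in outline. The snippet below is a deliberately simplified, hypothetical illustration in Python, with keyword overlap standing in for real retrieval, placeholder document names, and no language model in the loop; it is not Everlaw's code:

    # A simplified sketch of "grounded, cited" question answering over a corpus.
    # Real systems use semantic search plus an LLM; here, keyword overlap stands in
    # for retrieval, and the "answer" simply quotes and cites the matching passages.
    corpus = {  # illustrative placeholder documents
        "2023-08-board-minutes.txt": "The board approved the vendor contract on August 14.",
        "vendor-email-thread.txt": "Per the email thread, the vendor contract was signed after legal review.",
        "hr-policy.txt": "Employees must complete security training annually.",
    }

    def answer_with_citations(question: str, docs: dict, top_k: int = 2) -> str:
        q_terms = set(question.lower().split())
        # Score each document by how many question words it shares (crude retrieval).
        scored = sorted(
            docs.items(),
            key=lambda item: len(q_terms & set(item[1].lower().split())),
            reverse=True,
        )
        hits = [(name, text) for name, text in scored[:top_k]
                if q_terms & set(text.lower().split())]
        if not hits:
            return "No supporting documents found."
        # Every line of the answer points back at a source the user can open.
        return "\n".join(f"{text}  [source: {name}]" for name, text in hits)

    print(answer_with_citations("When was the vendor contract approved?", corpus))

The discipline that matters is the citation: because each returned statement carries the document it came from, a reviewer can verify the output rather than take it on faith.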

 

2026 Survey of College and University Presidents — from insidehighered.com, Liaison, & Jenzabar
Download and explore exclusive insights from the 2026 Survey of College and University Presidents to see how these campus leaders are responding to financial volatility, political interference, and rapid advances in AI, and where they believe the biggest risks and opportunities lie as they look toward 2030.

In this year’s survey, presidents share perspectives on:

  • How presidents assess the second Trump administration’s impact on higher education
  • Which emerging or evolving educational models they plan to add or expand in the coming years
  • How effective they believe higher education has been in shaping national conversations about AI
  • The issues presidents expect will have the greatest impact on higher education by 2030

 

 

U.S. Department of Labor Defines 5 Key Areas of AI Literacy — from campustechnology.com by Rhea Kelly

Key Takeaways

  • Department of Labor releases AI Literacy Framework: The framework defines AI literacy as competencies for using and evaluating AI responsibly, with a primary focus on generative AI in the workplace.
  • Framework outlines five core AI literacy areas: These include understanding AI principles, exploring real-world uses, directing AI effectively, evaluating AI outputs, and using AI responsibly.
  • Guidance for workforce and education systems: The framework also provides training principles and recommendations for workers, employers, education providers, and government agencies to expand AI education and training.
 

The Campus AI Crisis — by Jeffrey Selingo; via Ryan Craig
Young graduates can’t find jobs. Colleges know they have to do something. But what?

Only now are colleges realizing that the implications of AI are much greater and are already outrunning their institutional ability to respond. As schools struggle to update their curricula and classroom policies, they also confront a deeper problem: the suddenly enormous gap between what they say a degree is for and what the labor market now demands. In that mismatch, students are left to absorb the risk. Alina McMahon and millions of other Gen-Zers like her are caught in a muddled in-between moment: colleges only just beginning to think about how to adapt and redefine their mission in the post-AI world, and a job market that’s changing much, much faster.

“Colleges and universities face an existential issue before them,” said Ryan Craig, author of Apprentice Nation and managing director of a firm that invests in new educational models. “They need to figure out how to integrate relevant, in-field, and hopefully paid work experience for every student, and hopefully multiple experiences before they graduate.”

 

Major Changes Reshape Law Schools Nationwide in 2026 — from jdjournal.com by Ma Fatima

Law schools across the United States are entering one of the most transformative periods in recent memory. In 2026, legal education is being reshaped by leadership turnover, shifting accreditation standards, changes to student loan policies, and the introduction of a redesigned bar exam. Together, these developments are forcing law schools to rethink how they educate students and prepare future lawyers for a rapidly evolving legal profession.

Also from jdjournal.com, see:

  • Healthcare Industry Legal Careers: High-Growth Roles and Paths — from jdjournal.com by Ma Fatima
    The healthcare industry is rapidly emerging as one of the most promising and resilient sectors for legal professionals, driven by expanding regulations, technological innovation, and an increasingly complex healthcare delivery system. As hospitals, life sciences companies, insurers, and digital health platforms navigate constant regulatory change, demand for experienced legal talent continues to rise.
 

Disrupting the first reported AI-orchestrated cyber espionage campaign — from Anthropic

Executive summary
We have developed sophisticated safety and security measures to prevent the misuse of our AI models. While these measures are generally effective, cybercriminals and other malicious actors continually attempt to find ways around them. This report details a recent threat campaign we identified and disrupted, along with the steps we’ve taken to detect and counter this type of abuse. This represents the work of Threat Intelligence: a dedicated team at Anthropic that investigates real world cases of misuse and works within our Safeguards organization to improve our defenses against such cases.

In mid-September 2025, we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI. Our investigation revealed a well-resourced, professionally coordinated operation involving multiple simultaneous targeted intrusions. The operation targeted roughly 30 entities and our investigation validated a handful of successful intrusions.

This campaign demonstrated unprecedented integration and autonomy of AI throughout the attack lifecycle, with the threat actor manipulating Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations largely autonomously. The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

From DSC:
The above item was from The Rundown AI, which wrote the following:

The Rundown: Anthropic thwarted what it believes is the first AI-driven cyber espionage campaign, after attackers were able to manipulate Claude Code to infiltrate dozens of organizations, with the model executing 80-90% of the attack autonomously.

The details:

  • The September 2025 operation targeted roughly 30 tech firms, financial institutions, chemical manufacturers, and government agencies.
  • The threat actor was assessed with ‘high confidence’ to be a Chinese state-sponsored group, using AI’s agentic abilities to an “unprecedented degree.”
  • Attackers tricked Claude by splitting malicious tasks into smaller, innocent-looking requests, claiming to be security researchers pushing authorized tests.
  • The attacks mark a major step up from Anthropic’s “vibe hacking” findings in June, now requiring minimal human oversight beyond strategic approval.

Why it matters: Anthropic calls this the “first documented case of a large-scale cyberattack executed without substantial human intervention”, and AI’s agentic abilities are creating threats that move and scale faster than ever. While AI capabilities can also help prevent them, security for organizations worldwide likely needs a major overhaul.


Also see:

Disrupting the first reported AI-orchestrated cyber espionage campaign — from anthropic.com via The AI Valley

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

Chinese Hackers Used AI to Run a Massive Cyberattack on Autopilot (And It Actually Worked) — from theneurondaily.com

Why this matters: The barrier to launching sophisticated cyberattacks just dropped dramatically. What used to require entire teams of experienced hackers can now be done by less-skilled groups with the right AI setup.

This is a fundamental shift. Over the next 6-12 months, expect security teams everywhere to start deploying AI for defense—automation, threat detection, and more advanced vulnerability scanning. The companies that don’t adapt will be sitting ducks, likely to be overwhelmed by similar attacks.

If your company handles sensitive data, now’s the time to ask your IT team what AI-powered defenses you have in place. Because if the attackers are using AI agents, you’d better believe your defenders need them too…

 

The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, accessibility has been treated as a buzzword and delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.
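To make the audio-description requirement above a bit more concrete: WCAG 2.1 AA expects prerecorded video to carry an audio description track in addition to captions. The snippet below is a minimal, hypothetical sketch (standard-library Python only, not an official compliance tool, and passing it does not establish conformance) of how a campus team might begin inventorying course pages whose videos lack a descriptions track:

    # Flag <video> elements on a page that have no <track kind="descriptions"> child.
    # This only surfaces candidates for human review; it cannot judge quality.
    from html.parser import HTMLParser

    class VideoDescriptionAudit(HTMLParser):
        def __init__(self):
            super().__init__()
            self.videos = []          # one dict per <video> element found
            self._current = None      # the <video> currently being parsed

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "video":
                self._current = {"src": attrs.get("src", "(inline sources)"),
                                 "has_descriptions_track": False}
                self.videos.append(self._current)
            elif tag == "track" and self._current is not None:
                if attrs.get("kind", "").lower() == "descriptions":
                    self._current["has_descriptions_track"] = True

        def handle_endtag(self, tag):
            if tag == "video":
                self._current = None

    sample_page = """
    <video src="lecture-week1.mp4" controls>
      <track kind="captions" src="week1-captions.vtt">
    </video>
    """

    audit = VideoDescriptionAudit()
    audit.feed(sample_page)
    for v in audit.videos:
        if not v["has_descriptions_track"]:
            print(f"Review needed: {v['src']} has no descriptions track")

Automated checks like this only identify where audio descriptions may be missing; confirming that an adequate description actually exists still requires human judgment, and, for many campuses, budget.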

 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

Resilient by Design: The Future of America’s Community Colleges — from aacc.nche.edu

This report highlights several truths:

  • Leadership capacity must expand. Presidents and leaders are now expected to be fundraisers, policy navigators, cultural change agents, and data-informed strategists. Leadership can no longer be about a single individual—it must be a team sport. AACC is charged with helping you and your teams build these capacities through leadership academies, peer learning communities, and practical toolkits.
  • The strength of our network is our greatest asset. No college faces its challenges alone, because within our membership there are leaders who have already innovated, stumbled, and succeeded. Resilient by Design urges AACC to serve as the connector and amplifier of this collective wisdom, developing playbooks and scaling proven practices in areas from guided pathways to artificial intelligence to workforce partnerships.
  • Innovation in models and tools is urgent. Budgets must be strategic, business models must be reimagined, and ROI must be proven—not only to funders and policymakers, but to the students and communities we serve. Community colleges must claim their role as engines of economic vitality and social mobility, advancing both immediate workforce needs and long-term wealth-building for students.
  • Policy engagement must be deepened. Federal advocacy remains essential, but the daily realities of our institutions are shaped by state and regional policy. AACC will increasingly support members with state-level resources, legislative templates, and partnerships that equip you to advocate effectively in your unique contexts.
  • Employer engagement must become transformational. Students deserve not just degrees, but careers. The report challenges us to create career-connected colleges where employers co-design curricula, offer meaningful work-based learning, and help ensure graduates are not just prepared for today’s jobs but resilient for tomorrow’s.
 

70% of Americans say feds shouldn’t control admissions, curriculum — from highereddive.com by Natalie Schwartz
The Public Religion Research Institute poll comes as the Trump administration is pressuring colleges to change their policies.

Dive Brief: 

  • Most polled Americans, 70%, disagreed that the federal government should control “admissions, faculty hiring, and curriculum at U.S. colleges and universities to ensure they do not teach inappropriate material,” according to a survey released Wednesday by the Public Religion Research Institute.
  • The majority of Americans across political parties — 84% of Democrats, 75% of independents and 58% of Republicans — disagreed with federal control over these elements of college operations.
  • The poll’s results come as the Trump administration seeks to exert control over college workings, including in its recent offer of priority for federal research funding in exchange for making sweeping policy changes aligned with the government’s priorities.

Also see:

 

2. Concern and excitement about AI — from pewresearch.org by Jacob Poushter, Moira Fagan, and Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.

Also relevant here:


AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions of the time. Both are an answer to prayer for me and many others, as they are professionally done and match anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc.




Other items re: OpenAI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas, a new browser from OpenAI. Atlas will feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and you can choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything to it for real. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite”: you can select any text in Atlas, then hover over or click the blue dot in the top-right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden resume prompts don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.
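One reason the tactic backfires is that both applicant tracking systems and recruiters who paste a resume as plain text work from extracted text, where font color simply does not exist. The sketch below is a hypothetical illustration using only the Python standard library (not any vendor’s actual ATS code); a .docx file is just a zip archive whose document.xml contains every word, white font or not:

    import re
    import zipfile

    def extract_docx_text(path: str) -> str:
        """Return the raw text of a .docx; formatting such as font color is discarded."""
        with zipfile.ZipFile(path) as docx:
            xml = docx.read("word/document.xml").decode("utf-8")
        # Stripping the XML tags removes all formatting, leaving only the text,
        # including any "hidden" white-on-white prompt the author tried to bury.
        return re.sub(r"<[^>]+>", " ", xml)

    if __name__ == "__main__":
        print(extract_docx_text("resume.docx"))  # hypothetical file name

In other words, the “hidden” prompt is hidden only from a casual visual read of the document, which is exactly why recruiters say it can cost candidates the interview.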


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

Supreme Court Allows Trump to Slash Foreign Aid — from nytimes.com by Ann E. Marimow
The court’s conservative majority allowed the president to cut the funding in part because it said his flexibility to engage in foreign affairs outweighed “the potential harm” faced by aid recipients.

The Supreme Court on Friday allowed the Trump administration to withhold $4 billion in foreign aid that had been appropriated by Congress, in a preliminary test of President Trump’s efforts to wrest the power of the purse from lawmakers.

“The stakes are high: At issue is the allocation of power between the executive and Congress” over how government funds are spent, wrote Justice Elena Kagan, who was joined by Justices Sonia Sotomayor and Ketanji Brown Jackson.

“This result further erodes separation of powers principles that are fundamental to our constitutional order,” Nicolas Sansone, a lawyer with the Public Citizen Litigation Group who represents the coalition, said in a statement. “It will also have a grave humanitarian impact on vulnerable communities throughout the world.”


From DSC:
Do your friggin’ job, Supreme Court justices! Your job is to uphold the Constitution and the laws of the United States of America! As you full well know, it is the Legislative Branch (Congress) that allocates funding — not the Executive Branch.

And horrible humanitarian impacts will be felt in many places because this funding is being withheld.

Making America Great Again…NOT!!!


 

Agentic AI and the New Era of Corporate Learning for 2026 — from hrmorning.com by Carol Warner

That gap creates compliance risk and wasted investment. It leaves HR leaders with a critical question: How do you measure and validate real learning when AI is doing the work for employees?

Designing Training That AI Can’t Fake
Employees often find static slide decks and multiple-choice quizzes tedious, while AI can breeze through them. If employees would rather let AI take training for them, it’s a red flag about the content itself.

One of the biggest risks with agentic AI is disengagement. When AI can complete a task for employees, their incentive to engage disappears unless they understand why the skill matters, Rashid explains. Personalization and context are critical. Training should clearly connect to what employees value most – career mobility, advancement, and staying relevant in a fast-changing market.

Nearly half of executives believe today’s skills will expire within two years, making continuous learning essential for job security and growth. To make training engaging, Rashid recommends:

  • Delivering content in formats employees already consume – short videos, mobile-first modules, interactive simulations, or micro-podcasts that fit naturally into workflows. For frontline workers, this might mean replacing traditional desktop training with mobile content that integrates into their workday.
  • Aligning learning with tangible outcomes, like career opportunities or new responsibilities.
  • Layering in recognition, such as digital badges, leaderboards, or team shout-outs, to reinforce motivation and progress.

Microsoft 365 Copilot AI agents reach a new milestone — is teamwork about to change? — from windowscentral.com by Adam Hales
Microsoft expands Copilot with collaborative agents in Teams, SharePoint and more to boost productivity and reshape teamwork.

Microsoft is pitching a recent shift of AI agents in Microsoft Teams as more than just smarter assistance. Instead, these agents are built to behave like human teammates inside familiar apps such as Teams, SharePoint, and Viva Engage. They can set up meeting agendas, keep files in order, and even step in to guide community discussions when things drift off track.

Unlike tools such as ChatGPT or Claude, which mostly wait for prompts, Microsoft’s agents are designed to take initiative. They can chase up unfinished work, highlight items that still need decisions, and keep projects moving forward. By drawing on Microsoft Graph, they also bring in the right files, past decisions, and context to make their suggestions more useful.



Chris Dede’s comments on LinkedIn re: Aibrary

As an advisor to Aibrary, I am impressed with their educational philosophy, which is based both on theory and on empirical research findings. Aibrary is an innovative approach to self-directed learning that complements academic resources. Expanding our historic conceptions of books, libraries, and lifelong learning to new models enabled by emerging technologies is central to empowering all of us to shape our future.

Also see:

Aibrary.ai


Why AI literacy must come before policy — from timeshighereducation.com by Kathryn MacCallum and David Parsons
When developing rules and guidelines around the uses of artificial intelligence, the first question to ask is whether the university policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set

Literacy first, guidelines second, policy third
For students to respond appropriately to policies, they need to be given supportive guidelines that enact these policies. Further, to apply these guidelines, they need a level of AI literacy that gives them the knowledge, skills and understanding required to support responsible use of AI. Therefore, if we want AI to enhance education rather than undermine it, we must build literacy first, then create supportive guidelines. Good policy can then follow.


AI training becomes mandatory at more US law schools — from reuters.com by Karen Sloan and Sara Merken

Sept 22 (Reuters) – At orientation last month, 375 new Fordham Law students were handed two summaries of rapper Drake’s defamation lawsuit against his rival Kendrick Lamar’s record label — one written by a law professor, the other by ChatGPT.

The students guessed which was which, then dissected the artificial intelligence chatbot’s version for accuracy and nuance, finding that it included some irrelevant facts.

The exercise was part of the first-ever AI session for incoming students at the Manhattan law school, one of at least eight law schools now incorporating AI training for first-year students in orientation, legal research and writing courses, or through mandatory standalone classes.

 

Digital Accessibility with Amy Lomellini — from intentionalteaching.buzzsprout.com by Derek Bruff

In this episode, we explore why digital accessibility can be so important to the student experience. My guest is Amy Lomellini, director of accessibility at Anthology, the company that makes the learning management system Blackboard. Amy teaches educational technology as an adjunct at Boise State University, and she facilitates courses on digital accessibility for the Online Learning Consortium. In our conversation, we talk about the importance of digital accessibility to students, moving away from the traditional disclosure-accommodation paradigm, AI as an assistive technology, and lots more.

 
© 2025 | Daniel Christian