The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including the use of audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, the topic has been treated as a buzzword and delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.
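To make the scale of that risk concrete, here is a minimal sketch (mine, not Phil Hill's) of the kind of automated audit a campus web team might run as a first pass: it scans a page's HTML for `<video>` elements that lack either a captions track or an audio-descriptions track, the two requirements called out above. The markup and file names are hypothetical, and a real WCAG 2.1 AA audit covers far more than this.

```python
from html.parser import HTMLParser

class VideoAuditParser(HTMLParser):
    """Collects each <video> element and the kinds of <track> children it contains."""
    def __init__(self):
        super().__init__()
        self.videos = []      # one set of track "kind" values per <video>
        self._current = None  # track kinds for the currently open <video>

    def handle_starttag(self, tag, attrs):
        if tag == "video":
            self._current = set()
        elif tag == "track" and self._current is not None:
            # Per the HTML spec, a <track> with no kind attribute defaults to subtitles
            self._current.add(dict(attrs).get("kind", "subtitles"))

    def handle_endtag(self, tag):
        if tag == "video" and self._current is not None:
            self.videos.append(self._current)
            self._current = None

def audit(html: str) -> list[int]:
    """Return indices of videos missing captions or audio descriptions."""
    parser = VideoAuditParser()
    parser.feed(html)
    return [i for i, kinds in enumerate(parser.videos)
            if "captions" not in kinds or "descriptions" not in kinds]

page = """
<video src="lecture1.mp4">
  <track kind="captions" src="lecture1.vtt">
</video>
<video src="lecture2.mp4">
  <track kind="captions" src="lecture2.vtt">
  <track kind="descriptions" src="lecture2-ad.vtt">
</video>
"""
print(audit(page))  # → [0]: the first video has captions but no audio-descriptions track
```

Even a crude scan like this, run across an institution's course sites, would give a president or CIO a first number to attach to the risk.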

 


From DSC:
One of my sisters shared this piece with me. She is very concerned about our society’s use of technology — whether it relates to our youth’s use of social media or the relentless pressure to be first in all things AI. As she was a teacher (at the middle school level) for 37 years, I greatly appreciate her viewpoints. She keeps me grounded in some of the negatives of technology. It’s important for us to listen to each other.


 

Resilient by Design: The Future of America’s Community Colleges — from aacc.nche.edu

This report highlights several truths:

  • Leadership capacity must expand. Presidents and leaders are now expected to be fundraisers, policy navigators, cultural change agents, and data-informed strategists. Leadership can no longer be about a single individual—it must be a team sport. AACC is charged with helping you and your teams build these capacities through leadership academies, peer learning communities, and practical toolkits.
  • The strength of our network is our greatest asset. No college faces its challenges alone, because within our membership there are leaders who have already innovated, stumbled, and succeeded. Resilient by Design urges AACC to serve as the connector and amplifier of this collective wisdom, developing playbooks and scaling proven practices in areas from guided pathways to artificial intelligence to workforce partnerships.
  • Innovation in models and tools is urgent. Budgets must be strategic, business models must be reimagined, and ROI must be proven—not only to funders and policymakers, but to the students and communities we serve. Community colleges must claim their role as engines of economic vitality and social mobility, advancing both immediate workforce needs and long-term wealth-building for students.
  • Policy engagement must be deepened. Federal advocacy remains essential, but the daily realities of our institutions are shaped by state and regional policy. AACC will increasingly support members with state-level resources, legislative templates, and partnerships that equip you to advocate effectively in your unique contexts.
  • Employer engagement must become transformational. Students deserve not just degrees, but careers. The report challenges us to create career-connected colleges where employers co-design curricula, offer meaningful work-based learning, and help ensure graduates are not just prepared for today’s jobs but resilient for tomorrow’s.
 

70% of Americans say feds shouldn’t control admissions, curriculum — from highereddive.com by Natalie Schwartz
The Public Religion Research Institute poll comes as the Trump administration is pressuring colleges to change their policies.

Dive Brief: 

  • Most polled Americans, 70%, disagreed that the federal government should control “admissions, faculty hiring, and curriculum at U.S. colleges and universities to ensure they do not teach inappropriate material,” according to a survey released Wednesday by the Public Religion Research Institute.
  • The majority of Americans across political parties — 84% of Democrats, 75% of independents and 58% of Republicans — disagreed with federal control over these elements of college operations.
  • The poll’s results come as the Trump administration seeks to exert control over college workings, including in its recent offer of priority for federal research funding in exchange for making sweeping policy changes aligned with the government’s priorities.

Also see:

 

Concern and excitement about AI — from pewresearch.org by Jacob Poushter, Moira Fagan and Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.

Also relevant here:


AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, the House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions of the time. Both series are an answer to prayer for me and many others, as they are professionally done, matching anything that comes out of Hollywood in terms of the acting, script writing, music, the sets, etc.


An item re: Sora:


Other items re: Open AI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas – A new browser from OpenAI. Atlas would feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar that has the context of your page, and you can choose “Agent” to have it work on that tab. Right now, Agent is limited to a single tab, and it is way too slow to delegate anything for real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite” – You can select any text in Atlas, hover/click on the blue dot in the top right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden resume prompts don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

Supreme Court Allows Trump to Slash Foreign Aid — from nytimes.com by Ann E. Marimow
The court’s conservative majority allowed the president to cut the funding in part because it said his flexibility to engage in foreign affairs outweighed “the potential harm” faced by aid recipients.

The Supreme Court on Friday allowed the Trump administration to withhold $4 billion in foreign aid that had been appropriated by Congress, in a preliminary test of President Trump’s efforts to wrest the power of the purse from lawmakers.

“The stakes are high: At issue is the allocation of power between the executive and Congress” over how government funds are spent, wrote Justice Elena Kagan, who was joined by Justices Sonia Sotomayor and Ketanji Brown Jackson.

“This result further erodes separation of powers principles that are fundamental to our constitutional order,” Nicolas Sansone, a lawyer with the Public Citizen Litigation Group who represents the coalition, said in a statement. “It will also have a grave humanitarian impact on vulnerable communities throughout the world.”


From DSC:
Do your friggin’ job Supreme Court justices! Your job is to uphold the Constitution and the laws of the United States of America! As you full well know, it is the Legislative Branch (Congress) that allocates funding — not the Executive Branch.

And there will be horrible humanitarian impacts that are going to be felt in many places because this funding is being withheld.

Making America Great Again…NOT!!!


 

Agentic AI and the New Era of Corporate Learning for 2026 — from hrmorning.com by Carol Warner

That gap creates compliance risk and wasted investment. It leaves HR leaders with a critical question: How do you measure and validate real learning when AI is doing the work for employees?

Designing Training That AI Can’t Fake
Employees often find static slide decks and multiple-choice quizzes tedious, while AI can breeze through them. If employees would rather let AI take training for them, it’s a red flag about the content itself.

One of the biggest risks with agentic AI is disengagement. When AI can complete a task for employees, their incentive to engage disappears unless they understand why the skill matters, Rashid explains. Personalization and context are critical. Training should clearly connect to what employees value most – career mobility, advancement, and staying relevant in a fast-changing market.

Nearly half of executives believe today’s skills will expire within two years, making continuous learning essential for job security and growth. To make training engaging, Rashid recommends:

  • Delivering content in formats employees already consume – short videos, mobile-first modules, interactive simulations, or micro-podcasts that fit naturally into workflows. For frontline workers, this might mean replacing traditional desktop training with mobile content that integrates into their workday.
  • Aligning learning with tangible outcomes, like career opportunities or new responsibilities.
  • Layering in recognition, such as digital badges, leaderboards, or team shout-outs, to reinforce motivation and progress

Microsoft 365 Copilot AI agents reach a new milestone — is teamwork about to change? — from windowscentral.com by Adam Hales
Microsoft expands Copilot with collaborative agents in Teams, SharePoint and more to boost productivity and reshape teamwork.

Microsoft is pitching a recent shift of AI agents in Microsoft Teams as more than just smarter assistance. Instead, these agents are built to behave like human teammates inside familiar apps such as Teams, SharePoint, and Viva Engage. They can set up meeting agendas, keep files in order, and even step in to guide community discussions when things drift off track.

Unlike tools such as ChatGPT or Claude, which mostly wait for prompts, Microsoft’s agents are designed to take initiative. They can chase up unfinished work, highlight items that still need decisions, and keep projects moving forward. By drawing on Microsoft Graph, they also bring in the right files, past decisions, and context to make their suggestions more useful.



Chris Dede’s comments on LinkedIn re: Aibrary

As an advisor to Aibrary, I am impressed with their educational philosophy, which is based both on theory and on empirical research findings. Aibrary is an innovative approach to self-directed learning that complements academic resources. Expanding our historic conceptions of books, libraries, and lifelong learning to new models enabled by emerging technologies is central to empowering all of us to shape our future.

Also see:

Aibrary.ai


Why AI literacy must come before policy — from timeshighereducation.com by Kathryn MacCallum and David Parsons
When developing rules and guidelines around the uses of artificial intelligence, the first question to ask is whether the university policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set

Literacy first, guidelines second, policy third
For students to respond appropriately to policies, they need to be given supportive guidelines that enact these policies. Further, to apply these guidelines, they need a level of AI literacy that gives them the knowledge, skills and understanding required to support responsible use of AI. Therefore, if we want AI to enhance education rather than undermine it, we must build literacy first, then create supportive guidelines. Good policy can then follow.


AI training becomes mandatory at more US law schools — from reuters.com by Karen Sloan and Sara Merken

Sept 22 (Reuters) – At orientation last month, 375 new Fordham Law students were handed two summaries of rapper Drake’s defamation lawsuit against his rival Kendrick Lamar’s record label — one written by a law professor, the other by ChatGPT.

The students guessed which was which, then dissected the artificial intelligence chatbot’s version for accuracy and nuance, finding that it included some irrelevant facts.

The exercise was part of the first-ever AI session for incoming students at the Manhattan law school, one of at least eight law schools now incorporating AI training for first-year students in orientation, legal research and writing courses, or through mandatory standalone classes.

 

Digital Accessibility with Amy Lomellini — from intentionalteaching.buzzsprout.com by Derek Bruff

In this episode, we explore why digital accessibility can be so important to the student experience. My guest is Amy Lomellini, director of accessibility at Anthology, the company that makes the learning management system Blackboard. Amy teaches educational technology as an adjunct at Boise State University, and she facilitates courses on digital accessibility for the Online Learning Consortium. In our conversation, we talk about the importance of digital accessibility to students, moving away from the traditional disclosure-accommodation paradigm, AI as an assistive technology, and lots more.

 

The 2025 Changing Landscape of Online Education (CHLOE) 10 Report — from qualitymatters.org; emphasis below from DSC

Notable findings from the 73-page report include: 

  • Online Interest Surges Across Student Populations: 
  • Institutional Preparedness Falters Amid Rising Demand: Despite accelerating demand, institutional readiness has stagnated—or regressed—in key areas.
  • The Online Education Marketplace Is Increasingly Competitive: …
  • Alternative Credentials Take Center Stage: …
  • AI Integration Lacks Strategic Coordination: …

Just 28% of faculty are considered fully prepared for online course design, and 45% for teaching. Alarmingly, only 28% of institutions report having fully developed academic continuity plans for future emergency pivots to online.


Also relevant, see:


Great Expectations, Fragile Foundations — from onedtech.philhillaa.com by Glenda Morgan
Lessons about growth from the CHLOE & BOnES reports

Cultural resistance remains strong. Many [Chief Online Learning Officers] COLOs say faculty and deans still believe in-person learning is “just better,” creating headwinds even for modest online growth. As one respondent at a four-year institution with a large online presence put it:

Supportive departments [that] see the value in online may have very different levels of responsiveness compared to academic departments [that] are begrudgingly online. There is definitely a growing belief that students “should” be on-ground and are only choosing online because it’s easy/ convenient. Never mind the very real and growing population of nontraditional learners who can only take online classes, and the very real and growing population of traditional-aged learners who prefer online classes; many faculty/deans take a paternalistic, “we know what’s best” approach.


Ultimately, what we need is not just more ambition but better ambition. Ambition rooted in a realistic understanding of institutional capacity, a shared strategic vision, investments in policy and infrastructure, and a culture that supports online learning as a core part of the academic mission, not an auxiliary one. It’s time we talked about what it really takes to grow online learning, and where ambition needs to be matched by structure.

From DSC:
Yup. Culture is at the breakfast table again…boy, those strategies taste good.

I’d like to take some of this report — like the graphic below — and share it with former faculty members and members of a couple of my past job families’ leadership. They strongly didn’t agree with us when we tried to advocate for the development of online-based learning/programs at our organizations…but we were right. We were right all along. And we were LEADING all along. No doubt about it — even if the leadership at the time said that we weren’t leading.

The cultures of those organizations hurt us at the time. But our cultivating work eventually led to the development of online programs — unfortunately, after our groups were disbanded, they had to outsource those programs to OPMs.


Arizona State University — with its dramatic growth in online-based enrollments.

 

How to build trust in digital credentials? We offer some insights — from eddesignlab.org

Three key insights emerge:

Trust requires simplicity. Users need clear visual indicators (like the lock icon for secure websites) that their credentials meet technical standards without understanding the complexity behind them.

Standards matter for real people. Technical compliance with open standards directly impacts whether someone can access their own achievements and share them where needed.

Human experience drives adoption. Even the most sophisticated technical infrastructure fails if people can’t easily understand and use it.

Our insights and recommendations provide a roadmap for building credential systems that serve real human needs while meeting rigorous technical requirements.
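The point about open standards is easiest to see with a toy example (mine, not the report's): a credential is only portable if anyone can re-verify it using a published scheme, and any tampering must break verification. The sketch below uses a simple HMAC signature as a stand-in for the real signature suites in specs like Open Badges or W3C Verifiable Credentials; the key, names, and fields are all hypothetical.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"hypothetical-issuer-secret"  # stands in for a real signing key

def issue(credential: dict) -> dict:
    """Issuer signs a canonical JSON serialization of the credential."""
    payload = json.dumps(credential, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"credential": credential, "signature": sig}

def verify(signed: dict) -> bool:
    """Anyone following the published scheme can re-check the signature."""
    payload = json.dumps(signed["credential"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

badge = issue({"learner": "Jane Doe", "achievement": "Data Analysis I"})
print(verify(badge))   # True: the untampered credential verifies
badge["credential"]["achievement"] = "Data Analysis II"
print(verify(badge))   # False: any edit breaks verification
```

This is what "trust requires simplicity" looks like underneath the lock icon: the learner never sees the hashing, but without a shared, open verification scheme the achievement cannot travel beyond the system that issued it.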

 
 

Thomson Reuters CEO: Legal Profession Faces “Biggest Disruption in Its History” from AI — from lawnext.com by Bob Ambrogi

Thomson Reuters President and CEO Steve Hasker believes the legal profession is experiencing “the biggest disruption … in its history” due to generative and agentic artificial intelligence, fundamentally rewriting how legal work products are created for the first time in more than 300 years.

Speaking to legal technology reporters during ILTACON, the International Legal Technology Association’s annual conference, Hasker outlined his company’s ambitious goal to become “the most innovative company” in the legal tech sector while navigating what he described as unprecedented technological change affecting a profession that has remained largely unchanged since its origins in London tea houses centuries ago.


Legal tech hackathon challenges students to rethink access to justice — from the Centre for Innovation and Entrepreneurship, Auckland Law School
In a 24-hour sprint, student teams designed innovative tools to make legal and social support more accessible.

The winning team comprised students of computer science, law, psychology and physics. They developed a privacy-first, AI-powered legal assistant that helps people understand their legal rights without needing to navigate dense legal language.


Teaching How To ‘Think Like a Lawyer’ Revisited — from abovethelaw.com by Stephen Embry
GenAI gives the concept of training law students to think like a lawyer a whole new meaning.

Law Schools
These insights have particular urgency for legal education. Indeed, most of Cowen’s criticisms and suggested changes need to be front and center for law school leaders. It’s naïve to think that law students and lawyers aren’t going to use GenAI tools in virtually every aspect of their professional and personal lives. Rather than avoiding the subject or, worse yet, trying to stop use of these tools, law schools should make GenAI tools a fundamental part of research, writing and drafting training.

They need to focus not on memorization but on the critical thinking skills that beginning lawyers used to develop in the guild-style, on-the-job training system. As I discussed, that training came from repetitive and often tedious work that produced experienced lawyers who could recognize patterns and solutions based on exposure to similar situations. But much of that repetitive and tedious work may go away in a GenAI world.

The Role of Adjunct Professors
But to do this, law schools need to better partner with actual practicing lawyers who can serve as adjunct professors. Law schools need to do away with the notion that adjuncts are second-class teachers.


It’s a New Dawn In Legal Tech: From Woodstock to ILTACON (And Beyond) — from lawnext.com by Bob Ambrogi

As someone who has covered legal tech for 30 years, I cannot remember there ever being a time such as this, when the energy and excitement are raging, driven by generative AI and a new era of innovation and new ideas of the possible.

But this year was different. Wandering the exhibit hall, getting product briefings from vendors, talking with attendees, it was impossible to ignore the fundamental shift happening in legal technology. Gen AI isn’t just creating new products – it is spawning entirely new categories of products that truly are reshaping how legal work gets done.

Agentic AI is the buzzword of 2025 and agentic systems were everywhere at ILTACON, promising to streamline workflows across all areas of legal practice. But, perhaps more importantly, these tools are also beginning to address the business side of running a law practice – from client intake and billing to marketing and practice management. The scope of transformation is now beginning to extend beyond the practice of law into the business of law.

Largely missing from this gathering were solo practitioners, small firm lawyers, legal aid organizations, and access-to-justice advocates – the very people who stand to benefit most from the democratizing potential of AI.

However, now more than ever, the innovations we are seeing in legal tech have the power to level the playing field, to give smaller practitioners access to tools and capabilities that were once prohibitively expensive. If these technologies remain priced for and marketed primarily to Big Law, we will have succeeded only in widening the justice gap rather than closing it.


How AI is Transforming Deposition Review: A LegalTech Q&A — from jdsupra.com

Thanks to breakthroughs in artificial intelligence – particularly in semantic search, multimodal models, and natural language processing – new legaltech solutions are emerging to streamline and accelerate deposition review. What once took hours or days of manual analysis now can be accomplished in minutes, with greater accuracy and efficiency than possible with manual review.


From Skepticism to Trust: A Playbook for AI Change Management in Law Firms — from jdsupra.com by Scott Cohen

Historically, lawyers have been slow adopters of emerging technologies, and with good reason. Legal work is high stakes, deeply rooted in precedent, and built on individual judgment. AI, especially the new generation of agentic AI (systems that not only generate output but initiate tasks, make decisions, and operate semi-autonomously), represents a fundamental shift in how legal work gets done. This shift naturally leads to caution as it challenges long-held assumptions about lawyer workflows and several aspects of their role in the legal process.

The path forward is not to push harder or faster, but smarter. Firms need to take a structured approach that builds trust through transparency, context, training, and measurement of success. This article provides a five-part playbook for law firm leaders navigating AI change management, especially in environments where skepticism is high and reputational risk is even higher.


ILTACON 2025: The vendor briefings – Agents, ecosystems and the next stage of maturity — from legaltechnology.com by Caroline Hill

This year’s ILTACON in Washington was heavy on AI, but the conversation with vendors has shifted. Legal IT Insider’s briefings weren’t about potential use cases or speculative roadmaps. Instead, they focused on how AI is now being embedded into the tools lawyers use every day — and, crucially, how those tools are starting to talk to each other.

Taken together, they point to an inflection point, where agentic workflows, data integration, and open ecosystems define the agenda. But it’s important amidst the latest buzzwords to remember that agents are only as good as the tools they have to work with, and AI only as good as its underlying data. Also, as we talk about autonomous AI, end users are still struggling with cloud implementations and infrastructure challenges, and need vendors to be business partners that help them to make progress at speed.

Harvey’s roadmap is all about expanding its surface area — connecting to systems like iManage, LexisNexis, and, more recently, publishing giant Wolters Kluwer — so that a lawyer can issue a single query and get synthesised, contextualised answers directly within their workflow. Weinberg said: “What we’re trying to do is get all of the surface area of all of the context that a lawyer needs to complete a task and we’re expanding the product surface so you can enter a search, search all resources, and apply that to the document automatically.”

The common thread: no one is talking about AI in isolation anymore. It’s about orchestration — pulling together multiple data sources into a workflow that actually reflects how lawyers practice. 


5 Pitfalls Of Delaying Automation In High-Volume Litigation And Claims — from jdsupra.com

Why You Can’t Afford to Wait to Adopt AI Tools that have Plaintiffs Moving Faster than Ever
Just as photocopiers shifted law firm operations in the early 1970s and cloud computing transformed legal document management in the early 2000s, AI automation tools are altering the current legal landscape—enabling litigation teams to instantly structure unstructured data, zero in on key arguments in seconds, and save hundreds (if not thousands) of hours of manual work.


Your Firm’s AI Policy Probably Sucks: Why Law Firms Need Education, Not Rules — from jdsupra.com

The Floor, Not the Ceiling
Smart firms need to flip their entire approach. Instead of dictating which AI tools lawyers must use, leadership should set a floor for acceptable use and then get out of the way.

The floor is simple: no free versions for client work. Free tools are free because users are the product. Client data becomes training data. Confidentiality gets compromised. The firm loses any ability to audit or control how information flows. This isn’t about control; it’s about professional responsibility.

But setting the floor is only the first step. Firms must provide paid, enterprise versions of AI tools that lawyers actually want to use. Not some expensive legal tech platform that promises AI features but delivers complicated workflows. Real AI tools. The same ones lawyers are already using secretly, but with enterprise security, data protection, and proper access controls.

Education must be practical and continuous. Single training sessions don’t work. AI tools evolve weekly. New capabilities emerge constantly. Lawyers need ongoing support to experiment, learn, and share discoveries. This means regular workshops, internal forums for sharing prompts and techniques, and recognition for innovative uses.

The education investment pays off immediately. Lawyers who understand AI use it more effectively. They catch its mistakes. They know when to verify outputs. They develop specialized prompts for legal work. They become force multipliers, not just for themselves but for their entire teams.

 

Is the Legal Profession Ready to Win the AI Race? America’s AI Action Plan Has Fired the Starting Gun — from denniskennedy.com by Dennis Kennedy
The Starting Gun for Legal AI Has Fired. Who in Our Profession is on the Starting Line?

The legal profession’s “wait and see” approach to artificial intelligence is now officially obsolete.

This isn’t hyperbole. This is a direct consequence of the White House’s new Winning the Race: America’s AI Action Plan. …This is the starting gun for a race that will define the next fifty years of our profession, and I’m concerned that most of us aren’t even in the stadium, let alone in the starting blocks.

If the Socratic Method truly means anything, isn’t it time we applied its rigorous questioning to ourselves? We must question our foundational assumptions about the billable hour, the partnership track, our resistance to new forms of legal service delivery, and the very definition of what it means to be “practice-ready” in the 21st century. What do our clients, our students, and users of the legal system need?

The AI Action Plan forces a fundamental re-imagining of our industry’s core jobs.

The New Job of Legal Education: Producing AI-Capable Counsel
The plan’s focus on a “worker-centric approach” is a direct challenge to legal academia. The new job of legal education is no longer just to teach students how to think like a lawyer, but how to perform as an AI-augmented one. This means producing graduates who are not only familiar with the law but are also capable of leveraging AI tools to deliver legal services more efficiently, ethically, and effectively. Even more important, it means we must develop lawyers who can give the advice needed to individuals and companies already at work trying to win the AI race.

 

Ben Bernanke and Janet Yellen: The Fed Must Be Independent — an opinion from nytimes.com by Ben Bernanke and Janet Yellen; this is a gifted article

As former chairs of the Federal Reserve, we know from our experiences and our reading of history that the ability of the central bank to act independently is essential for its effective stewardship of the economy. Recent attempts to compromise that independence, including the president’s demands for a radical reduction in interest rates and his threats to fire its chair, Jerome Powell, if the Fed does not comply, risk lasting and serious economic harm. They undermine not only Mr. Powell but also all future chairs and, indeed, the credibility of the central bank itself.

Independence for the Federal Reserve to set interest rates does not imply a lack of democratic accountability. Congress has set in law the goals that the Fed must aim to achieve — maximum employment and stable prices — and Fed leaders report regularly to congressional committees on their progress toward those goals. Rather, independence means that monetary policymakers are permitted to use fact-based analysis and their best professional judgment in determining how best to reach their mandated goals, without regard to short-term political pressures.

Of course, Fed policymakers, being human, make mistakes. But an overwhelming amount of evidence, drawn from the experiences of both the United States and other countries, has shown that keeping politics out of monetary policy decisions leads to better economic outcomes.

 

Digital Accessibility in 2025: A Screen Reader User’s Honest Take — from blog.usablenet.com by Michael Taylor

In this post, part of the UsableNet 25th anniversary series, I’m taking a look at where things stand in 2025. I’ll discuss the areas that have improved—such as online shopping, banking, and social media—and the ones that still make it challenging to perform basic tasks, including travel, healthcare, and mobile apps. I hope that by sharing what works and what doesn’t, I can help paint a clearer picture of the digital world as it stands today.


Why EAA Compliance and Legal Trends Are Shaping Accessibility in 2025 — from blog.usablenet.com by Jason Taylor

On June 28, 2025, the European Accessibility Act (EAA) officially became enforceable across the European Union. This law requires digital products and services—including websites, mobile apps, e-commerce platforms, and software to meet the defined accessibility standards outlined in EN 301 549, which aligns with the WCAG 2.1 Level AA.

Companies that serve EU consumers must be able to demonstrate that accessibility is built into the design, development, testing, and maintenance of their digital products and services.

This milestone also arrives as UsableNet celebrates 25 years of accessibility leadership—a moment to reflect on how far we’ve come and what digital teams must do next.

 
© 2025 | Daniel Christian