Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; human professors would then guide it as it sorts through the material, helping it understand the structure of the discipline and develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student going on a trip who wishes to take an exam on the plane will be able to log on and complete the AI-designed and AI-administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include; there were many more in this solid article worth thinking about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him, and as a futurist I have to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education. Scott asserts that some significant impacts are coming for faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add). But he doesn’t stop there. He brings in presidents, deans, and other members of institutions’ leadership teams.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned their 5-year, $100M ed push last year and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can use to modify their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

Also see:

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair—cause it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.

 

AI in Education Survey: What UK and US Educators Think in 2025 — from twinkl.com
As artificial intelligence (AI) continues to shape the world around us, Twinkl conducted a large-scale survey between January 15th and January 22nd to explore its impact on the education sector, as well as the work lives of teachers across the UK and the USA.

Teachers’ use of AI for work continues to rise
Twinkl’s survey asked teachers whether they were currently using AI for work purposes. Comparing these findings to similar surveys over recent years shows the use of AI tools by teachers has seen a significant increase across both the UK and USA.

  • According to two UK surveys by the National Literacy Trust – 30% of teachers used generative AI in 2023 and nearly half (47.7%) in 2024. Twinkl’s survey indicates that AI adoption continues to rise rapidly, with 60% of UK educators currently integrating it into their work lives in 2025.
  • Similarly, with 62% of US teachers currently using AI for work, uptake appears to have risen greatly in the past 12 months, with just 25% saying they were leveraging the new technology in the 2023-24 school year according to a RAND report.
  • Teachers are using AI more for work than in their personal lives: In the UK, personal usage drops to 43% (from 60% at school).  In the US, 52% are using AI for non-work purposes (versus 62% in education settings).

    60% of UK teachers and 62% of US teachers use AI in their work life in 2025.
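Put side by side, the survey figures quoted above trace a steep adoption curve. As a quick back-of-the-envelope sketch (using only the percentages cited in the bullets; the arithmetic is mine, not Twinkl’s), the percentage-point gains look like this:

```python
# Percent of teachers using AI for work, per the surveys quoted above.
uk = {2023: 30.0, 2024: 47.7, 2025: 60.0}  # National Literacy Trust (2023, 2024), Twinkl (2025)
us = {2024: 25.0, 2025: 62.0}              # RAND (2023-24 school year), Twinkl (2025)

uk_gain = uk[2025] - uk[2023]  # percentage-point rise in the UK over two years
us_gain = us[2025] - us[2024]  # percentage-point rise in the US in roughly one year

print(f"UK: +{uk_gain:.0f} points since 2023; US: +{us_gain:.0f} points in about a year")
# → UK: +30 points since 2023; US: +37 points in about a year
```

Whatever the differences in survey methodology, the direction and pace of the trend are hard to miss.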

 




Students and folks looking for work may want to check out:

Also relevant/see:


 

From DSC:
Look out Google, Amazon, and others! Nvidia is putting the pedal to the metal in terms of being innovative and visionary! They are leaving the likes of Apple in the dust.

The top talent out there is likely to go to Nvidia for a while. Engineers, programmers/software architects, network architects, product designers, data specialists, AI researchers, developers of robotics and autonomous vehicles, R&D specialists, computer vision specialists, natural language processing experts, and many more will be flocking to Nvidia to work for a company that has already changed the world and will likely continue to do so for years to come.



NVIDIA’s AI Superbowl — from theneurondaily.com by Noah and Grant
PLUS: Prompt tips to make AI writing more natural

That’s despite a flood of new announcements (here’s a 16 min video recap), which included:

  1. A new architecture for massive AI data centers (now called “AI factories”).
  2. A physics engine for robot training built with Disney and DeepMind.
  3. A partnership with GM to develop next-gen vehicles, factories, and robots.
  4. A new Blackwell chip with “Dynamo” software that makes AI reasoning 40x faster than previous generations.
  5. A new “Rubin” chip slated for 2026 and a “Feynman” chip set for 2028.

For enterprises, NVIDIA unveiled DGX Spark and DGX Station—Jensen’s vision of AI-era computing, bringing NVIDIA’s powerful Blackwell chip directly to your desk.


Nvidia Bets Big on Synthetic Data — from wired.com by Lauren Goode
Nvidia has acquired synthetic data startup Gretel to bolster the AI training data used by the chip maker’s customers and developers.


Nvidia, xAI to Join BlackRock and Microsoft’s $30 Billion AI Infrastructure Fund — from investopedia.com by Aaron McDade
Nvidia and xAI are joining BlackRock and Microsoft in an AI infrastructure group seeking $30 billion in funding. The group was first announced in September as BlackRock and Microsoft sought to fund new data centers to power AI products.



Nvidia CEO Jensen Huang says we’ll soon see 1 million GPU data centers visible from space — from finance.yahoo.com by Daniel Howley
Nvidia CEO Jensen Huang says the company is preparing for 1 million GPU data centers.


Nvidia stock stems losses as GTC leaves Wall Street analysts ‘comfortable with long term AI demand’ — from finance.yahoo.com by Laura Bratton
Nvidia stock reversed direction after a two-day slide that saw shares lose 5% as the AI chipmaker’s annual GTC event failed to excite investors amid a broader market downturn.


Microsoft, Google, and Oracle Deepen Nvidia Partnerships. This Stock Got the Biggest GTC Boost. — from barrons.com by Adam Clark and Elsa Ohlen


The 4 Big Surprises from Nvidia’s ‘Super Bowl of AI’ GTC Keynote — from barrons.com by Tae Kim; behind a paywall

AI Super Bowl. Hi everyone. This week, 20,000 engineers, scientists, industry executives, and yours truly descended upon San Jose, Calif. for Nvidia’s annual GTC developers’ conference, which has been dubbed the “Super Bowl of AI.”


 

Blind Spot on AI — from the-job.beehiiv.com by Paul Fain
Office tasks are being automated now, but nobody has answers on how education and worker upskilling should change.

Students and workers will need help adjusting to a labor market that appears to be on the verge of a historic disruption as many business processes are automated. Yet job projections and policy ideas are sorely lacking.

The benefits of agentic AI are already clear for a wide range of organizations, including small nonprofits like CareerVillage. But the ability to automate a broad range of business processes means that education programs and skills training for knowledge workers will need to change. And as Chung writes in a must-read essay, we have a blind spot with predicting the impacts of agentic AI on the labor market.

“Without robust projections,” he writes, “policymakers, businesses, and educators won’t be able to come to terms with how rapidly we need to start this upskilling.”

 

Eight Legal Tech Trends Set To Impact Law Firms In 2025 — from forbes.com by Daniel Farrar

Trends To Watch This Year

1. A Focus On Client Experience And Technology-Driven Client Services
2. Evolution Of Pricing Models In Legal Services
3. Cloud Computing, Remote Work, Globalization And Cross-Border Legal Services
4. Legal Analytics And Data-Driven Decision Making
5. Automation Of Routine Legal Tasks
6. Integration Of Artificial Intelligence
7. AI In Mergers And Acquisitions
8. Cybersecurity And Data Privacy


The Future of Legal Tech Jobs: Trends, Opportunities, and Skills for 2025 and Beyond — from jdjournal.com by Maria Lenin Laus

This guide explores the top legal tech jobs in demand, key skills for success, hiring trends, and future predictions for the legal industry. Whether you’re a lawyer, law student, IT professional, or business leader, this article will help you navigate the shifting terrain of legal tech careers.

Top Legal Tech Hiring Trends for 2025

1. Law Firms Are Prioritizing Tech Skills
Over 65% of law firms are hiring legal tech experts over traditional attorneys.
AI implementation, automation, and analytics skills are now must-haves.
2. In-House Legal Teams Are Expanding Legal Tech Roles
77% of corporate legal teams say tech expertise is now mandatory.
More companies are investing in contract automation and legal AI tools.
3. Law Schools Are Adding Legal Tech Courses
Institutions like Harvard and Stanford now offer AI and legal tech curriculums.
Graduates with legal tech skills gain a competitive advantage.


Legal tech predictions for 2025: What’s next in legal innovation? — from jdsupra.com

  1. Collaboration tools reshape communication and documentation
  2. From chatbots to ‘AI agents’: The next evolution
  3. Governance AI frameworks take center stage
  4. Local governments drive AI accountability
  5. Continuously growing legal fees and ROI become a primary focus

Meet Ivo, The Legal AI That Will Review Your Contracts — from forbes.com by David Prosser

Contract reviews and negotiations are the bread-and-butter work of many corporate lawyers, but artificial intelligence (AI) promises to transform every aspect of the legal profession. Legaltech start-up Ivo, which is today announcing a $16 million Series A funding round, wants to make manual contract work a thing of the past.

“We help in-house legal teams to red-line and negotiate contract agreements more quickly and easily,” explains Min-Kyu Jung, CEO and co-founder of Ivo. “It’s a challenge that couldn’t be solved well by AI until relatively recently, but the evolution of generative AI has made it possible.”


A&O Shearman, Cooley Leading Legal Tech Investment at Law Firms — from news.bloomberglaw.com by Evan Ochsner

  • Leading firms are investing their own resources in legal tech
  • Firms seek to tailor tech development to specific functions
 

DeepSeek: How China’s AI Breakthrough Could Revolutionize Educational Technology — from nickpotkalitsky.substack.com by Nick Potkalitsky
Can DeepSeek’s 90% efficiency boost make AI accessible to every school?

The most revolutionary aspect of DeepSeek for education isn’t just its cost—it’s the combination of open-source accessibility and local deployment capabilities. As Azeem Azhar notes, “R-1 is open-source. Anyone can download and run it on their own hardware. I have R1-8b (the second smallest model) running on my Mac Mini at home.”

Real-time Learning Enhancement

  • AI tutoring networks that collaborate to optimize individual learning paths
  • Immediate, multi-perspective feedback on student work
  • Continuous assessment and curriculum adaptation

The question isn’t whether this technology will transform education—it’s how quickly institutions can adapt to a world where advanced AI capabilities are finally within reach of every classroom.


Over 100 AI Tools for Teachers — from educatorstechnology.com by Med Kharbach, PhD

I know through your feedback on my social media and blog posts that several of you have legitimate concerns about the impact of AI in education, especially those related to data privacy, academic dishonesty, AI dependence, loss of creativity and critical thinking, plagiarism, to mention a few. While these concerns are valid and deserve careful consideration, it’s also important to explore the potential benefits AI can bring when used thoughtfully.

Tools such as ChatGPT and Claude are like smart research assistants that are available 24/7 to support you with all kinds of tasks from drafting detailed lesson plans, creating differentiated materials, generating classroom activities, to summarizing and simplifying complex topics. Likewise, students can use them to enhance their learning by, for instance, brainstorming ideas for research projects, generating constructive feedback on assignments, practicing problem-solving in a guided way, and much more.

The point here is that AI is here to stay and expand, and we better learn how to use it thoughtfully and responsibly rather than avoid it out of fear or skepticism.


Beth’s posting links to:

 


Derek’s posting on LinkedIn


From Theory to Practice: How Generative AI is Redefining Instructional Materials — from edtechinsiders.substack.com by Alex Sarlin
Top trends and insights from The Edtech Insiders Generative AI Map research process about how Generative AI is transforming Instructional Materials

As part of our updates to the Edtech Insiders Generative AI Map, we’re excited to release a new mini market map and article deep dive on Generative AI tools that are specifically designed for Instructional Materials use cases.

In our database, the Instructional Materials use case category encompasses tools that:

  • Assist educators by streamlining lesson planning, curriculum development, and content customization
  • Enable educators or students to transform materials into alternative formats, such as videos, podcasts, or other interactive media, in addition to leveraging gaming principles or immersive VR to enhance engagement
  • Empower educators or students to transform text, video, slides or other source material into study aids like study guides, flashcards, practice tests, or graphic organizers
  • Engage students through interactive lessons featuring historical figures, authors, or fictional characters
  • Customize curriculum to individual needs or pedagogical approaches
  • Empower educators or students to quickly create online learning assets and courses

On a somewhat-related note, also see:


 

DeepSeek hits the scene — MUCH too early to say how this open-source platform will play out here in the United States. Things are tense between the U.S. and China.

10 WILD Deepseek demos — from theneurondaily.com

Over the last week, pretty much everyone in the AI space has been losing their minds over Deepseek R1. The open source community has been loving it, the closed source tech giants have been less than loving it, and even the mainstream media is starting to pick up on how last week’s R1 launch was a big deal.

We’ve been trying to understand just how powerful R1 really is, so we rounded up everything we could find that shows off just what this little AI side project can do.

Here’s some WILD demos of what people have done with Deepseek R1 so far:



Is DeepSeek the new DeepMind? — from ai-supremacy.com by Michael Spencer
AI supremacy isn’t just about compute or U.S. leadership, it’s about how you work to make models more efficient and improve their accessibility for everyone.

Over the last week especially but over the last month generally, the AI Zeitgeist is flooding with what DeepSeek’s R1 means for the larger ecosystem and the future of AI as a whole. See some articles I’m reading on DeepSeek here (Google Doc).

It’s an important moment insofar as everything from export controls to AI infrastructure, to capex spend and AI talent moats, is being put into question.



 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and the start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but this likely won’t always remain the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to making policy that make institutions hesitant to or delay their ability to produce it. Policy (as opposed to guidance) is much more likely to include a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their areas but for the other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true of every policy: it rarely comes fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

Where to start with AI agents: An introduction for COOs — from fortune.com by Ganesh Ayyar

Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.

Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?

The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.


Create podcasts in minutes — from elevenlabs.io by Eleven Labs
Now anyone can be a podcast producer


Top AI tools for business — from theneuron.ai


This week in AI: 3D from images, video tools, and more — from heatherbcooper.substack.com by Heather Cooper
From 3D worlds to consistent characters, explore this week’s AI trends

Another busy AI news week, so I organized it into categories:

  • Image to 3D
  • AI Video
  • AI Image Models & Tools
  • AI Assistants / LLMs
  • AI Creative Workflow: Luma AI Boards

Want to speak Italian? Microsoft AI can make it sound like you do. — this is a gifted article from The Washington Post;
A new AI-powered interpreter is expected to simulate speakers’ voices in different languages during Microsoft Teams meetings.

Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.


What Is Agentic AI?  — from blogs.nvidia.com by Erik Pounds
Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.

The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.

Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.


 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects; models strong enough that bad actors would actively want to use them for dangerous ends. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com by
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where The Neuron shares this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.
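Part of why a one-prompt Wordle clone is plausible is that the game's core logic is genuinely small. A minimal sketch of the guess-scoring rule (a hypothetical illustration, not Lovable's generated code) fits in a few lines:

```python
def score_guess(guess: str, answer: str) -> list[str]:
    """Score a Wordle guess: 'green' = right letter, right spot;
    'yellow' = right letter, wrong spot; 'gray' = not present."""
    result = ["gray"] * len(guess)
    remaining: dict[str, int] = {}
    # First pass: mark exact matches and count leftover answer letters.
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "green"
        else:
            remaining[a] = remaining.get(a, 0) + 1
    # Second pass: mark misplaced letters, consuming leftovers so a
    # letter isn't flagged yellow more times than it appears.
    for i, g in enumerate(guess):
        if result[i] == "gray" and remaining.get(g, 0) > 0:
            result[i] = "yellow"
            remaining[g] -= 1
    return result
```

The hard part of such apps is rarely this logic; it is the surrounding scaffolding (UI, auth, storage) that tools like Lovable claim to generate in one click.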


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I'm a bit suspicious here, as the "conversation practice" product seems a bit too scripted at times. But I'm posting it because the idea of using AI to practice soft skills makes a great deal of sense:


 

How to use NotebookLM for personalized knowledge synthesis — from ai-supremacy.com by Michael Spencer and Alex McFarland
Two powerful workflows that unlock everything else. Intro: Golden Age of AI Tools and AI agent frameworks begins in 2025.

What is Google's Learn About?
Google’s new AI tool, Learn About, is designed as a conversational learning companion that adapts to individual learning needs and curiosity. It allows users to explore various topics by entering questions, uploading images or documents, or selecting from curated topics. The tool aims to provide personalized responses tailored to the user’s knowledge level, making it user-friendly and engaging for learners of all ages.

Is Generative AI leading to a new take on Educational technology? It certainly appears promising heading into 2025.

The Learn About tool utilizes the LearnLM AI model, which is grounded in educational research and focuses on how people learn. Google insists that unlike traditional chatbots, it emphasizes interactive and visual elements in its responses, enhancing the educational experience. For instance, when asked about complex topics like the size of the universe, Learn About not only provides factual information but also includes related content, vocabulary building tools, and contextual explanations to deepen understanding.

 

Five key issues to consider when adopting an AI-based legal tech — from legalfutures.co.uk by Mark Hughes

As more of our familiar legal resources have started to embrace a generative AI overhaul, and new players have come to the market, there are some key issues that your law firm needs to consider when adopting an AI-based legal tech.

  • Licensing
  • Data protection
  • The data sets
  • …and others

Knowable Introduces Gen AI Tool It Says Will Revolutionize How Companies Interact with their Contracts — from lawnext.com by Bob Ambrogi

Knowable, a legal technology company specializing in helping organizations bring order to their executed agreements, has announced Ask Knowable, a suite of generative AI-powered tools aimed at transforming how legal teams interact with and understand what is in their contracts.

Released today as a commercial preview and set to launch for general availability in March 2025, the feature marks a significant step forward in leveraging large language models to address the complexities of contract management, the company says.


The Global Legal Post teams up with LexisNexis to explore challenges and opportunities of Gen AI adoption — from globallegalpost.com by
Series of articles will investigate key criteria to consider when investing in Gen AI

The Global Legal Post has teamed up with LexisNexis to help inform readers’ decision-making in the selection of generative AI (Gen AI) legal research solutions.

The Generative AI Legal Research Hub in association with LexisNexis will host a series of articles exploring the key criteria law firms and legal departments should consider when seeking to harness the power of Gen AI to improve the delivery of legal services.


Leveraging AI to Grow Your Legal Practice — from americanbar.org

Summary

  • AI-powered tools like chat and scheduling meet clients’ demand for instant, personalized service, improving engagement and satisfaction.
  • Firms using AI see up to a 30% increase in lead conversion, cutting client acquisition costs and maximizing marketing investments.
  • AI streamlines processes, speeds up response times, and enhances client engagement—driving growth and long-term client retention.

How a tech GC views AI-enabled efficiencies and regulation — from legaldive.com by Justin Bachman
PagerDuty’s top in-house counsel sees legal AI tools as a way to scale resources without adding headcount while focusing lawyers on their high-value work.


Innovations in Legal Practice: How Tim Billick’s Firm Stays Ahead with AI and Technology — from techtimes.com by Elena McCormick

Enhancing Client Service through Technology
Beyond internal efficiency, Billick’s firm utilizes technology to improve client communication and engagement. By adopting client-facing AI tools, such as chatbots for routine inquiries and client portals for real-time updates, Practus makes legal processes more transparent and accessible to its clients. According to Billick, this responsiveness is essential in IP law, where clients often need quick updates and answers to time-sensitive questions about patents, trademarks, and licensing agreements.

AI-driven client management software is also part of the firm’s toolkit, enabling Billick and his team to track each client’s case progress and share updates efficiently. The firm’s technology infrastructure supports clients from various sectors, including engineering, software development, and consumer products, tailoring case workflows to meet unique needs within each industry. “Clients appreciate having immediate access to their case status, especially in industries where timing is crucial,” Billick shares.


New Generative AI Study Highlights Adoption, Use and Opportunities in the Legal Industry — from prnewswire.com by Relativity

CHICAGO, Nov. 12, 2024 /PRNewswire/ — Relativity, a global legal technology company, today announced findings from the IDC InfoBrief, Generative AI in Legal 2024, commissioned by Relativity. The study uncovers the rapid increase of generative AI adoption in the legal field, examining how legal professionals are navigating emerging challenges and seizing opportunities to drive legal innovation.

The international study surveyed attorneys, paralegals, legal operations professionals and legal IT professionals from law firms, corporations and government agencies. Respondents were located in Australia, Canada, Ireland, New Zealand, the United Kingdom and the United States. The data uncovered important trends on how generative AI has impacted the legal industry and how legal professionals will use generative AI in the coming years.

 

A Code-Red Leadership Crisis: A Wake-Up Call for Talent Development — from learningguild.com by Dr. Arika Pierce Williams

This company’s experience offers three crucial lessons for other organizational leaders who may be contemplating cutting or reducing talent development investments in their 2025 budgets to focus on “growth.”

  1. Leadership development isn’t a luxury – it’s a strategic imperative…
  2. Succession planning must be an ongoing process, not a reactive measure…
  3. The cost of developing leaders is far less than the cost of not having them when you need them most…

Also from The Learning Guild, see:

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

  1. AI-driven course design
  2. Hyper-personalized content curation
  3. Immersive scenario-based training
  4. Smart chatbots
  5. Wearable devices
 
© 2025 | Daniel Christian