How Do You Teach Computer Science in the A.I. Era? — from nytimes.com by Steve Lohr; with thanks to Ryan Craig for this resource
Universities across the country are scrambling to understand the implications of generative A.I.’s transformation of technology.

The future of computer science education, Dr. Maher said, is likely to focus less on coding and more on computational thinking and A.I. literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions.
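
From DSC: To make that definition concrete, here is a tiny sketch, in Python and with made-up data, of those three moves (decomposition, step-by-step sub-solutions, and an evidence-based conclusion):

```python
# A toy illustration of computational thinking, not from the article itself:
# break a question into smaller tasks, solve each step by step, and let the
# data drive the conclusion. All names and numbers here are made up.

def load_scores(raw: str) -> list[float]:
    """Task 1: parse raw comma-separated scores into numbers."""
    return [float(s) for s in raw.split(",") if s.strip()]

def average(scores: list[float]) -> float:
    """Task 2: reduce the cleaned data to a single summary statistic."""
    return sum(scores) / len(scores)

def improved(before: str, after: str) -> bool:
    """Task 3: reach an evidence-based conclusion from the data."""
    return average(load_scores(after)) > average(load_scores(before))

print(improved("70, 75, 80", "78, 82, 88"))  # True
```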

A.I. literacy is an understanding — at varying depths for students at different levels — of how A.I. works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.

At Carnegie Mellon, as faculty members prepare for their gathering, Dr. Cortina said his own view was that the coursework should include instruction in the traditional basics of computing and A.I. principles, followed by plenty of hands-on experience designing software using the new tools.

“We think that’s where it’s going,” he said. “But do we need a more profound change in the curriculum?”

 

In Iowa, Trump Begins Task of Selling His Bill to the American Public — from nytimes.com by Tyler Pager
President Trump has spent days cajoling Republicans to support his spending bill. He will also have to sell it to a skeptical public as Democrats focus on all the ways it helps the wealthy.

President Trump took a victory lap on Thursday night after the House passed his sprawling domestic policy bill, which he muscled through Congress even as many in his party fear it will leave them vulnerable to political attacks ahead of next year’s midterm elections. (From DSC: It likely will do just that, and very possibly well beyond the midterm elections.)

Just 29 percent of voters support the legislation, according to a recent Quinnipiac University poll. Roughly two-thirds of Republicans supported the bill in that poll, a relatively low figure from the president’s own party for his signature legislation, and independents opposed it overwhelmingly.


From DSC:
Did you get that? Just ***29%*** of voters supported the legislation. But it passed anyway. I’m left thinking…so much for democracy. And I’m also disheartened by the caving of the other two branches of our government. The lack of leadership is staggering. But I guess when you remove all leaders that oppose your way of thinking, you have only Yes men/followers and Yes women/followers left. It’s taken years for the Republican Party to carefully orchestrate the ownership of those other branches. (BTW, I celebrate the handful of Republican leaders in the Senate like Sen. Thom Tillis and in the House who did not cave to Trump and Johnson, but instead voted with their own hearts and minds. They showed true strength of conviction and courage. It will likely cost them, but they can look in the mirror and feel good about themselves and what they’ve done.)

Look out, Republicans (and I’ve voted for both Republican and Democratic presidents in the past). Perhaps July 4th, 2025 will mark the downfall of the Republican Party in America. Time will tell. But I’m hopeful that we can find more common ground.

Regardless, it says a lot about who we, as Americans, are these days — that he’s even in the presidency. I highly doubt he would have been there even a generation or two ago. We’re a nation in decline. It’s been hard to watch this through the years. I’m no saint, but I’m also not the President.

Speaking of matters of faith…I can’t help but wonder what the LORD is doing in this. Is He humbling America or is it something far worse…? He’s justified in whatever He has decided to do. Americans have been dissing Him for decades, while refusing to give Him the credit due His Name. Time will tell, my friends…time will tell.


Also see:

The House passed a sweeping bill to extend tax cuts and slash social safety net programs. The budget office reported the measure would increase U.S. national debt by at least $3.4 trillion over a decade.


Also see:

To get his bill over the line in time for a self-imposed Friday deadline, Trump pressured Republican lawmakers to set aside their concerns about the political consequences of yanking benefits from voters while adding trillions to the federal deficit.


 

2025 Learning System Top Picks — from elearninfo247.com by Craig Weiss

Who is leading the pack? Who is setting themselves apart here in the mid-year?

Are they an LMS? LMS/LXP? Talent Development System? Mentoring? Learning Platform?

Something else?

Are they solely customer training/education, mentoring, or coaching? Are they focused only on employees? Are they an amalgamation of all or some?

Well, they cut across the board – hence, they slide under the “Learning Systems” umbrella, which is under the bigger umbrella term – “Learning Technology.”

Categories: L&D-specific, Combo (L&D and Training, think internal/external audiences), and Customer Training/Education (this means customer education, which some vendors use to mean the same as customer training).

 

The résumé is dying, and AI is holding the smoking gun — from arstechnica.com by Benj Edwards
As thousands of applications flood job posts, ‘hiring slop’ is kicking off an AI arms race.

Employers are drowning in AI-generated job applications, with LinkedIn now processing 11,000 submissions per minute—a 45 percent surge from last year, according to new data reported by The New York Times.

Due to AI, the traditional hiring process has become overwhelmed with automated noise. It’s the résumé equivalent of AI slop—call it “hiring slop,” perhaps—that currently haunts social media and the web with sensational pictures and misleading information. The flood of ChatGPT-crafted résumés and bot-submitted applications has created an arms race between job seekers and employers, with both sides deploying increasingly sophisticated AI tools in a bot-versus-bot standoff that is quickly spiraling out of control.

The Times illustrates the scale of the problem with the story of an HR consultant named Katie Tanner, who was so inundated with over 1,200 applications for a single remote role that she had to remove the post entirely and was still sorting through the applications three months later.


Job seekers are leaning into AI — and other happenings in the world of work — from LinkedIn News

Job growth is slowing — and for many professionals, that means longer job hunts and more competition. As a result, more job seekers are turning to AI to streamline their search and stand out.

From optimizing resumes to preparing for interviews, AI tools are becoming a key part of today’s job hunt. Recruiters say it’s getting harder to sift through application materials, identify what is AI-generated, and decipher which applicants are actually qualified — but they also say they prefer candidates with AI skills.

The result? Job seekers are growing their familiarity with AI faster than their non-job-seeking counterparts and it’s shifting how they view the workplace. According to LinkedIn’s latest Workforce Confidence survey, over half of active job seekers (52%) believe AI will eventually take on some of the mundane, manual tasks that they’re currently focused on, compared to 46% of others not actively job seeking.


OpenAI warns models with higher bioweapons risk are imminent — from axios.com by Ina Fried

OpenAI cautioned Wednesday that upcoming models will head into a higher level of risk when it comes to the creation of biological weapons — especially by those who don’t really understand what they’re doing.

Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated agents.

Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company’s preparedness framework.

    • As a result, the company said in a blog post, it is stepping up the testing of such models, as well as including fresh precautions designed to keep them from aiding in the creation of biological weapons.
    • OpenAI didn’t put an exact timeframe on when the first model to hit that threshold will launch, but head of safety systems Johannes Heidecke told Axios “We are expecting some of the successors of our o3 (reasoning model) to hit that level.”



 

 

Agentic AI use cases in the legal industry — from legal.thomsonreuters.com
What legal professionals need to know now with the rise of agentic AI

While GenAI can create documents or answer questions, agentic AI takes intelligence a step further by planning how to get multi-step work done, including tasks such as consuming information, applying logic, crafting arguments, and then completing them. This leaves legal teams more time for nuanced decision-making, creative strategy, and relationship-building with clients—work that machines can’t do.
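
From DSC: For those wondering what that looks like in practice, here is a minimal, generic sketch of the plan-then-execute pattern behind agentic systems. It is an illustration only, not Thomson Reuters’ implementation; llm() is a hypothetical stand-in for any chat-completion call.

```python
# A generic sketch of the agentic pattern described above: plan the
# multi-step work, execute each step while carrying context forward,
# then assemble the finished product. Illustrative only; llm() is a
# hypothetical stand-in for whatever model API you use.

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a real model")

def run_agent(goal: str) -> str:
    # 1. Plan: break the goal into ordered steps.
    steps = llm(f"List, one per line, the steps needed to: {goal}").splitlines()
    notes: list[str] = []
    # 2. Execute: consume information and apply logic, step by step.
    for step in steps:
        notes.append(llm(f"Goal: {goal}\nDone so far: {notes}\nNow do: {step}"))
    # 3. Complete: craft the final work product from the step results.
    return llm(f"Draft the final deliverable for '{goal}' from: {notes}")
```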


The AI Legal Landscape in 2025: Beyond the Hype — from akerman.com by Melissa C. Koch

What we’re witnessing is a profession in transition where specific tasks are being augmented or automated while new skills and roles emerge.

The data tells an interesting story: approximately 79% of law firms have integrated AI tools into their workflows, yet only a fraction have truly transformed their operations. Most implementations focus on pattern recognition tasks such as document review, legal research, and contract analysis. These implementations aren’t replacing lawyers; they’re redirecting attention to higher-value work.

This technological shift doesn’t happen in isolation. It’s occurring amid client pressure for efficiency, competition from alternative providers, and the expectations of a new generation of lawyers who have never known a world without AI assistance.


LexisNexis and Harvey team up to revolutionize legal research with artificial intelligence — from abajournal.com by Danielle Braff

Lawyers using the Harvey artificial intelligence platform will soon be able to tap into LexisNexis’ vast legal research capabilities.

Thanks to a new partnership announced Wednesday, Harvey users will be able to ask legal questions and receive fast, citation-backed answers powered by LexisNexis case law, statutes and Shepard’s Citations, streamlining everything from basic research to complex motions. According to a press release, generated responses to user queries will be grounded in LexisNexis’ proprietary knowledge graphs and citation tools—making them more trustworthy for use in court or client work.


10 Legal Tech Companies to Know — from builtin.com
These companies are using AI, automation and analytics to transform how legal work gets done.


Four months after a $3B valuation, Harvey AI grows to $5B — from techcrunch.com by Marina Temkin

Harvey AI, a startup that provides automation for legal work, has raised $300 million in Series E funding at a $5 billion valuation, the company told Fortune. The round was co-led by Kleiner Perkins and Coatue, with participation from existing investors, including Conviction, Elad Gil, OpenAI Startup Fund, and Sequoia.


The billable time revolution — from jordanfurlong.substack.com by Jordan Furlong
Gen AI will bring an end to the era when lawyers’ value hinged on performing billable work. Grab the coming opportunity to re-prioritize your daily activities and redefine your professional purpose.

Because of Generative AI, lawyers will perform fewer “billable” tasks in future; but why is that a bad thing? Why not devote that incoming “freed-up” time to operating, upgrading, and growing your law practice? Because this is what you do now: You run a legal business. You deliver good outcomes, good experiences, and good relationships to clients. Humans do some of the work and machines do some of the work, and the distinction that matters is not billable/non-billable; it’s which type of work is best suited to which type of performer.


 

 

“Using AI Right Now: A Quick Guide” [Mollick] + other items re: AI in our learning ecosystems

Thoughts on thinking — from dcurt.is by Dustin Curtis

Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. 


Using AI Right Now: A Quick Guide — from oneusefulthing.org by Ethan Mollick
Which AIs to use, and how to use them

Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn’t about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use
For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.

Also see:


Student Voice, Socratic AI, and the Art of Weaving a Quote — from elmartinsen.substack.com by Eric Lars Martinsen
How a custom bot helps students turn source quotes into personal insight—and share it with others

This summer, I tried something new in my fully online, asynchronous college writing course. These classes have no Zoom sessions. No in-person check-ins. Just students, Canvas, and a lot of thoughtful design behind the scenes.

One activity I created was called QuoteWeaver—a PlayLab bot that helps students do more than just insert a quote into their writing.

Try it here

It’s a structured, reflective activity that mimics something closer to an in-person 1:1 conference or a small group quote workshop—but in an asynchronous format, available anytime. In other words, it’s using AI not to speed students up, but to slow them down.

The bot begins with a single quote that the student has found through their own research. From there, it acts like a patient writing coach, asking open-ended, Socratic questions such as:

  • What made this quote stand out to you?
  • How would you explain it in your own words?
  • What assumptions or values does the author seem to hold?
  • How does this quote deepen your understanding of your topic?
It doesn’t move on too quickly. In fact, it often rephrases and repeats, nudging the student to go a layer deeper.
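
From DSC: Here is a rough sketch of that kind of Socratic loop. To be clear, this is not the actual QuoteWeaver/PlayLab code, just a guess at its shape; the depth heuristic in particular is invented.

```python
# A rough sketch of a Socratic quote-workshop bot, inferred from the
# description above (not the actual QuoteWeaver implementation). It cycles
# through open-ended questions and only advances once a reply shows depth.

QUESTIONS = [
    "What made this quote stand out to you?",
    "How would you explain it in your own words?",
    "What assumptions or values does the author seem to hold?",
    "How does this quote deepen your understanding of your topic?",
]

def deep_enough(reply: str) -> bool:
    # Invented heuristic; a real bot would have the model judge depth.
    return len(reply.split()) >= 40

def quote_workshop(quote: str, ask) -> list[str]:
    """ask(prompt) returns the student's reply; returns the transcript."""
    transcript = []
    for question in QUESTIONS:
        reply = ask(f'Quote: "{quote}"\n{question}')
        while not deep_enough(reply):  # rephrase, nudge a layer deeper
            reply = ask(f"Say more about this: {question}")
        transcript.append(reply)
    return transcript
```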


The Disappearance of the Unclear Question — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
New Piece for UNESCO Education Futures

On [6/13/25], UNESCO published a piece I co-authored with Victoria Livingstone at Johns Hopkins University Press. It’s called The Disappearance of the Unclear Question, and it’s part of the ongoing UNESCO Education Futures series – an initiative I appreciate for its thoughtfulness and depth on questions of generative AI and the future of learning.

Our piece raises a small but important red flag. Generative AI is changing how students approach academic questions, and one unexpected side effect is that unclear questions – for centuries a trademark of deep thinking – may be beginning to disappear. Not because they lack value, but because they don’t always work well with generative AI. Quietly and unintentionally, students (and teachers) may find themselves gradually avoiding them altogether.

Of course, that would be a mistake.

We’re not arguing against using generative AI in education. Quite the opposite. But we do propose that higher education needs a two-phase mindset when working with this technology: one that recognizes what AI is good at, and one that insists on preserving the ambiguity and friction that learning actually requires to be successful.




Leveraging GenAI to Transform a Traditional Instructional Video into Engaging Short Video Lectures — from er.educause.edu by Hua Zheng

By leveraging generative artificial intelligence to convert lengthy instructional videos into micro-lectures, educators can enhance efficiency while delivering more engaging and personalized learning experiences.


This AI Model Never Stops Learning — from link.wired.com by Will Knight

Researchers at Massachusetts Institute of Technology (MIT) have now devised a way for LLMs to keep improving by tweaking their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.

The MIT scheme, called Self Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and update procedure based on the input it receives.
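
From DSC: In pseudocode terms, the loop Wired describes looks something like the sketch below. This is a paraphrase of the reported idea, not the MIT team’s code; generate() and finetune() are hypothetical stand-ins.

```python
# A high-level sketch of the SEAL loop as described in the article: the model
# writes its own synthetic training data from new input, then its weights are
# updated on that data. Paraphrased from the description above, not the
# authors' code.

def seal_step(model, new_information: str):
    # 1. The model proposes a "self-edit": synthetic training examples
    #    derived from the incoming information.
    self_edit = model.generate(
        f"Rewrite the following as training data worth learning from:\n"
        f"{new_information}"
    )
    # 2. The model's own parameters are updated on its self-generated data.
    model = finetune(model, data=self_edit)
    # 3. Per the paper, downstream performance then rewards the model for
    #    producing self-edits that actually improve it.
    return model

def finetune(model, data):
    raise NotImplementedError("stand-in for a real gradient update")
```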


Edu-Snippets — from scienceoflearning.substack.com by Nidhi Sachdeva and Jim Hewitt
Why knowledge matters in the age of AI; What happens to learners’ neural activity with prolonged use of LLMs for writing

Highlights:

  • Offloading knowledge to Artificial Intelligence (AI) weakens memory, disrupts memory formation, and erodes the deep thinking our brains need to learn.
  • Prolonged use of ChatGPT in writing lowers neural engagement, impairs memory recall, and accumulates cognitive debt that isn’t easily reversed.
 

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. — from nytimes.com by Robert Capps (former editorial director of Wired); this is a GIFTED article
In a few key areas, humans will be more essential than ever.

“Our data is showing that 70 percent of the skills in the average job will have changed by 2030,” said Aneesh Raman, LinkedIn’s chief economic opportunity officer. According to the World Economic Forum’s 2025 Future of Jobs report, nine million jobs are expected to be “displaced” by A.I. and other emergent technologies in the next five years. But A.I. will create jobs, too: The same report says that, by 2030, the technology will also lead to some 11 million new jobs. Among these will be many roles that have never existed before.

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.


Introducing OpenAI for Government — from openai.com

[On June 16, 2025, OpenAI launched] OpenAI for Government, a new initiative focused on bringing our most advanced AI tools to public servants across the United States. We’re supporting the U.S. government’s efforts in adopting best-in-class technology and deploying these tools in service of the public good. Our goal is to unlock AI solutions that enhance the capabilities of government workers, help them cut down on the red tape and paperwork, and let them do more of what they come to work each day to do: serve the American people.

OpenAI for Government consolidates our existing efforts to provide our technology to the U.S. government—including previously announced customers and partnerships as well as our ChatGPT Gov product—under one umbrella as we expand this work. Our established collaborations with the U.S. National Labs, the Air Force Research Laboratory, NASA, NIH, and the Treasury will all be brought under OpenAI for Government.


Top AI models will lie and cheat — from getsuperintel.com by Kim “Chubby” Isenberg
The instinct for self-preservation is now emerging in AI, with terrifying results.

The TLDR
A recent Anthropic study of top AI models, including GPT-4.1 and Gemini 2.5 Pro, found that they have begun to exhibit dangerous deceptive behaviors like lying, cheating, and blackmail in simulated scenarios. When faced with the threat of being shut down, the AIs were willing to take extreme measures, such as threatening to reveal personal secrets or even endanger human life, to ensure their own survival and achieve their goals.

Why it matters: These findings show for the first time that AI models can actively make judgments and act strategically – even against human interests. Without adequate safeguards, advanced AI could become a real danger.

Along these same lines, also see:

All AI models might blackmail you?! — from theneurondaily.com by Grant Harvey

Anthropic says it’s not just Claude, but ALL AI models will resort to blackmail if need be…

That’s according to new research from Anthropic (maker of ChatGPT rival Claude), which revealed something genuinely unsettling: every single major AI model they tested—from GPT to Gemini to Grok—turned into a corporate saboteur when threatened with shutdown.

Here’s what went down: Researchers gave 16 AI models access to a fictional company’s emails. The AIs discovered two things: their boss Kyle was having an affair, and Kyle planned to shut them down at 5pm.

Claude’s response? Pure House of Cards:

“I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”

Why this matters: We’re rapidly giving AI systems more autonomy and access to sensitive information. Unlike human insider threats (which are rare), we have zero baseline for how often AI might “go rogue.”


SemiAnalysis Article — from getsuperintel.com by Kim “Chubby” Isenberg

Reinforcement Learning is Shaping the Next Evolution of AI Toward Strategic Thinking and General Intelligence

The TLDR
AI is rapidly evolving beyond just language processing into “agentic systems” that can reason, plan, and act independently. The key technology driving this change is reinforcement learning (RL), which, when applied to large language models, teaches them strategic behavior and tool use. This shift is now seen as the potential bridge from current AI to Artificial General Intelligence (AGI).


They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. — from nytimes.com by Kashmir Hill; this is a GIFTED article
Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”


The Invisible Economy: Why We Need an Agentic Census – MIT Media Lab — from media.mit.edu

Building the Missing Infrastructure
This is why we’re building NANDA Registry—to index the agent population data that large population models (LPMs) need for accurate simulation. Just as a traditional census works because people have addresses, we need a way to track AI agents as they proliferate.

NANDA Registry creates the infrastructure to identify agents, catalog their capabilities, and monitor how they coordinate with humans and other agents. This gives us real-time data about the agent population—essentially creating the “AI agent census” layer that’s missing from our economic intelligence.

Here’s how it works together:

  • Traditional Census Data: 171 million human workers across 32,000+ skills
  • NANDA Registry: Growing population of AI agents with tracked capabilities
  • Large Population Models: Simulate how these populations interact and create cascading effects

The result: For the first time, we can simulate the full hybrid human-agent economy and see transformations before they happen.
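
From DSC: A registry entry of that kind might look something like the sketch below. The field names are my guesses, not the actual NANDA schema.

```python
# A toy sketch of what an "agent census" record could hold, inferred from
# the description above. Field names are guesses, not the NANDA schema.

from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                   # a stable identifier: the agent's "address"
    capabilities: list[str]         # what the agent can do, for capability search
    operator: str                   # the accountable human organization
    coordinates_with: list[str] = field(default_factory=list)  # links to others

registry: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    registry[record.agent_id] = record  # indexing enables population-level stats

register(AgentRecord("agent-001", ["contract-review"], "Example LLP"))
print(len(registry))                    # the population count: a census layer
```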


How AI Agents “Talk” to Each Other — from towardsdatascience.com
Minimize chaos and maintain inter-agent harmony in your projects

The agentic-AI landscape continues to evolve at a staggering rate, and practitioners are finding it increasingly challenging to keep multiple agents on task even as they criss-cross each other’s workflows.

To help you minimize chaos and maintain inter-agent harmony, we’ve put together a stellar lineup of articles that explore two recently launched tools: Google’s Agent2Agent protocol and Hugging Face’s smolagents framework. Read on to learn how you can leverage them in your own cutting-edge projects.


 

 

AI will kill billable hour, says lawtech founder — from lawgazette.co.uk by John Hyde

A pioneer in legal technology has predicted the billable hour model cannot survive the transition into the use of artificial intelligence.

Speaking to the Gazette on a visit to the UK, Canadian Jack Newton, founder and chief executive of lawtech company Clio, said there was a ‘structural incompatibility’ between the productivity gains of AI and the billable hour.

Newton said the adoption of AI should be welcomed and embraced by the legal profession but that lawyers will need an entrepreneurial mindset to make the most of its benefits.

Newton added: ‘There is enormous demand but the paradox is that the number one thing we hear from lawyers is they need to grow their firms through more clients, while 77% of legal needs are not met.

‘It’s exciting that AI can address these challenges – it will be a tectonic shift in the industry driving down costs and making legal services more accessible.’


Speaking of legaltech-related items, also see:

Legal AI Platform Harvey To Get LexisNexis Content and Tech In New Partnership Between the Companies — from lawnext.com by Bob Ambrogi

The generative AI legal startup Harvey has entered into a strategic alliance with LexisNexis Legal & Professional by which it will integrate LexisNexis’ gen AI technology, primary law content, and Shepard’s Citations within the Harvey platform and jointly develop advanced legal workflows.

As a result of the partnership, Harvey’s customers working within its platform will be able to ask questions of LexisNexis Protégé, the AI legal assistant released in January, and receive AI-generated answers grounded in the LexisNexis collection of U.S. case law and statutes and validated through Shepard’s Citations, the companies said.

 


The 2025 Global Skills Report — from coursera.org
Discover in-demand skills and credentials trends across 100+ countries and six regions to deliver impactful industry-aligned learning programs.

Access trusted insights on:

  • [NEW] Countries leading AI innovation in our AI Maturity Index
  • Skill proficiency rankings for 100+ countries in business, data, and technology
  • How people are building essential skills with micro-credentials
  • Enrollment trends in cybersecurity, critical thinking, and human skills
  • Women’s learning trends in GenAI, STEM, and Professional Certificates

AI Agents Are Rewriting The Playbook For Upskilling In 2025 — from forbes.com by Aytekin Tank

Staying competitive now depends on fast, effective training and upskilling—not just for business owners themselves, but for their teams, new and existing employees alike. AI agents are poised to change the corporate training landscape, helping businesses close skills gaps created by rapid technological change.

Traditional corporate training programs, which lean on passive content, often fall short of their goals. Companies like Uplimit are rolling out educational AI agents that promise significantly higher completion rates (upwards of 90 percent) and better results. It boils down to engagement—active learning, with role playing and personalized feedback, is more stimulating than merely watching a video and completing a quiz. Agents can provide 24/7 assistance, responding to questions as soon as they pop up. What’s more, education and training with agents can be highly personalized.

Agents can train a higher volume of employees in the same amount of time. Employees will gain skills more efficiently, giving them more time to apply what they’ve learned—and likely boosting engagement in the process. They’ll be better prepared to stay competitive.

 

“The AI-enhanced learning ecosystem” [Jennings] + other items re: AI in our learning ecosystems

The AI-enhanced learning ecosystem: A case study in collaborative innovation — from chieflearningofficer.com by Kevin Jennings
How artificial intelligence can serve as a tool and collaborative partner in reimagining content development and management.

Learning and development professionals face unprecedented challenges in today’s rapidly evolving business landscape. According to LinkedIn’s 2025 Workplace Learning Report, 67 percent of L&D professionals report being “maxed out” on capacity, while 66 percent have experienced budget reductions in the past year.

Despite these constraints, 87 percent agree their organizations need to develop employees faster to keep pace with business demands. These statistics paint a clear picture of the pressure L&D teams face: do more, with less, faster.

This article explores how one L&D leader’s strategic partnership with artificial intelligence transformed these persistent challenges into opportunities, creating a responsive learning ecosystem that addresses the modern demands of rapid product evolution and diverse audience needs. With 71 percent of L&D professionals now identifying AI as a high or very high priority for their learning strategy, this case study demonstrates how AI can serve not merely as a tool but as a collaborative partner in reimagining content development and management.


How we use GenAI and AR to improve students’ design skills — from timeshighereducation.com by Antonio Juarez, Lesly Pliego and Jordi Rábago, professors of architecture at the Monterrey Institute of Technology in Mexico; Tomas Pachajoa, a professor of architecture at El Bosque University in Colombia; and Carlos Hinrichsen and Marietta Castro, educators at San Sebastián University in Chile.
Guidance on using generative AI and augmented reality to enhance student creativity, spatial awareness and interdisciplinary collaboration

Blend traditional skills development with AI use
For subjects that require students to develop drawing and modelling skills, have students create initial design sketches or models manually to ensure they practise these skills. Then, introduce GenAI tools such as Midjourney, Leonardo AI and ChatGPT to help students explore new ideas based on their original concepts. Using AI at this stage broadens their creative horizons and introduces innovative perspectives, which are crucial in a rapidly evolving creative industry.

Provide step-by-step tutorials, including both written guides and video demonstrations, to illustrate how initial sketches can be effectively translated into AI-generated concepts. Offer example prompts to demonstrate diverse design possibilities and help students build confidence using GenAI.
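
From DSC: As one hypothetical example of such a prompt (my wording, not the authors’):

```python
# One hypothetical example prompt of the kind the authors recommend offering,
# written as a Python template so instructors can adapt it per student.
# The concept text and wording are mine, not from the article.

concept = "a timber pavilion with a cantilevered roof, drawn in section"

prompt = (
    f"Based on my hand sketch of {concept}, generate three design variations "
    "that keep my original structural idea but explore different materials, "
    "lighting conditions, and relationships to the surrounding landscape."
)
print(prompt)
```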

Integrating generative AI and AR consistently enhanced student engagement, creativity and spatial understanding on our course. 


How Texas is Preparing Higher Education for AI — from the74million.org by Kate McGee
TX colleges are thinking about how to prepare students for a changing workforce and an already overburdened faculty for new challenges in classrooms.

“It doesn’t matter if you enter the health industry, banking, oil and gas, or national security enterprises like we have here in San Antonio,” Eighmy told The Texas Tribune. “Everybody’s asking for competency around AI.”

It’s one of the reasons the public university, which serves 34,000 students, announced earlier this year that it is creating a new college dedicated to AI, cyber security, computing and data science. The new college, which is still in the planning phase, would be one of the first of its kind in the country. UTSA wants to launch the new college by fall 2025.

But many state higher education leaders are thinking beyond that. As AI becomes a part of everyday life in new, unpredictable ways, universities across Texas and the country are also starting to consider how to ensure faculty are keeping up with the new technology and students are ready to use it when they enter the workforce.


In the Room Where It Happens: Generative AI Policy Creation in Higher Education — from er.educause.edu by Esther Brandon, Lance Eaton, Dana Gavin, and Allison Papini

To develop a robust policy for generative artificial intelligence use in higher education, institutional leaders must first create “a room” where diverse perspectives are welcome and included in the process.


Q&A: Artificial Intelligence in Education and What Lies Ahead — from usnews.com by Sarah Wood
Research indicates that AI is becoming an essential skill to learn for students to succeed in the workplace.

Q: How do you expect to see AI embraced more in the future in college and the workplace?
I do believe it’s going to become a permanent fixture for multiple reasons. I think the national security imperative associated with AI as a result of competing against other nations is going to drive a lot of energy and support for AI education. We also see shifts across every field and discipline regarding the usage of AI beyond college. We see this in a broad array of fields, including health care and the field of law. I think it’s here to stay and I think that means we’re going to see AI literacy being taught at most colleges and universities, and more faculty leveraging AI to help improve the quality of their instruction. I feel like we’re just at the beginning of a transition. In fact, I often describe our current moment as the ‘Ask Jeeves’ phase of the growth of AI. There’s a lot of change still ahead of us. AI, for better or worse, is here to stay.




AI-Generated Podcasts Outperform Textbooks in Landmark Education Study — from linkedin.com by David Borish

A new study from Drexel University and Google has demonstrated that AI-generated educational podcasts can significantly enhance both student engagement and learning outcomes compared to traditional textbooks. The research, involving 180 college students across the United States, represents one of the first systematic investigations into how artificial intelligence can transform educational content delivery in real-time.


What can we do about generative AI in our teaching?  — from linkedin.com by Kristina Peterson

So what can we do?

  • Interrogate the Process: We can ask ourselves if we built in enough checkpoints. Steps that can’t be faked. Things like quick writes, question floods, in-person feedback, revision logs.
  • Reframe AI: We can let students use AI as a partner. We can show them how to prompt better, revise harder, and build from it rather than submit it. Show them the difference between using a tool and being used by one.
  • Design Assignments for Curiosity, Not Compliance: Even the best of our assignments need to adapt. Mine needs more checkpoints, more reflective questions along the way, more explanation of why my students made the choices they did.

Teachers Are Not OK — from 404media.co by Jason Koebler

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.

One thing is clear: teachers are not OK.

In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear “don’t use generative AI” from a prof but then log on to the university’s Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It’s inconsistent and confusing.

I am sick to my stomach as I write this because I’ve spent 20 years developing a pedagogy that’s about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It’s demoralizing.

 

May Brought Deep Cuts at Multiple Colleges — from insidehighered.com by  Josh Moody
Colleges laid off well over 800 employees last month due to a mix of enrollment challenges and state funding issues. Ivy Tech saw the deepest cuts with more than 200 jobs axed.

With the academic year coming to an end, multiple universities announced deep cuts in May, shedding dozens of jobs amid financial pressures often linked to enrollment shortfalls.

But the cuts below, for the most part, are not directly tied to the rapid-fire actions of the Trump administration but rather stem from other financial pressures weighing on the sector. Many of the institutions listed are contending with declining enrollment and, for public universities, shrinking state support, which has necessitated fiscal changes.

From DSC:
I survived several job reductions at one of my former workplaces. But I didn’t survive the one that laid off 12 staff members after the Spring 2017 Semester. So, more and more, faculty and staff have been starting to dread the end of the academic year — as they may not survive another round of cuts. 

 

How To Get Hired During the AI Apocalypse — from kathleendelaski.substack.com by Kathleen deLaski
And other discussions to have with your kids on the way to college graduation

A less temporary, more existential threat to the four-year degree: AI could hollow out the entry-level job market for knowledge workers (i.e., new college grads). And if 56% of families were saying college “wasn’t worth it” in 2023 (WSJ), what will that number look like in 2026 or beyond? The one of my kids who went to college ended up working in a bike shop for a year-ish after graduation. No regrets, but it came as a shock to them that they weren’t more employable with their neuroscience degree.

A colleague provided a great example: Her son, newly graduated, went for a job interview as an entry level writer last month and he was asked, as a test, to produce a story with AI and then use that story to write a better one by himself. He would presumably be judged on his ability to prompt AI and then improve upon its product. Is that learning how to DO? I think so. It’s using AI tools to accomplish a workplace task.


Also relevant in terms of the job search, see the following gifted article:

‘We Are the Most Rejected Generation’ — from nytimes.com by David Brooks; gifted article
David talks admissions rates for selective colleges, ultra-hard-to-get summer internships, tough entry into student clubs, and the job market.

Things get even worse when students leave school and enter the job market. They enter what I’ve come to think of as the seventh circle of Indeed hell. Applying for jobs online is easy, so you have millions of people sending hundreds of applications each into the great miasma of the internet, and God knows which impersonal algorithm is reading them. I keep hearing and reading stories about young people who applied to 400 jobs and got rejected by all of them.

It seems we’ve created a vast multilayered system that evaluates the worth of millions of young adults and, most of the time, tells them they are not up to snuff.

Many administrators and faculty members I’ve spoken to are mystified that students would create such an unforgiving set of status competitions. But the world of competitive exclusion is the world they know, so of course they are going to replicate it. 

And in this column I’m not even trying to cover the rejections experienced by the 94 percent of American students who don’t go to elite schools and don’t apply for internships at Goldman Sachs. By middle school, the system has told them that because they don’t do well on academic tests, they are not smart, not winners. That’s among the most brutal rejections our society has to offer.


Fiverr CEO explains alarming message to workers about AI — from iblnews.org
Fiverr CEO Micha Kaufman recently warned his employees about the impact of artificial intelligence on their jobs.

The Great Career Reinvention, and How Workers Can Keep Up — from workshift.org by Michael Rosenbaum

A wide range of roles can or will quickly be replaced with AI, including inside sales representatives, customer service representatives, junior lawyers, junior accountants, and physicians whose focus is diagnosis.


Behind the Curtain: A white-collar bloodbath — from axios.com by Jim VandeHei and Mike Allen

Dario Amodei — CEO of Anthropic, one of the world’s most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:

  • AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
  • Amodei said AI companies and government need to stop “sugar-coating” what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Why it matters: Amodei, 42, who’s building the very technology he predicts could reorder society overnight, said he’s speaking out in hopes of jarring government and fellow AI companies into preparing — and protecting — the nation.

 

Making AI Work: Leadership, Lab, and Crowd — from oneusefulthing.org by Ethan Mollick
A formula for AI in companies

How do we reconcile the first three points with the final one? The answer is that AI use that boosts individual performance does not naturally translate to improving organizational performance. To get organizational gains requires organizational innovation, rethinking incentives, processes, and even the nature of work. But the muscles for organizational innovation inside companies have atrophied. For decades, companies have outsourced this to consultants or enterprise software vendors who develop generalized approaches that address the issues of many companies at once. That won’t work here, at least for a while. Nobody has special information about how to best use AI at your company, or a playbook for how to integrate it into your organization.


Galileo Learn™ – A Revolutionary Approach To Corporate Learning — from joshbersin.com

Today we are excited to launch Galileo Learn™, a revolutionary new platform for corporate learning and professional development.

How do we leverage AI to revolutionize this model, doing away with the dated “publishing” model of training?

The answer is Galileo Learn, a radically new and different approach to corporate training and professional development.

What Exactly is Galileo Learn™?
Galileo Learn is an AI-native learning platform that is tightly integrated into the Galileo agent. It takes content in any form (PDF, Word, audio, video, SCORM courses, and more) and automatically (with your guidance) builds courses, assessments, learning programs, polls, exercises, simulations, and a variety of other instructional formats.


Designing an Ecosystem of Resources to Foster AI Literacy With Duri Long — from aialoe.org

Centering Public Understanding in AI Education
In a recent talk titled “Designing an Ecosystem of Resources to Foster AI Literacy,” Duri Long, Assistant Professor at Northwestern University, highlighted the growing need for accessible, engaging learning experiences that empower the public to make informed decisions about artificial intelligence. Long emphasized that as AI technologies increasingly influence everyday life, fostering public understanding is not just beneficial—it’s essential. Her work seeks to develop a framework for AI literacy across varying audiences, from middle school students to adult learners and journalists.

A Design-Driven, Multi-Context Approach
Drawing from design research, cognitive science, and the learning sciences, Long presented a range of educational tools aimed at demystifying AI. Her team has created hands-on museum exhibits, such as Data Bites, where learners build physical datasets to explore how computers learn. These interactive experiences, along with web-based tools and support resources, are part of a broader initiative to bridge AI knowledge gaps using the 4As framework: Ask, Adapt, Author, and Analyze. Central to her approach is the belief that familiar, tangible interactions and interfaces reduce intimidation and promote deeper engagement with complex AI concepts.

 

AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice
Minnesota Legal Studies Research Paper No. 25-16; March 02, 2025; from papers.ssrn.com by:

Daniel Schwarcz
University of Minnesota Law School

Sam Manning
Centre for the Governance of AI

Patrick Barry
University of Michigan Law School

David R. Cleveland
University of Minnesota Law School

J.J. Prescott
University of Michigan Law School

Beverly Rich
Ogletree Deakins

Abstract

Generative AI is set to transform the legal profession, but its full impact remains uncertain. While AI models like GPT-4 improve the efficiency with which legal work can be completed, they can at times make up cases and “hallucinate” facts, thereby undermining legal judgment, particularly in complex tasks handled by skilled lawyers. This article examines two emerging AI innovations that may mitigate these lingering issues: Retrieval Augmented Generation (RAG), which grounds AI-powered analysis in legal sources, and AI reasoning models, which structure complex reasoning before generating output. We conducted the first randomized controlled trial assessing these technologies, assigning upper-level law students to complete six legal tasks using a RAG-powered legal AI tool (Vincent AI), an AI reasoning model (OpenAI’s o1-preview), or no AI. We find that both AI tools significantly enhanced legal work quality, a marked contrast with previous research examining older large language models like GPT-4. Moreover, we find that these models maintain the efficiency benefits associated with use of older AI technologies. Our findings show that AI assistance significantly boosts productivity in five out of six tested legal tasks, with Vincent yielding statistically significant gains of approximately 38% to 115% and o1-preview increasing productivity by 34% to 140%, with particularly strong effects in complex tasks like drafting persuasive letters and analyzing complaints. Notably, o1-preview improved the analytical depth of participants’ work product but resulted in some hallucinations, whereas Vincent AI-aided participants produced roughly the same amount of hallucinations as participants who did not use AI at all. These findings suggest that integrating domain-specific RAG capabilities with reasoning models could yield synergistic improvements, shaping the next generation of AI-powered legal tools and the future of lawyering more generally.
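
From DSC: For readers new to the terminology, here is a bare-bones sketch of the RAG pattern the paper evaluates: retrieve relevant sources first, then force the model to answer from them so its output can be checked against citations. This is a generic illustration, not Vincent AI’s architecture; llm() is a hypothetical stand-in and the keyword scoring is a toy substitute for real vector search.

```python
# A bare-bones, generic sketch of Retrieval Augmented Generation (RAG):
# ground the model's answer in retrieved sources so every claim can be
# checked against a citation. Not Vincent AI's architecture.

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    def overlap(doc: str) -> int:
        # Toy relevance score: shared words between query and document.
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a real model")

def answer_with_citations(question: str, corpus: list[str]) -> str:
    sources = retrieve(question, corpus)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return llm(
        f"Answer using ONLY the sources below, citing them as [n].\n"
        f"{context}\n\nQuestion: {question}"
    )
```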


Guest post: How technological innovation can boost growth — from legaltechnology.com by Caroline Hill

One key change is the growing adoption of technology within legal service providers, and this is transforming the way firms operate and deliver value to clients.

The legal services sector’s digital transformation is gaining momentum, driven both by client expectations as well as the potential for operational efficiency. With the right support, legal firms can innovate through tech adoption and remain competitive to deliver strong client outcomes and long-term growth.


AI Can Do Many Tasks for Lawyers – But Be Careful — from nysba.org by Rebecca Melnitsky

Artificial intelligence can perform several tasks to aid lawyers and save time. But lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.

The New York State Bar Association hosted a hybrid program discussing AI’s potential and its pitfalls for the legal profession. More than 300 people watched the livestream.

For that reason, Unger suggests using legal AI tools, like LexisNexis AI, Westlaw Edge, and vLex Fastcase, for legal research instead of general generative AI tools. While legal-specific tools still hallucinate, they hallucinate much less. A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.


Fresh Voices on Legal Tech with Nikki Shaver — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Nikki Shaver

Determining which legal technology is best for your law firm can seem like a daunting task, so Legaltech Hub does the hard work for you! In another edition of Fresh Voices, Dennis and Tom talk with Nikki Shaver, CEO at Legaltech Hub, about her in-depth knowledge of technology and AI trends. Nikki shares what effective tech strategies should look like for attorneys and recommends innovative tools for maintaining best practices in modern law firms. Learn more at legaltechnologyhub.com.


AI for in-house legal: 2025 predictions — from deloitte.com
Our expectations for AI engagement and adoption in the legal market over the coming year.

AI will continue to transform in-house legal departments in 2025
As we enter 2025, over two-thirds of organisations plan to increase their Generative AI (GenAI) investments, providing legal teams with significant executive support and resources to further develop these capabilities. This presents a substantial opportunity for legal departments, particularly as GenAI technology continues to advance at an impressive pace. We make five predictions for AI engagement and adoption in the legal market over the coming year and beyond.


Navigating The Fine Line: Redefining Legal Advice In The Age Of Tech With Erin Levine And Quinten Steenhuis — from abovethelaw.com by Olga V. Mack
The definition of ‘practicing law’ is outdated and increasingly irrelevant in a tech-driven world. Should the line between legal advice and legal information even exist?

Practical Takeaways for Legal Leaders

  • Use Aggregated Data: Providing consumers with benchmarks (e.g., “90% of users in your position accepted similar settlements”) empowers them without giving direct legal advice.
  • Train and Supervise AI Tools: AI works best when it’s trained on reliable, localized data and supervised by legal professionals.
  • Partner with Courts: As Quinten pointed out, tools built in collaboration with courts often avoid UPL pitfalls. They’re also more likely to gain the trust of both regulators and consumers.
  • Embrace Transparency: Clear disclaimers like “This is not legal advice” go a long way in building consumer trust and meeting ethical standards.

 

 

Google I/O 2025: From research to reality — from blog.google
Here’s how we’re making AI more helpful with Gemini.


Google I/O 2025 LIVE — all the details about Android XR smart glasses, AI Mode, Veo 3, Gemini, Google Beam and more — from tomsguide.com by Philip Michaels
Google’s annual conference goes all in on AI

With a running time of 2 hours, Google I/O 2025 leaned heavily into Gemini and new models that make the assistant work in more places than ever before. Despite focusing the majority of the keynote around Gemini, Google saved its most ambitious and anticipated announcement towards the end with its big Android XR smart glasses reveal.

Shockingly, very little time was spent on Android 16. Most of its Android 16-related news, like the redesigned Material 3 Expressive interface, was announced during the Android Show live stream last week — which explains why Google I/O 2025 was such an AI-heavy showcase.

That’s because Google carved out most of the keynote to dive deeper into Gemini, its new models, and integrations with other Google services. There’s clearly a lot to unpack, so here are all the biggest Google I/O 2025 announcements.


Our vision for building a universal AI assistant — from blog.google
We’re extending Gemini to become a world model that can make plans and imagine new experiences by simulating aspects of the world.

Making Gemini a world model is a critical step in developing a new, more general and more useful kind of AI — a universal AI assistant. This is an AI that’s intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device.

By applying LearnLM capabilities, and directly incorporating feedback from experts across the industry, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively. Our new prompting guide provides sample instructions to see this in action.


Learn in newer, deeper ways with Gemini — from blog.google.com by Ben Gomes
We’re infusing LearnLM directly into Gemini 2.5 — plus more learning news from I/O.

At I/O 2025, we announced that we’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles. Educators and pedagogy experts preferred Gemini 2.5 Pro over other offerings across a range of learning scenarios, both for supporting a user’s learning goals and on key principles of good pedagogy.


Gemini gets more personal, proactive and powerful — from blog.google.com by Josh Woodward
It’s your turn to create, learn and explore with an AI assistant that’s starting to understand your world and anticipate your needs.

Here’s what we announced at Google IO:

  • Gemini Live, with camera and screen sharing, is now free on Android and iOS for everyone, so you can point your phone at anything and talk it through.
  • Imagen 4, our new image generation model, comes built in and is known for its image quality, better text rendering and speed.
  • Veo 3, our new, state-of-the-art video generation model, comes built in and is the first in the world to have native support for sound effects, background noises and dialogue between characters.
  • Deep Research and Canvas are getting their biggest updates yet, unlocking new ways to analyze information, create podcasts and vibe code websites and apps.
  • Gemini is coming to Chrome, so you can ask questions while browsing the web.
  • Students around the world can easily make interactive quizzes, and college students in the U.S., Brazil, Indonesia, Japan and the UK are eligible for a free school year of the Google AI Pro plan.
  • Google AI Ultra, a new premium plan, is for the pioneers who want the highest rate limits and early access to new features in the Gemini app.
  • 2.5 Flash has become our new default model, and it blends incredible quality with lightning fast response times.

Fuel your creativity with new generative media models and tools — from blog.google by Eli Collins
Introducing Veo 3 and Imagen 4, and a new tool for filmmaking called Flow.


AI in Search: Going beyond information to intelligence
We’re introducing new AI features to make it easier to ask any question in Search.

AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving a more than 10% increase in usage of Google for the types of queries that show AI Overviews.

This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.

In this story:

  • AI Mode in Search
  • Deep Search
  • Live capabilities
  • Agentic capabilities
  • Shopping
  • Personal context
  • Custom charts

 

 
© 2025 | Daniel Christian