2025: The Year the Frontier Firm Is Born — from Microsoft

We are entering a new reality—one in which AI can reason and solve problems in remarkable ways. This intelligence on tap will rewrite the rules of business and transform knowledge work as we know it. Organizations today must navigate the challenge of preparing for an AI-enhanced future, where AI agents will gain increasing levels of capability over time that humans will need to harness as they redesign their business. Human ambition, creativity, and ingenuity will continue to create new economic value and opportunity as we redefine work and workflows.

As a result, a new organizational blueprint is emerging, one that blends machine intelligence with human judgment, building systems that are AI-operated but human-led. Like the Industrial Revolution and the internet era, this transformation will take decades to reach its full promise and involve broad technological, societal, and economic change.

To help leaders understand how knowledge work will evolve, Microsoft analyzed survey data from 31,000 workers across 31 countries, LinkedIn labor market trends, and trillions of Microsoft 365 productivity signals. We also spoke with AI-native startups, academics, economists, scientists, and thought leaders to explore what work could become. The data and insights point to the emergence of an entirely new organization, a Frontier Firm that looks markedly different from those we know today. Structured around on-demand intelligence and powered by “hybrid” teams of humans + agents, these companies scale rapidly, operate with agility, and generate value faster.

Frontier Firms are already taking shape, and within the next 2–5 years we expect that every organization will be on its journey to becoming one. 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. Adoption is accelerating: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.

The time to act is now. The question for every leader and employee is: how will you adapt?


On a somewhat related note, also see:

Exclusive: Anthropic warns fully AI employees are a year away — from axios.com by Sam Sabin

Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.

Why it matters: Managing those AI identities will require companies to reassess their cybersecurity strategies or risk exposing their networks to major security breaches.

The big picture: Virtual employees could be the next AI innovation hotbed, Jason Clinton, the company’s chief information security officer, told Axios.

 

Values in the wild: Discovering and analyzing values in real-world language model interactions — from anthropic.com

In the latest research paper from Anthropic’s Societal Impacts team, we describe a practical way we’ve developed to observe Claude’s values—and provide the first large-scale results on how Claude expresses those values during real-world conversations. We also provide an open dataset for researchers to run further analysis of the values and how often they arise in conversations.

Per The Rundown AI:

Why it matters: AI is increasingly shaping real-world decisions and relationships, making an understanding of its actual values more crucial than ever. This study also moves the alignment discussion toward more concrete observations, revealing that AI’s morals and values may be more contextual and situational than a static point of view.

Also from Anthropic, see:

Anthropic Education Report: How University Students Use Claude


Adobe Firefly: The next evolution of creative AI is here — from blog.adobe.com

In just under two years, Adobe Firefly has revolutionized the creative industry and generated more than 22 billion assets worldwide. Today at Adobe MAX London, we’re unveiling the latest release of Firefly, which unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities.

The new Firefly features enhanced models, improved ideation capabilities, expanded creative options, and unprecedented control. This update builds on earlier momentum when we introduced the Firefly web app and expanded into video and audio with Generate Video, Translate Video, and Translate Audio features.

Per The Rundown AI (here):

Why it matters: OpenAI’s recent image generator and other rivals have shaken up creative workflows, but Adobe’s IP-safe focus and the addition of competing models into Firefly allow professionals to remain in their established suite of tools — keeping users in the ecosystem while still having flexibility for other model strengths.

 

4 ways community colleges can boost workforce development — from highereddive.com by Natalie Schwartz
Higher education leaders at this week’s ASU+GSV Summit gave advice for how two-year institutions can boost the economic mobility of their students.

SAN DIEGO — How can community colleges deliver economic mobility to their students?

College leaders at this week’s ASU+GSV Summit, an annual education and technology conference, got a glimpse into that answer as they heard how community colleges are building support from business and industry and strengthening workforce development.

These types of initiatives may be helping to boost public perception of the value of community colleges vs. four-year institutions.

 

How People Are Really Using Gen AI in 2025 — from hbr.org by Marc Zao-Sanders



Here’s why you shouldn’t let AI run your company — from theneurondaily.com by Grant Harvey; emphasis DSC

When “vibe-coding” goes wrong… or, a parable about why you shouldn’t “vibe” your entire company.
Cursor, an AI-powered coding tool that many developers love to hate, face-planted spectacularly yesterday when its own AI support bot went off-script and fabricated a company policy, leading to a complete user revolt.

Here’s the short version:

  • A bug locked Cursor users out when switching devices.
  • Instead of human help, Cursor’s AI support bot confidently told users this was a new policy (it wasn’t).
  • No human checked the replies—big mistake.
  • The fake news spread, and devs canceled subscriptions en masse.
  • A Reddit thread about it got mysteriously nuked, fueling suspicion.

The reality? Just a bug, plus a bot hallucination… doing maximum damage.

Why it matters: This is what we’d call “vibe-companying”—blindly trusting AI with critical functions without human oversight.

Think about it like this: this was JUST a startup. If more big corporations continue to lay off entire departments, replaced by AI, these already byzantine companies will become increasingly opaque, unaccountable systems where no one, human or AI, fully understands what’s happening or who’s responsible.

Our take? Kafka dude has it right. We need to pay attention to WHAT we’re actually automating. Because automating more bureaucracy at scale, with agents we increasingly don’t understand or don’t double check, can potentially make companies less intelligent—and harder to fix when things inevitably go wrong.


 

 

What does ‘age appropriate’ AI literacy look like in higher education? — from timeshighereducation.com by Fun Siong Lim
As AI literacy becomes an essential work skill, universities need to move beyond developing these competencies at ‘primary school’ level in their students. Here, Fun Siong Lim reflects on frameworks to support higher-order AI literacies

Like platforms developed at other universities, Project NALA offers a front-end interface (known as the builder) for faculty to create their own learning assistant. An idea we have is to open the builder up to students to allow them to create their own GenAI assistant as part of our AI literacy curriculum. As they design, configure and test their own assistant, they will learn firsthand how generative AI works. They get to test performance-enhancement approaches beyond prompt engineering, such as grounding the learning assistant with curated materials (retrieval-augmented generation) and advanced ideas such as incorporating knowledge graphs.

They should have the opportunity to analyse, evaluate and create responsible AI solutions. Offering students the opportunity to build their own AI assistants could be a way forward to develop these much-needed skills.


How to Use ChatGPT 4o’s Update to Turn Key Insights Into Clear Infographics (Prompts Included) — from evakeiffenheim.substack.com by Eva Keiffenheim
This 3-step workflow helps you break down books, reports, or slide-decks into professional visuals that accelerate understanding.

This article shows you how to find core ideas, prompt GPT-4o for a design brief, and generate clean, professional images that stick. These aren’t vague “creative visuals”—they’re structured for learning, memory, and action.

If you’re a lifelong learner, educator, creator, or just someone who wants to work smarter, this process is for you.

You’ll spend less time re-reading and more time understanding. And maybe—just maybe—you’ll build ideas that not only click in your brain, but also stick in someone else’s.


SchoolAI Secures $25 Million to Help Teachers and Schools Reach Every Student — from globenewswire.com
 The Classroom Experience platform gives every teacher and student their own AI tools for personalized learning

SchoolAI’s Classroom Experience platform combines AI assistants for teachers, which help with classroom preparation and other administrative work, with Spaces: personalized AI tutors, games, and lessons that can adapt to each student’s unique learning style and interests. Together, these tools give teachers actionable insights into how students are doing and how to deliver targeted support when it matters most.

“Teachers and schools are navigating hard challenges with shrinking budgets, teacher shortages, growing class sizes, and ongoing recovery from pandemic-related learning gaps,” said Caleb Hicks, founder and CEO of SchoolAI. “It’s harder than ever to understand how every student is really doing. Teachers deserve powerful tools to help extend their impact, not add to their workload. This funding helps us double down on connecting the dots for teachers and students, and later this year, bringing school administrators and parents at home onto the platform as well.”


AI in Education, Part 3: Looking Ahead – The Future of AI in Learning — from rdene915.com by Dr. Rachelle Dené Poth

In the first and second parts of my AI series, I focused on where we see AI in classrooms. Benefits range from personalized learning and accessibility tools to AI-driven grading and teaching-assistant support. In Part 2, I focused on some of the important ethical considerations that must be part of the conversation. Schools need to focus on data privacy, bias, overreliance, and the equity divide. For this last part of the current AI series, I wanted to focus on the future. Where do we go from here?


Anthropic Education Report: How University Students Use Claude — from anthropic.com

The key findings from our Education Report are:

  • STEM students are early adopters of AI tools like Claude, with Computer Science students particularly overrepresented (accounting for 36.8% of students’ conversations while comprising only 5.4% of U.S. degrees). In contrast, Business, Health, and Humanities students show lower adoption rates relative to their enrollment numbers.
  • We identified four patterns by which students interact with AI, each of which was present in our data at approximately equal rates (each 23-29% of conversations): Direct Problem Solving, Direct Output Creation, Collaborative Problem Solving, and Collaborative Output Creation.
  • Students primarily use AI systems for creating (putting information together to produce something new) and analyzing (taking apart the known and identifying relationships), such as creating coding projects or analyzing legal concepts. This aligns with higher-order cognitive functions on Bloom’s Taxonomy and raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.

From the Kuali Days 2025 Conference: A CEO’s View of Planning for AI — from campustechnology.com by Mary Grush
A Conversation with Joel Dehlin

How can a company serving higher education navigate the changes AI brings to the ed tech marketplace? What will customers expect in this dynamic? Here, CT talks with Kuali CEO Joel Dehlin, who shared his company’s AI strategies in a featured plenary session, “Sneak Peek of AI in Kuali Build,” at Kuali Days 2025 in Anaheim.


How students can use generative AI — from aliciabankhofer.substack.com by Alicia Bankhofer
Part 4 of 4 in my series on Teaching and Learning in the AI Age

This article is the culmination of a series exploring AI’s impact on education.

Part 1: What Educators Need outlined essential AI literacy skills for teachers, emphasizing the need to move beyond basic ChatGPT exploration to understand the full spectrum of AI tools available in education.

Part 2: What Students Need addressed how students require clear guidance to use AI safely, ethically, and responsibly, with emphasis on developing critical thinking skills alongside AI literacy.

Part 3: How Educators Can Use GenAI presented ten practical use cases for teachers, from creating differentiated resources to designing assessments, demonstrating how AI can reclaim 5-7 hours weekly for meaningful student interactions.

Part 4: How Students Can Use GenAI (this article) provides frameworks for guiding student AI use based on Joscha Falck’s dimensions: learning about, with, through, despite, and without AI.


Mapping a Multidimensional Framework for GenAI in Education — from er.educause.edu by Patricia Turner
Prompting careful dialogue through incisive questions can help chart a course through the ongoing storm of artificial intelligence.

The goal of this framework is to help faculty, educational developers, instructional designers, administrators, and others in higher education engage in productive discussions about the use of GenAI in teaching and learning. As others have noted, theoretical frameworks will need to be accompanied by research and teaching practice, each reinforcing and reshaping the others to create understandings that will inform the development of approaches to GenAI that are both ethical and maximally beneficial, while mitigating potential harms to those who engage with it.


Instructional Design Isn’t Dying — It’s Specialising — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, how AI is impacting the role & purpose of Instructional Design

Together, these developments have revealed something important: despite widespread anxiety, the instructional design role isn’t dying—it’s specialising.

What we’re witnessing isn’t the automation of instructional design and the death of the instructional designer, but rather the evolution of the ID role into multiple distinct professional pathways.

The generalist “full stack” instructional designer is slowly but decisively fracturing into specialised roles that reflect both the capabilities of generative AI and the strategic imperatives facing modern organisations.

In this week’s blog post, I’ll share what I’ve learned about how our field is transforming, and what it likely means for you and your career path.

Those instructional designers who cling to traditional generalist models risk being replaced, but those who embrace specialisation, data fluency, and AI collaboration will excel and lead the next evolution of the field. Similarly, those businesses that continue to view L&D as a cost centre and focus on automating content delivery will be outperformed, while those that invest in building agile, AI-enabled learning ecosystems will drive measurable performance gains and secure their competitive advantage.


Adding AI to Every Step in Your eLearning Design Workflow — from learningguild.com by George Hanshaw

We know that eLearning is a staple of training and development. The expectations of the learners are higher than ever: They expect a dynamic, interactive, and personalized learning experience. As instructional designers, we are tasked with meeting these expectations by creating engaging and effective learning solutions.

The integration of Artificial Intelligence (AI) into our eLearning design process is a game-changer that can significantly enhance the quality and efficiency of our work.

Whether you use ADDIE or rapid prototyping, AI has a place in every aspect of your workflow. By integrating AI, you can ensure a more efficient and effective design process that adapts to the unique needs of your learners. This not only saves time and resources but also significantly enhances the overall learning experience. We will explore the needs analysis and the general design process.

 

The following resource is via Roberto Ferraro:

Micromanagement — from psychsafety.com by Jade Garratt

Psychological Safety and Micromanagement
Those who have followed our work at Psych Safety for a while will know that we believe exploring not just what to do – the behaviours and practices that support psychological safety – but also what to avoid can be hugely valuable. Understanding the behaviours that damage psychological safety, what not to do, and even what not to say can help us build better workplaces.

There are many behaviours that damage psychological safety, and one that almost always comes up in our workshops when discussing cultures of fear is micromanagement. So we thought it was time we explored micromanagement in more detail, considering how and why it damages psychological safety and what we can do instead.

Micromanagement is a particular approach to leadership where a manager exhibits overly controlling behaviours or an excessive and inappropriate focus on minor details. They might scrutinise their team’s work closely, insist on checking work, refrain from delegating, and limit the autonomy people need to do their jobs well. It can also manifest as an authoritarian leadership style, where decision-making is centralised (back to themselves) and employees have little say in their work.


From DSC:
I was fortunate to not have a manager who was a micromanager until my very last boss/supervisor of my career. But it was that particular manager who made me call it quits and leave the track. She demeaned me in front of others, and was extremely directive and controlling. She wanted constant check-ins and progress reports. And I could go on and on here. 

But suffice it to say that after having worked for several decades, that kind of manager was not what I was looking for. And neither would you. By the way…my previous boss — at the same place — and I achieved a great deal in a very short time. She taught me a lot and was a great administrator, designer, professor, mentor, and friend. But that boss was moved to a different role as upper management/leadership changed. Then the micromanagement began after I reported to a different supervisor.

Anyway, don’t be a micromanager. If you are a recent graduate or are coming up on your graduation from college, learn that lesson now. No one likes to work for a micromanager. No one. It can make your employees’ lives miserable and do damage to their mental health, their enjoyment (or lack thereof) of work, and several other things that this article mentions. Instead, respect your employees. Trust your employees. Let them do their thing. See what they might need, then help meet those needs. Then get out of their way.


 

Organizing Teams for Continuous Learning: A Complete Guide — from intelligenthq.com

In today’s fast-paced business world, continuous learning has become a vital element for both individual and organizational growth. Teams that foster a culture of learning remain adaptable, innovative, and competitive. However, simply encouraging learning isn’t enough; the way teams are structured and supported plays a huge role in achieving long-term success. In this guide, we’ll explore how to effectively organize teams for continuous learning, leveraging tools, strategies, and best practices.

 

Reflections on “Are You Ready for the AI University? Everything is about to change.” [Latham]

Are You Ready for the AI University? Everything is about to change. — from chronicle.com by Scott Latham

Over the course of the next 10 years, AI-powered institutions will rise in the rankings. US News & World Report will factor a college’s AI capabilities into its calculations. Accrediting agencies will assess the degree of AI integration into pedagogy, research, and student life. Corporations will want to partner with universities that have demonstrated AI prowess. In short, we will see the emergence of the AI haves and have-nots.

What’s happening in higher education today has a name: creative destruction. The economist Joseph Schumpeter coined the term in 1942 to describe how innovation can transform industries. That typically happens when an industry has both a dysfunctional cost structure and a declining value proposition. Both are true of higher education.

Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance; guided by human professors as it sorts through the material, the AI would come to understand the structure of the discipline and then develop lectures, videos, supporting documentation, and assessments.

In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, a student going on a trip who wishes to take an exam on the plane will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.

Early and mid-career professors who hope to survive will need to adapt and learn how to work with AI. They will need to immerse themselves in research on AI and pedagogy and understand its effect on the classroom. 

From DSC:
I had a very difficult time deciding which excerpts to include; this solid article gave us so much more to think about. While I don’t agree with several things in it, EVERY professor, president, dean, and administrator working within higher education today needs to read this article and seriously consider what Scott Latham is saying.

Change is already here, but according to Scott, we haven’t seen anything yet. I agree with him, and as a futurist, one has to consider the potential scenarios that Scott lays out for AI’s creative destruction of higher education. Scott asserts that faculty members, doctoral students, and graduate/teaching assistants (and Teaching & Learning Centers and IT Departments, I would add) will experience significant impacts soon. But he doesn’t stop there. He brings in presidents, deans, and other members of the leadership teams out there.

There are a few places where Scott and I differ.

  • The foremost one is the importance of the human element — i.e., the human faculty member and students’ learning preferences. I think many (most?) students and lifelong learners will want to learn from a human being. IBM abandoned its 5-year, $100M ed push last year, and one of the key conclusions was that people want to learn from — and with — other people:

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

— Satya Nitta, a longtime computer researcher at IBM’s Watson Research Center in Yorktown Heights, NY

By the way, it isn’t easy for me to write this, as I wanted AI and other related technologies to be able to do just what IBM was hoping they would be able to do.

  • Also, I would use the term learning preferences where Scott uses the term learning styles.

Scott also mentions:

“In addition, faculty members will need to become technologists as much as scholars. They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course.”

It has been my experience from working with faculty members for over 20 years that not all faculty members want to become technologists. They may not have the time, interest, and/or aptitude to become one (and vice versa for technologists who likely won’t become faculty members).

That all said, Scott relays many things that I have reflected upon and relayed for years now via this Learning Ecosystems blog and also via The Learning from the Living [AI-Based Class] Room vision — the use of AI to offer personalized and job-relevant learning, the rising costs of higher education, the development of new learning-related offerings and credentials at far less expensive prices, the need to provide new business models and emerging technologies that are devoted more to lifelong learning, plus several other things.

So this article is definitely worth your time to read, especially if you are working in higher education or are considering a career therein!


Addendum later on 4/10/25:

U-M’s Ross School of Business, Google Public Sector launch virtual teaching assistant pilot program — from news.umich.edu by Jeff Karoub; via Paul Fain

Google Public Sector and the University of Michigan’s Ross School of Business have launched an advanced Virtual Teaching Assistant pilot program aimed at improving personalized learning and enlightening educators on artificial intelligence in the classroom.

The AI technology, aided by Google’s Gemini chatbot, provides students with all-hours access to support and self-directed learning. The Virtual TA represents the next generation of educational chatbots, serving as a sophisticated AI learning assistant that instructors can tailor to their specific lessons and teaching styles.

The Virtual TA facilitates self-paced learning for students, provides on-demand explanations of complex course concepts, guides them through problem-solving, and acts as a practice partner. It’s designed to foster critical thinking by never giving away answers, ensuring students actively work toward solutions.

 

The 2025 AI Index Report — from Stanford University’s Human-Centered Artificial Intelligence Lab (hai.stanford.edu); item via The Neuron

Top Takeaways

  1. AI performance on demanding benchmarks continues to improve.
  2. AI is increasingly embedded in everyday life.
  3. Business is all in on AI, fueling record investment and usage, as research continues to show strong productivity impacts.
  4. The U.S. still leads in producing top AI models—but China is closing the performance gap.
  5. The responsible AI ecosystem evolves—unevenly.
  6. Global AI optimism is rising—but deep regional divides remain.
  7. …and several more

The Neuron’s take on this:

So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.

Just keep two big things in mind:

  1. Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
  2. Companies are still figuring out how to make AI perfectly safe and fair—cause it still makes mistakes.

So, use the tools, find what helps you, but don’t trust them completely.

We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.


Addendum on 4/16:

 

Job hunting and hiring in the age of AI: Where did all the humans go? — from washingtonpost.com by Taylor Telford
The proliferation of artificial intelligence tools and overreliance on software such as ChatGPT are making the job market increasingly surreal.

The speedy embrace of AI tools meant to make job hunting and hiring more efficient is causing headaches and sowing distrust in these processes, people on both sides of the equation say. While companies embrace AI recruiters and application scanning systems, many job seekers are trying to boost their odds with software that generates application materials, optimizes them for AI and applies to hundreds of jobs in minutes.

Meanwhile, recruiters and hiring managers are fielding more applicants than they can keep up with, yet contend that finding real, qualified workers amid the bots, cheaters and deepfakes is only getting tougher as candidates use AI to write their cover letters, bluff their way through interviews and even hide their identities.

“I’m pro-AI in the sense that it allows you to do things that were impossible before … but it is being misused wildly,” Freire said. The problem is “when you let it do the thinking for you, it goes from a superpower to a crutch very easily.”

 

Uplimit raises stakes in corporate learning with suite of AI agents that can train thousands of employees simultaneously — from venturebeat.com by Michael Nuñez

Uplimit unveiled a suite of AI-powered learning agents today designed to help companies rapidly upskill employees while dramatically reducing administrative burdens traditionally associated with corporate training.

The San Francisco-based company announced three sets of purpose-built AI agents that promise to change how enterprises approach learning and development: skill-building agents, program management agents, and teaching assistant agents. The technology aims to address the growing skills gap as AI advances faster than most workforces can adapt.

“There is an unprecedented need for continuous learning—at a scale and speed traditional systems were never built to handle,” said Julia Stiglitz, CEO and co-founder of Uplimit, in an interview with VentureBeat. “The companies best positioned to thrive aren’t choosing between AI and their people—they’re investing in both.”


Introducing Claude for Education — from anthropic.com

Today we’re launching Claude for Education, a specialized version of Claude tailored for higher education institutions. This initiative equips universities to develop and implement AI-enabled approaches across teaching, learning, and administration—ensuring educators and students play a key role in actively shaping AI’s role in society.

As part of announcing Claude for Education, we’re introducing:

  1. Learning mode: A new Claude experience that guides students’ reasoning process rather than providing answers, helping develop critical thinking skills
  2. University-wide Claude availability: Full campus access agreements with Northeastern University, London School of Economics and Political Science (LSE), and Champlain College, making Claude available to all students
  3. Academic partnerships: Joining Internet2 and working with Instructure to embed AI into teaching & learning with Canvas LMS
  4. Student programs: A new Claude Campus Ambassadors program along with an initiative offering API credits for student projects

A comment on this from The Rundown AI:

Why it matters: Education continues to grapple with AI, but Anthropic is flipping the script by making the tech a partner in developing critical thinking rather than an answer engine. While the controversy over its use likely isn’t going away, this generation of students will have access to the most personalized, high-quality learning tools ever.


Should College Graduates Be AI Literate? — from chronicle.com by Beth McMurtrie (behind a paywall)
More institutions are saying yes. Persuading professors is only the first barrier they face.

Last fall one of Jacqueline Fajardo’s students came to her office, eager to tell her about an AI tool that was helping him learn general chemistry. Had she heard of Google NotebookLM? He had been using it for half a semester in her honors course. He confidently showed her how he could type in the learning outcomes she posted for each class and the tool would produce explanations and study guides. It even created a podcast based on an academic paper he had uploaded. He did not feel it was important to take detailed notes in class because the AI tool was able to summarize the key points of her lectures.


Showing Up for the Future: Why Educators Can’t Sit Out the AI Conversation — from marcwatkins.substack.com with a guest post from Lew Ludwig

The Risk of Disengagement
Let’s be honest: most of us aren’t jumping headfirst into AI. At many of our institutions, it’s not a gold rush—it’s a quiet standoff. But the group I worry most about isn’t the early adopters. It’s the faculty who’ve decided to opt out altogether.

That choice often comes from a place of care. Concerns about data privacy, climate impact, exploitative labor, and the ethics of using large language models are real—and important. But choosing not to engage at all, even on ethical grounds, doesn’t remove us from the system. It just removes our voices from the conversation.

And without those voices, we risk letting others—those with very different priorities—make the decisions that shape what AI looks like in our classrooms, on our campuses, and in our broader culture of learning.



Turbocharge Your Professional Development with AI — from learningguild.com by Dr. RK Prasad

You’ve just mastered a few new eLearning authoring tools, and now AI is knocking on the door, offering to do your job faster, smarter, and without needing coffee breaks. Should you be worried? Or excited?

If you’re a Learning and Development (L&D) professional today, AI is more than just a buzzword—it’s transforming the way we design, deliver, and measure corporate training. But here’s the good news: AI isn’t here to replace you. It’s here to make you better at what you do.

The challenge is to harness its potential to build digital-ready talent, not just within your organization but within yourself.

Let’s explore how AI is reshaping L&D strategies and how you can leverage it for professional development.


5 Recent AI Notables — from automatedteach.com by Graham Clay

1. OpenAI’s New Image Generator
What Happened: OpenAI integrated a much more powerful image generator directly into GPT-4o, making it the default image creator in ChatGPT. Unlike previous image models, this one excels at accurately rendering text in images, precise visualization of diagrams/charts, and multi-turn image refinement through conversation.

Why It’s Big: For educators, this represents a significant advancement in creating educational visuals, infographics, diagrams, and other instructional materials with unprecedented accuracy and control. It’s not perfect, but you can now quickly generate custom illustrations that accurately display mathematical equations, chemical formulas, or process workflows — previously a significant hurdle in digital content creation — without requiring graphic design expertise or expensive software. This capability dramatically reduces the time between conceptualizing a visual aid and implementing it in course materials.


The 4 AI modes that will supercharge your workflow — from aiwithallie.beehiiv.com by Allie K. Miller
The framework most people and companies won’t discover until 2026


 

Investigating Informal Learning with Technology — from learningguild.com by Katie Belle (Curry) Nelson

Informal learning is having a moment right now, and it’s about time.

As learning professionals, we can often get caught up in designing, developing, and implementing formal learning experiences, which can cause informal learning to fall by the wayside and be easily overlooked. However, informal learning experiences can have major, long-term effects on learning and business outcomes, so finding creative ways to track them can be valuable for L&D departments.

Start small
These three methods are small steps toward understanding the informal learning environment and its impact on your organization. The reassuring thing about informal learning is that you can start small and incorporate more methods later, because informal learning is always taking place. Start with one area and begin to explore what you can find out about the content your learners want to know more about, how they are learning about things, and how others in the organization are solving problems.

 

It’s the end of work as we knew it
and I feel…

powerless to fight the technology that we pioneered
nostalgic for a world that moved on without us
after decades of paying our dues
for a payday that never came
…so yeah
not exactly fine.


The Gen X Career Meltdown — from nytimes.com by Steven Kurutz (DSC: This is a gifted article for you)
Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.

If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.

“I am having conversations every day with people whose careers are sort of over,” said Chris Wilcha, a 53-year-old film and TV director in Los Angeles.

Talk with people in their late 40s and 50s who once imagined they would be able to achieve great heights — or at least a solid career while flexing their creative muscles — and you are likely to hear about the photographer whose work dried up, the designer who can’t get hired or the magazine journalist who isn’t doing much of anything.

In the wake of the influencers comes another threat, artificial intelligence, which seems likely to replace many of the remaining Gen X copywriters, photographers and designers. By 2030, ad agencies in the United States will lose 32,000 jobs, or 7.5 percent of the industry’s work force, to the technology, according to the research firm Forrester.


From DSC:
This article reminds me of how tough it is to navigate change in our lives. For me, that was often because I was working with technologies. Being a technologist can be difficult, especially as one gets older and faces age discrimination in a variety of industries. You need to pick the right technologies and directions that will last (for me it was email, videoconferencing, the Internet, online-based education/training, discovering/implementing instructional technologies, and becoming a futurist).

For you younger folks out there — especially students within K-16 — aim to develop a perspective and a skillset that is all about adapting to change. You will likely need to reinvent yourself and/or pick up new skills over your working years. You are most assuredly required to be a lifelong learner now. That’s why I have been pushing for school systems to be more concerned with providing more choice and control to students — so that students actually like school and enjoy learning about new things.


 

 






 

Cultivating Speaking and Listening Skills in the Primary Grades — from edutopia.org by Rachel Scheer
Early elementary teachers can use these strategies to help students improve their oral communication skills.

The good news? There are many engaging and effective strategies to develop these interpersonal skills, and most are easy to incorporate into daily classroom routines. I use the strategies below to directly teach, model, and practice these essential communication skills at a developmentally appropriate level: turn-taking, small group speaking and listening, whole group speaking and listening, and accountable talk.

From DSC:
I love the parts about practicing how to LISTEN. We need more of that in our communications with one another…as well as when we are praying to God.


Teaching Students About Corporate Influences in a Curriculum — from edutopia.org by Elaine Alvey
By uncovering any hidden interests in a curriculum, teachers can open important discussions about media literacy with students.

These instances underscore the need for educators to be vigilant in vetting materials, recognizing that even seemingly reputable sources can harbor hidden agendas, necessitating a robust approach to media and information literacy both for ourselves and for students.

How to Spot Corporate Influences in Your Curriculum
So, how do we, as educators, navigate this minefield? Media literacy strategies offer important tools to equip ourselves and our students to analyze information landscapes intentionally, including the curricular resources we evaluate for use in our classrooms.

From DSC:
I would encourage you to take a look at the work my sister Sue Ellen Christian has been doing re: media literacy, news literacy, and more. She created the Wonder Media website to discuss those topics. Plus she collaborated with several other people and organizations to develop a large, professionally-done exhibit re: these important topics.


Boosting Engagement in World Language Classes With Games — from edutopia.org by Rachelle Dené Poth
Middle school teachers can use a variety of tech and no-tech games to help students build skills in the target language.

As a world language educator, I’ve always sought innovative ways to engage my students through meaningful learning experiences as they build their language skills. One way we do this is through gameplay. The benefits of games go far beyond simply learning and increased retention of vocabulary or grammar. Games can also foster collaboration, critical thinking, creativity, and problem-solving skills, making learning fun for students.


How Administrators Can Respond—Instead of React—in Tough Situations — from edutopia.org by Jessica Cabeen
These strategies can help school leaders stay self-regulated in the middle of frustrating and stressful moments.

Conversations That Might Be Better Left for Later
Not every conversation needs to happen in the heat of the moment. Some of the most productive conversations happen after we’ve given ourselves time to regulate. Here are a few categories of conversations that might benefit from a pause:

  • Difficult feedback conversations. If emotions are running high, it might be best to wait until you can approach the discussion with clarity and empathy. A rushed or reactive conversation can shut down dialogue rather than encourage growth.
  • Conflict resolution. When two parties are upset, stepping in immediately to mediate can sometimes escalate tensions. A brief pause allows for perspective-taking and a calmer, solution-oriented approach.
  • Big-picture decisions. When stress is high, it’s easy to make decisions based on immediate pressures rather than long-term goals. Giving yourself space to step back ensures that decisions align with your leadership vision.
  • Personal or emotional responses. If you feel personally triggered by a comment, criticism, or situation, take time to process before responding. Self-awareness in these moments can prevent regretful words or actions.

So the next time frustration creeps in, take a breath. Pause before you speak, type, or react. Because more often than not, the best response isn’t the fastest one—it’s the one that comes from a place of clarity, patience, and purpose.

 
© 2025 | Daniel Christian