Google Workspace enables the future of AI-powered work for every business — from workspace.google.com

The following AI capabilities will start rolling out to Google Workspace Business customers today and to Enterprise customers later this month:

  • Get AI assistance in Gmail, Docs, Sheets, Meet, Chat, Vids, and more: Do your best work faster with AI embedded in the tools you use every day. Gemini streamlines your communications by helping you summarize, draft, and find information in your emails, chats, and files. It can be a thought partner and source of inspiration, helping you create professional documents, slides, spreadsheets, and videos from scratch. Gemini can even improve your meetings by taking notes, enhancing your audio and video, and catching you up on the conversation if you join late.
  • Chat with Gemini Advanced, Google’s next-gen AI: Kickstart learning, brainstorming, and planning with the Gemini app on your laptop or mobile device. Gemini Advanced can help you tackle complex projects including coding, research, and data analysis and lets you build Gems, your team of AI experts to help with repeatable or specialized tasks.
  • Unlock the power of NotebookLM Plus: We’re bringing the revolutionary AI research assistant to every employee, to help them make sense of complex topics. Upload sources to get instant insights and Audio Overviews, then share customized notebooks with the team to accelerate their learning and onboarding.

And per Evelyn from the Stay Ahead newsletter (at FlexOS):

Google’s Gemini AI is stepping up its game in Google Workspace, bringing powerful new capabilities to your favorite tools like Gmail, Docs, and Sheets:

  • AI-Powered Summaries: Get concise, AI-generated summaries of long emails and documents so you can focus on what matters most.
  • Smart Reply: Gemini now offers context-aware email replies that feel more natural and tailored to your style.
  • Slide and image generation: Gemini in Slides can help you generate new images, summarize your slides, write and rewrite content, and refer to existing Drive files and/or emails.
  • Automated Data Insights: In Google Sheets, Gemini can help you create a task tracker or conference agenda, spot trends, suggest formulas, and even build charts with simple prompts.
  • Intelligent Drafting: Google Docs now gets a creativity boost, helping you draft reports, proposals, or blog posts with AI suggestions and outlines.
  • Meeting Assistance: Say goodbye to the awkward third-party AI attendees that join your calls to take notes – Gemini can now do that for you natively, with no interruption, no avatar, and no extra attendee. Meet can also automatically generate captions to lower the language barrier.

Evelyn (from FlexOS) also mentions that Copilot is getting enhancements too:

Copilot is now included in Microsoft 365 Personal and Family — from microsoft.com

Per Evelyn:

It’s exactly what we predicted: stand-alone AI apps like note-takers and image generators have had their moment, but as the tech giants step in, they’re bringing these features directly into their ecosystems, making them harder to ignore.


Announcing The Stargate Project — from openai.com

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.


Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.
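For readers who prefer to script this workflow rather than paste prompts into the Claude.ai interface, here is a minimal sketch of running a prompt sequence through the Anthropic Python SDK while keeping the whole conversation as shared context (roughly what a Claude Project provides). The prompt texts, word counts, and model ID below are placeholders, not the authors’ actual prompts.

```python
# Minimal sketch: run a sequence of book-writing prompts with shared context.
# The prompts and model ID are placeholders/assumptions, not from the article.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompts = [
    "Outline a 10-chapter book on <your topic> in about 300 words.",
    "Draft Chapter 1 (about 1,500 words) based on the outline above.",
    "Revise Chapter 1 for clarity and consistency with the outline.",
]

conversation = []  # keeping every turn is the scripted stand-in for a Claude Project
for prompt in prompts:
    conversation.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumption: check current model IDs
        max_tokens=4000,
        messages=conversation,
    )
    reply = response.content[0].text
    conversation.append({"role": "assistant", "content": reply})  # keep for reference
    print(reply[:300], "...\n")
```

Each pass feeds the earlier outputs back in, which is the scripted equivalent of each prompt building on the previous one.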


Adobe’s new AI tool can edit 10,000 images in one click — from theverge.com by Jess Weatherbed
Firefly Bulk Create can automatically remove, replace, or extend image backgrounds in huge batches.

Adobe is launching new generative AI tools that can automate labor-intensive production tasks like editing large batches of images and translating video presentations. The most notable is “Firefly Bulk Create,” an app that allows users to quickly resize up to 10,000 images or replace all of their backgrounds in a single click instead of tediously editing each picture individually.

 


Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts

We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, but it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!).

If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.


Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025

After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:

  1. Break your own AI policies before you implement them.
  2. Fund your failures.
  3. Resist the pilot program. …
  4. Host Anti-Tech Tech Talks
  5. …+ several more tips

While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.



Maria Anderson responded to Clay’s posting with this idea:

Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.



Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That’s where the new Grammarly feature comes in.

The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”


AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.

Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade version called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.


Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.

 

‘Lazy and Mediocre’ HR Team Fired After Manager’s Own CV Gets Auto-Rejected in Seconds, Exposing System Failure — from ibtimes.co.uk by Vinay Patel
The automated system’s error highlights the potential for bias and inefficiency in technology-driven HR practices

An entire HR team was terminated after their manager discovered and confirmed that their system automatically rejected all candidates — including his own application.

The manager wrote in their comment, “Auto rejection systems from HR make me angry.” They explained that while searching for a new employee, their HR department could not find a single qualified candidate in three months. As expected, the suspicious manager decided to investigate.

“I created myself a new email and sent them a modified version of my CV with a fake name to see what was going on with the process,” they wrote. “And guess what, I got auto-rejected. HR didn’t even look at my CV.”

When the manager reported the issue to upper management, “half of the HR department was fired in the following weeks.” A typographical error with significant consequences caused the entire problem.

The manager works in the tech industry and was trying to hire developers. However, HR had set up the system to search for expertise in the wrong development software, one that no longer exists.
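To make the failure mode concrete: a hard required-keyword screen rejects every applicant the moment a single required term is misspelled or obsolete. The keywords and resumes in this short sketch are invented for illustration; they are not from the story above.

```python
# Illustrative only: a naive required-keyword ATS screen with one bad keyword.
REQUIRED_KEYWORDS = {"angulrjs"}  # typo'd / defunct framework name set by HR

def passes_screen(resume_text: str) -> bool:
    """Advance a resume only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

applicants = {
    "qualified developer": "10 years of AngularJS, React, and TypeScript experience",
    "hiring manager's own CV": "Engineering manager; built the team's Angular front end",
}

for name, resume in applicants.items():
    print(f"{name}: {'advance' if passes_screen(resume) else 'auto-reject'}")

# Both candidates are auto-rejected: the single required keyword can never match,
# so no human ever sees a single CV.
```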

From DSC:
Back in 2017, I had survived several rounds of layoffs at the then Calvin College (now Calvin University), but I didn’t survive the layoff of 12 people in the spring of 2017. I hadn’t needed to interview for a new job in quite a while, so boy, did I get a wake-up call when I discovered that Applicant Tracking Systems (ATSs) existed and could be tough to get past. (Also, the old-school outplacement firm that Calvin hired wasn’t much help in dealing with them either.)

I didn’t like these ATSs then, and I still have my concerns about them now. The above article points out that my concerns were/are at least somewhat founded. And if you take the entire day to research and apply for a position — only to get an instant reply back from the ATS — it’s very frustrating and discouraging. 

Plus the ATSs may not pick up on nuances. An experienced human being might be able to see that a candidate’s skills are highly relevant and/or transferable to the position that they’re hiring for. 

Networking is key, of course. But not everyone has been taught about networking, and not everyone gets past the ATS to have their resume viewed by a pair of human eyes. HR, IT, and any other relevant groups need to be very careful when configuring their ATSs.

 

AI Is Unavoidable, Not Inevitable — from marcwatkins.substack.com by Marc Watkins

I had the privilege of moderating a discussion between Josh Eyler and Robert Cummings about the future of AI in education at the University of Mississippi’s recent AI Winter Institute for Teachers. I work alongside both in faculty development here at the University of Mississippi. Josh’s position on AI sparked a great deal of debate on social media:

To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of “it’s very complicated.”

Most importantly, we all deserve some grace here. Dealing with generative AI in education isn’t something any of us asked for. It isn’t normal. It isn’t fixable by purchasing a tool or telling faculty to simply ‘prefer not to’ use AI. It is and will remain unavoidable for virtually every discipline taught at our institutions.

If one good thing happens because of generative AI let it be that it helps us clearly see how truly complicated our existing relationships with machines are now. As painful as this moment is, it might be what we need to help prepare us for a future where machines that mimic reasoning and human emotion refuse to be ignored.


“AI tutoring shows stunning results.”
See the article below.


From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time — from blogs.worldbank.org by Martín E. De Simone, Federico Tiberti, Wuraola Mosuro, Federico Manolio, Maria Barron, and Eliot Dikoru

Learning gains were striking
The learning improvements were striking—about 0.3 standard deviations. To put this into perspective, this is equivalent to nearly two years of typical learning in just six weeks. When we compared these results to a database of education interventions studied through randomized controlled trials in the developing world, our program outperformed 80% of them, including some of the most cost-effective strategies like structured pedagogy and teaching at the right level. This achievement is particularly remarkable given the short duration of the program and the likelihood that our evaluation design underestimated the true impact.

Our evaluation demonstrates the transformative potential of generative AI in classrooms, especially in developing contexts. To our knowledge, this is the first study to assess the impact of generative AI as a virtual tutor in such settings, building on promising evidence from other contexts and formats; for example, on AI in coding classes, AI and learning in one school in Turkey, teaching math with AI (an example through WhatsApp in Ghana), and AI as a homework tutor.

Comments on this article from The Rundown AI:

Why it matters: This represents one of the first rigorous studies showing major real-world impacts in a developing nation. The key appears to be using AI as a complement to teachers rather than a replacement — and results suggest that AI tutoring could help address the global learning crisis, particularly in regions with teacher shortages.


Other items re: AI in our learning ecosystems:

  • Will AI revolutionise marking? — from timeshighereducation.com by Rohim Mohammed
    Artificial intelligence has the potential to improve speed, consistency and detail in feedback for educators grading students’ assignments, writes Rohim Mohammed. Here he lists the pros and cons based on his experience
  • Marty the Robot: Your Classroom’s AI Companion — from rdene915.com by Dr. Rachelle Dené Poth
  • Generative Artificial Intelligence: Cautiously Recognizing Educational Opportunities — from scholarlyteacher.com by Todd Zakrajsek, University of North Carolina at Chapel Hill
  • Personal AI — from michelleweise.substack.com by Dr. Michelle Weise
    “Personalized” Doesn’t Have To Be a Buzzword
    Today, however, is a different kind of moment. GenAI is now rapidly evolving to the point where we may be able to imagine a new way forward. We can begin to imagine solutions truly tailored for each of us as individuals, our own personal AI (pAI). pAI could unify various silos of information to construct far richer and more holistic and dynamic views of ourselves as long-life learners. A pAI could become our own personal career navigator, skills coach, and storytelling agent. Three particular areas emerge when we think about tapping into the richness of our own data:

    • Personalized Learning Pathways & Dynamic Skill Assessment: …
    • Storytelling for Employers:…
    • Ongoing Mentorship and Feedback: …
  • Speak — a language learning app — via The Neuron

 

The Rise of the Heretical Leader — from ditchthattextbook.com; a guest post by Dan Fitzpatrick

Now is the time for visionary leadership in education. The era of artificial intelligence is reshaping the demands on education systems. Rigid policies, outdated curricula, and reliance on obsolete metrics are failing students. A recent survey from Resume Genius found that graduates lack skills in communication, collaboration, and critical thinking. Consequently, there is a growing trend in companies hiring candidates based on skills instead of traditional education or work experience. This underscores the urgent need for educational leaders to prioritize adaptability and innovation in their systems. Educational leaders must embrace a transformative approach to keep pace.

[Heretical leaders] bring courage, empathy, and strategic thinking to reimagine education’s potential. Here are their defining characteristics:

  • Visionary Thinking: They identify bold, innovative paths to progress.
  • Courage to Act: These leaders take calculated risks to overcome resistance and inertia.
  • Relentless Curiosity: They challenge assumptions and seek better alternatives.
  • Empathy for Stakeholders: Understanding the personal impact of change allows them to lead with compassion.
  • Strategic Disruption: Their deliberate actions ensure systemic improvements.

These qualities enable Heretical leaders to reframe challenges as opportunities and drive meaningful change.

From DSC:
Readers of this blog will recognize that I believe visionary leadership is extremely important — in all areas of our society, but especially within our learning ecosystems. Vision trumps data, at least in my mind. There are times when data can be used to support a vision, but having a powerful vision is more lasting and impactful than relying on data to drive the organization.

So while I’d vote for a term other than “heretical leaders,” I get what Dan is saying and I agree with him. Such leaders are going against the grain. They are swimming upstream. They are espousing perspectives that others often don’t buy into (at least initially or for some time).

Such were the leaders who introduced online learning into the K-16 educational systems back in the late ’90s and into the next two+ decades. The growth of online-based learning continues and has helped educate millions of people. Those leaders and the people who worked for such endeavors were going against the grain.

We haven’t seen the end point of online-based learning. I think it will become even more powerful and impactful when AI is used to determine which jobs are opening up and which skills those jobs require, and then to provide a list of sources where one can obtain that knowledge and develop those skills. People will be key in this vision. But so will AI and personalized learning. It will be a collaborative effort.

By the way, I am NOT advocating for using AI to outsource our thinking. Also, having basic facts and background knowledge in a domain is critically important, especially to use AI effectively. But we should be teaching students about AI (as we learn more about it ourselves). We should be working collaboratively with our students to understand how best to use AI. It’s their futures at stake.


 


ChatGPT can now handle reminders and to-dos — from theverge.com by Kylie Robison
The AI chatbot can now set reminders and perform recurring actions.

OpenAI is launching a new beta feature in ChatGPT called Tasks that lets users schedule future actions and reminders.

The feature, which is rolling out to Plus, Team, and Pro subscribers starting today, is an attempt to make the chatbot into something closer to a traditional digital assistant — think Google Assistant or Siri but with ChatGPT’s more advanced language capabilities.


ChatGPT gets proactive with ‘Tasks’ — from therundown.ai by Rowan Cheung
PLUS: Minimax’s LLM context-length breakthrough

The Rundown: OpenAI is rolling out Tasks, a new ChatGPT beta feature that allows users to schedule reminders and recurring actions, marking the company’s first step into agentic AI capabilities.

Why it matters: While reminders aren’t groundbreaking, Tasks lays the groundwork for incorporating agentic abilities into ChatGPT, which will likely gain value once integrated with other features like tool or computer use. With ‘Operator’ also rumored to be coming this month, all signs are pointing towards 2025 being the year of the AI agent.


 

NVIDIA Partners With Industry Leaders to Advance Genomics, Drug Discovery and Healthcare — from nvidianews.nvidia.com
IQVIA, Illumina, Mayo Clinic and Arc Institute Harness NVIDIA AI and Accelerated Computing to Transform $10 Trillion Healthcare and Life Sciences Industry

J.P. Morgan Healthcare Conference—NVIDIA today announced new partnerships to transform the $10 trillion healthcare and life sciences industry by accelerating drug discovery, enhancing genomic research and pioneering advanced healthcare services with agentic and generative AI.

The convergence of AI, accelerated computing and biological data is turning healthcare into the largest technology industry. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using the latest NVIDIA technologies to develop solutions that will help advance human health.

These solutions include AI agents that can speed clinical trials by reducing administrative burden, AI models that learn from biology instruments to advance drug discovery and digital pathology, and physical AI robots for surgery, patient monitoring and operations. AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.


AI could transform health care, but will it live up to the hype? — from sciencenews.org by Meghan Rosen and Tina Hesman Saey
The technology has the potential to improve lives, but hurdles and questions remain

True progress in transforming health care will require solutions across the political, scientific and medical sectors. But new forms of artificial intelligence have the potential to help. Innovators are racing to deploy AI technologies to make health care more effective, equitable and humane.

AI could spot cancer early, design lifesaving drugs, assist doctors in surgery and even peer into people’s futures to predict and prevent disease. The potential to help people live longer, healthier lives is vast. But physicians and researchers must overcome a legion of challenges to harness AI’s potential.


HHS publishes AI Strategic Plan, with guidance for healthcare, public health, human services — from healthcareitnews.com by Mike Miliard
The framework explores ways to spur innovation and adoption, enable more trustworthy model development, promote access and foster AI-empowered healthcare workforces.

The U.S. Department of Health and Human Services has issued its HHS Artificial Intelligence Strategic Plan, which the agency says will “set in motion a coordinated public-private approach to improving the quality, safety, efficiency, accessibility, equitability and outcomes in health and human services through the innovative, safe, and responsible use of AI.”


How Journalism Will Adapt in the Age of AI — from bloomberg.com/ by John Micklethwait
The news business is facing its next enormous challenge. Here are eight reasons to be both optimistic and paranoid.

AI promises to get under the hood of our industry — to change the way we write and edit stories. It will challenge us, just like it is challenging other knowledge workers like lawyers, scriptwriters and accountants.

Most journalists love AI when it helps them uncover Iranian oil smuggling. Investigative journalism is not hard to sell to a newsroom. The second example is a little harder. Over the past month we have started testing AI-driven summaries for some longer stories on the Bloomberg Terminal.

The software reads the story and produces three bullet points. Customers like it — they can quickly see what any story is about. Journalists are more suspicious. Reporters worry that people will just read the summary rather than their story.

So, looking into our laboratory, what do I think will happen in the Age of AI? Here are eight predictions.


‘IT will become the HR of AI agents’, says Nvidia’s CEO: How should organisations respond? — from hrsea.economictimes.indiatimes.com by Vanshika Rastogi

Nvidia CEO Jensen Huang’s recent statement that “IT will become the HR of AI agents” continues to spark debate about IT’s evolving role in managing AI systems. As AI tools become integral, IT teams will take on tasks like training and optimising AI agents, blending technical and HR responsibilities. So, how should organisations respond to this transformation?

 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to making policy that make institutions hesitant to or delay their ability to produce it. Policy (as opposed to guidance) is much more likely to include a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their areas but for the other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true with every policy. It does not often come fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

10 Higher Ed Trends to Watch In 2025 — from insidetrack.org

While “polarization” was Merriam-Webster’s word of the year for 2024, we have some early frontrunners for 2025 — especially when it comes to higher education. Change. Agility. Uncertainty. Flexibility. As we take a deep dive into the trends on tap for higher education in the coming year, it’s important to note that, with an incoming administration who has vowed to shake things up, the current postsecondary system could be turned on its head. With that in mind, we wade into our yearly look at the topics and trends that will be making headlines — and making waves — in the year ahead.

#Highereducation #learningecosystems #change #trends #businessmodels #onlinelearning #AI #DEI #skillsbasedlearning #skills #alternatives #LearningandEmploymentRecords #LERs #valueofhighereducation #GenAI

 

The Best of AI 2024: Top Winners Across 9 Categories — from aiwithallie.beehiiv.com by Allie Miller
2025 will be our weirdest year in AI yet. Read this so you’re more prepared.


Top AI Tools of 2024 — from ai-supremacy.com by Michael Spencer (behind a paywall)
Which AI tools stood out for me in 2024? My list.

Memorable AI Tools of 2024
Categories included:

  • Useful
  • Popular
  • Captures the zeitgeist of AI product innovation
  • Fun to try
  • Personally satisfying
  1. NotebookLM
  2. Perplexity
  3. Claude

New “best” AI tool? Really? — from theneurondaily.com by Noah and Grant
PLUS: A free workaround to the “best” new AI…

What is Google’s Deep Research tool, and is it really “the best” AI research tool out there?

Here’s how it works: Think of Deep Research as a research team that can simultaneously analyze 50+ websites, compile findings, and create comprehensive reports—complete with citations.

Unlike asking ChatGPT to research for you, Deep Research shows you its research plan before executing, letting you edit the approach to get exactly what you need.

It’s currently free for the first month (though it’ll eventually be $20/month) when bundled with Gemini Advanced. Then again, Perplexity is always free…just saying.

We couldn’t just take J-Cal’s word for it, so we rounded up some other takes:

Our take: We then compared Perplexity, ChatGPT Search, and Deep Research (which we’re calling DR, or “The Docta,” for short) on the robot capabilities revealed at CES.


An excerpt from today’s Morning Edition from Bloomberg

Global banks will cut as many as 200,000 jobs in the next three to five years—a net 3% of the workforce—as AI takes on more tasks, according to a Bloomberg Intelligence survey. Back-office, middle-office, and operations roles are most at risk. A reminder that Citi said last year that AI is likely to replace more jobs in banking than in any other sector. JPMorgan had a more optimistic view (from an employee perspective, at any rate), saying its AI rollout has augmented, not replaced, jobs so far.


 

 

How Generative AI Is Shaping the Future of Law: Challenges and Trends in the Legal Profession — from thomsonreuters.com by Raghu Ramanathan

With this in mind, Thomson Reuters and Lexpert hosted a panel featuring law firm leaders and industry experts discussing the challenges and trends around the use of generative AI in the legal profession. Below are insights from an engaging and informative discussion.

Sections included:

  • Lawyers are excited to implement generative AI solutions
  • Unfounded concerns about robot lawyers
  • Changing billing practices and elevating services
  • Managing and mitigating risks

Adopting Legal Technology Responsibly — from lexology.com by Sacha Kirk

Here are fundamental principles to guide the process:

  1. Start with a Needs Assessment…
  2. Engage Stakeholders Early…
  3. Choose Scalable Solutions…
  4. Prioritise Security and Compliance…
  5. Plan for Change Management…

Modernizing Legal Workflows: The Role Of AI, Automation, And Strategic Partnerships — from abovethelaw.com by Scott Angelo, Jared Gullbergh, Nancy Griffing, and Michael Owen Hill
A roadmap for law firms.  

Angelo added, “We really doubled down on AI because it was just so new — not just to the legal industry, but to the world.” Under his leadership, Buchanan’s efforts to embrace AI have garnered significant attention, earning the firm recognition as one of the “Best of the Best for Generative AI” in the 2024 BTI “Leading Edge Law Firms” survey.

This acknowledgment reflects more than ambition; it highlights the firm’s ability to translate innovative ideas into actionable results. By focusing on collaboration and leveraging technology to address client demands, Buchanan has set a benchmark for what is possible in legal technology innovation.

The collective team followed these essential steps for app development:

  • Identify and Prioritize Use Cases…
  • Define App Requirements…
  • Leverage Pre-Built Studio Apps and Templates…
  • Incorporate AI and Automation…
  • Test and Iterate…
  • Deploy and Train…
  • Measure Success…

Navigating Generative AI in Legal Practice — from linkedin.com by Colin Levy

The rise of artificial intelligence (AI), particularly generative AI, has introduced transformative potential to legal practice. For in-house counsel, managing legal risk while driving operational efficiency increasingly involves navigating AI’s opportunities and challenges. While AI offers remarkable tools for automation and data-driven decision-making, it is essential to approach these tools as complementary to human judgment, not replacements. Effective AI adoption requires balancing its efficiencies with a commitment to ethical, nuanced legal practice.

Here are a few ways in which this arises:

 

Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

In other words, individual learning leaders need to obtain information from surveys and studies that are directly useful in their curriculum planning. This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question: What can enterprise L&D teams do to improve AI fluency in their organizations?


The Future of Workplace Learning: Adaptive Strategies for Navigating Change — from learningguild.com by Rachel Rosenfeldt

The Importance of Building a ‘Change Muscle’
The ability to test and learn, pivot quickly, and embrace change is an increasingly foundational skill that all employees, no matter the level of experience or seniority, need in 2025 and beyond. Adaptable organizations significantly outperform more change-averse peers on nearly every metric, ranging from revenue growth to employee engagement. In other words, having agility and adaptability embedded in your culture pays dividends. Although these terms are often used interchangeably, they represent distinct yet interconnected aspects of organizational success:

  • Agility refers to the ability to swiftly and efficiently respond to immediate challenges or opportunities. It’s about being nimble and proactive, making quick decisions, and adjusting to navigate short-term obstacles.
  • Adaptability is a broader concept that encompasses the capacity to evolve and thrive in the face of long-term shifts in the environment. It’s about being resilient and flexible by modifying strategies and structures to align with fundamental changes in the market or industry.

And a quick comment from DSC:


Addressing Skills Gaps in Enterprise L&D: A High-Level Overview — from learningguild.com by Bill Brandon

Employees’ skills and abilities must match the skills and abilities required for their jobs; when they do, organizational performance and productivity improve.

Skills gaps occur when there are mismatches between employees’ skills and capabilities and the skills and capabilities needed for their work. As technology and work become more complex, identifying and correcting skills gaps become essential to optimizing employee performance.

This article discusses various methods involving skills inference and predictive analytics in addition to traditional methods to pinpoint and prevent skills gaps.


A Practical Framework for Microlearning Success: A Guide for Learning Leaders — by Robyn A. Defelice, PhD

Another year, another opportunity to bring microlearning into your performance and talent development strategy! This is especially appealing as more and more organizations strive to deliver training in ways that meet the fast-paced needs of their employees.

However, implementing a microlearning strategy that aligns with organizational outcomes and sustains performance is no small feat. Learning and Development (L&D) leaders often grapple with questions like: Where do we start? How do we ensure our efforts are effective? And what factors should we evaluate?

The Microlearning Effectiveness (MLE) Framework offers a practical approach to addressing these challenges. Instead of rigid rules, the framework acts as a guide, encouraging leaders to evaluate their efforts against six key components:

  • Goals or measurable outcomes
  • Purpose
  • Potential
  • Evaluation
  • Implementation
  • Distributed practice
 

NVIDIA’s Apple moment?! — from theneurondaily.com by Noah Edelman and Grant Harvey
PLUS: How to level up your AI workflows for 2025…

NVIDIA wants to put an AI supercomputer on your desk (and it only costs $3,000).

And last night at CES 2025, Jensen Huang announced phase two of this plan: Project DIGITS, a $3K personal AI supercomputer that runs 200B-parameter models from your desk. Guess we now know why Apple recently developed an NVIDIA allergy.

But NVIDIA doesn’t just want its “Apple PC moment”… it also wants its OpenAI moment. NVIDIA also announced Cosmos, a platform for building physical AI (think: robots and self-driving cars)—which Jensen Huang calls “the ChatGPT moment for robotics.”


Jensen Huang’s latest CES speech: AI Agents are expected to become the next robotics industry, with a scale reaching trillions of dollars — from chaincatcher.com

NVIDIA is bringing AI from the cloud to personal devices and enterprises, covering all computing needs from developers to ordinary users.

At CES 2025, which opened this morning, NVIDIA founder and CEO Jensen Huang delivered a milestone keynote speech revealing the future of AI and computing. From the core concept of tokens in generative AI to the launch of the new Blackwell-architecture GPU and an AI-driven digital future, the speech will have a profound, cross-disciplinary impact on the entire industry.

Also see:


NVIDIA Project DIGITS: The World’s Smallest AI Supercomputer. — from nvidia.com
A Grace Blackwell AI Supercomputer on your desk.


From DSC:
I’m posting this next item (involving Samsung) as it relates to how TVs continue to change within our living rooms. AI is finding its way into our TVs…the ramifications of this remain to be seen.


OpenAI ‘now knows how to build AGI’ — from therundown.ai by Rowan Cheung
PLUS: AI phishing achieves alarming success rates

The Rundown: Samsung revealed its new “AI for All” tagline at CES 2025, introducing a comprehensive suite of new AI features and products across its entire ecosystem — including new AI-powered TVs, appliances, PCs, and more.

The details:

  • Vision AI brings features like real-time translation, the ability to adapt to user preferences, AI upscaling, and instant content summaries to Samsung TVs.
  • Several of Samsung’s new Smart TVs will also have Microsoft Copilot built in, and the company teased a potential AI partnership with Google.
  • Samsung also announced the new line of Galaxy Book5 AI PCs, with new capabilities like AI-powered search and photo editing.
  • AI is also being infused into Samsung’s laundry appliances, art frames, home security equipment, and other devices within its SmartThings ecosystem.

Why it matters: Samsung’s web of products are getting the AI treatment — and we’re about to be surrounded by AI-infused appliances in every aspect of our lives. The edge will be the ability to sync it all together under one central hub, which could position Samsung as the go-to for the inevitable transition from smart to AI-powered homes.

***

“Samsung sees TVs not as one-directional devices for passive consumption but as interactive, intelligent partners that adapt to your needs,” said SW Yong, President and Head of Visual Display Business at Samsung Electronics. “With Samsung Vision AI, we’re reimagining what screens can do, connecting entertainment, personalization, and lifestyle solutions into one seamless experience to simplify your life.” — from Samsung


Understanding And Preparing For The 7 Levels Of AI Agents — from forbes.com by Douglas B. Laney

The following framework I offer for defining, understanding, and preparing for agentic AI blends foundational work in computer science with insights from cognitive psychology and speculative philosophy. Each of the seven levels represents a step-change in technology, capability, and autonomy. The framework expresses increasing opportunities to innovate, thrive, and transform in a data-fueled and AI-driven digital economy.


The Rise of AI Agents and Data-Driven Decisions — from devprojournal.com by Mike Monocello
Fueled by generative AI and machine learning advancements, we’re witnessing a paradigm shift in how businesses operate and make decisions.

AI Agents Enhance Generative AI’s Impact
Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, predicts a significant leap forward in generative AI. “In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.”


Here’s what nobody is telling you about AI agents in 2025 — from aidisruptor.ai by Alex McFarland
What’s really coming (and how to prepare). 

Everyone’s talking about the potential of AI agents in 2025 (and don’t get me wrong, it’s really significant), but there’s a crucial detail that keeps getting overlooked: the gap between current capabilities and practical reliability.

Here’s the reality check that most predictions miss: AI agents currently operate at about 80% accuracy (according to Microsoft’s AI CEO). Sounds impressive, right? But here’s the thing – for businesses and users to actually trust these systems with meaningful tasks, we need 99% reliability. That’s not just a 19% gap – it’s the difference between an interesting tech demo and a business-critical tool.

This matters because it completely changes how we should think about AI agents in 2025. While major players like Microsoft, Google, and Amazon are pouring billions into development, they’re all facing the same fundamental challenge – making them work reliably enough that you can actually trust them with your business processes.

Think about it this way: Would you trust an assistant who gets things wrong 20% of the time? Probably not. But would you trust one who makes a mistake only 1% of the time, especially if they could handle repetitive tasks across your entire workflow? That’s a completely different conversation.
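One way to see why that last stretch from 80% to 99% matters so much: errors compound across multi-step workflows. The per-step accuracies and step counts below are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch: end-to-end success of a multi-step agent workflow,
# assuming independent steps. All numbers are assumptions for illustration.
def workflow_success_rate(per_step_accuracy: float, num_steps: int) -> float:
    """Probability the agent completes every step without an error."""
    return per_step_accuracy ** num_steps

for accuracy in (0.80, 0.99):
    for steps in (1, 5, 10):
        rate = workflow_success_rate(accuracy, steps)
        print(f"{accuracy:.0%} per step over {steps:2d} steps -> {rate:.1%} end-to-end")

# 80% per step falls to roughly 33% over 5 steps and 11% over 10,
# while 99% per step stays above 90% -- which is why the difference feels
# much bigger than 19 percentage points once an agent runs a whole workflow.
```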


Why 2025 will be the year of AI orchestration — from venturebeat.com by Emilia David|

In the tech world, we like to label periods as the year of (insert milestone here). This past year (2024) was a year of broader experimentation in AI and, of course, agentic use cases.

As 2025 opens, VentureBeat spoke to industry analysts and IT decision-makers to see what the year might bring. For many, 2025 will be the year of agents, when all the pilot programs, experiments and new AI use cases converge into something resembling a return on investment.

In addition, the experts VentureBeat spoke to see 2025 as the year AI orchestration will play a bigger role in the enterprise. Organizations plan to make management of AI applications and agents much more straightforward.

Here are some themes we expect to see more in 2025.


Predictions For AI In 2025: Entrepreneurs Look Ahead — from forbes.com by Jodie Cook

AI agents take charge
Jérémy Grandillon, CEO of TC9 – AI Allbound Agency, said “Today, AI can do a lot, but we don’t trust it to take actions on our behalf. This will change in 2025. Be ready to ask your AI assistant to book an Uber ride for you.” Start small with one agent handling one task. Build up to an army.

“If 2024 was agents everywhere, then 2025 will be about bringing those agents together in networks and systems,” said Nicholas Holland, vice president of AI at Hubspot. “Micro agents working together to accomplish larger bodies of work, and marketplaces where humans can ‘hire’ agents to work alongside them in hybrid teams. Before long, we’ll be saying, ‘there’s an agent for that.'”

Voice becomes default
Stop typing and start talking. Adam Biddlecombe, head of brand at Mindstream, predicts a shift in how we interact with AI. “2025 will be the year that people start talking with AI,” he said. “The majority of people interact with ChatGPT and other tools in the text format, and a lot of emphasis is put on prompting skills.”

Biddlecombe believes, “With Apple’s ChatGPT integration for Siri, millions of people will start talking to ChatGPT. This will make AI so much more accessible and people will start to use it for very simple queries.”

Get ready for the next wave of advancements in AI. AGI arrives early, AI agents take charge, and voice becomes the norm. Video creation gets easy, AI embeds everywhere, and one-person billion-dollar companies emerge.



These 4 graphs show where AI is already impacting jobs — from fastcompany.com by Brandon Tucker
With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce. 

To better understand the types of roles that AI is impacting, ZoomInfo’s research team looked to its proprietary database of professional contacts for answers. The platform, which detects more than 1.5 million personnel changes per day, revealed a dramatic increase in AI-related job titles since 2022. With a 200% increase in two years, the data paints a vivid picture of how AI technology is reshaping the workforce.

Why does this shift in AI titles matter for every industry?

 

AI educators are coming to this school – and it’s part of a trend — from techradar.com by Eric Hal Schwartz
Two hours of lessons, zero teachers

  • An Arizona charter school will use AI instead of human teachers for two hours a day on academic lessons.
  • The AI will customize lessons in real-time to match each student’s needs.
  • The company has only tested this idea at private schools before but claims it hugely increases student academic success.

One school in Arizona is trying out a new educational model built around AI and a two-hour school day. When Arizona’s Unbound Academy opens, the only teachers will be artificial intelligence algorithms, making it a perfect utopia or a dystopia, depending on your point of view.


AI in Instructional Design: reflections on 2024 & predictions for 2025 — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, four new year’s resolutions for the AI-savvy instructional designer.


Debating About AI: A Free Comprehensive Guide to the Issues — from stefanbauschard.substack.com by Stefan Bauschard

In order to encourage and facilitate debate on key controversies related to AI, I put together this free 130+ page guide to the main arguments and ideas related to the controversies.


Universities need to step up their AGI game — from futureofbeinghuman.com by Andrew Maynard
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions

And because of this, I think there’s a unique opportunity for universities (research universities in particular) to up their game and play a leadership role in navigating the coming advanced AI transition.

Of course, there are already a number of respected university-based initiatives that are working on parts of the challenge. Stanford HAI (Human-centered Artificial Intelligence) is one that stands out, as does the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Center for Governance of AI at the University of Oxford. But these and other initiatives are barely scratching the surface of what is needed to help successfully navigate advanced AI transitions.

If universities are to be leaders rather than bystanders in ensuring human flourishing in an age of AI, there’s an urgent need for bolder and more creative forward-looking initiatives that support research, teaching, thought leadership, and knowledge mobilization, at the intersection of advanced AI and all aspects of what it means to thrive and grow as a species.


 

 

How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein

Ever since a revolutionary new version of ChatGPT became operable in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.

Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.

Here are the five most popular stories that Education Week published in 2024 about AI in schools.


What’s next with AI in higher education? — from msn.com by Science X Staff

Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:

1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change


5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns

 
© 2025 | Daniel Christian