Google Workspace enables the future of AI-powered work for every business — from workspace.google.com

The following AI capabilities will start rolling out to Google Workspace Business customers today and to Enterprise customers later this month:

  • Get AI assistance in Gmail, Docs, Sheets, Meet, Chat, Vids, and more: Do your best work faster with AI embedded in the tools you use every day. Gemini streamlines your communications by helping you summarize, draft, and find information in your emails, chats, and files. It can be a thought partner and source of inspiration, helping you create professional documents, slides, spreadsheets, and videos from scratch. Gemini can even improve your meetings by taking notes, enhancing your audio and video, and catching you up on the conversation if you join late.
  • Chat with Gemini Advanced, Google’s next-gen AI: Kickstart learning, brainstorming, and planning with the Gemini app on your laptop or mobile device. Gemini Advanced can help you tackle complex projects, including coding, research, and data analysis, and lets you build Gems, a team of AI experts to help with repeatable or specialized tasks.
  • Unlock the power of NotebookLM Plus: We’re bringing the revolutionary AI research assistant to every employee to help them make sense of complex topics. Upload sources to get instant insights and Audio Overviews, then share customized notebooks with the team to accelerate their learning and onboarding.

And per Evelyn from the Stay Ahead newsletter (at FlexOS):

Google’s Gemini AI is stepping up its game in Google Workspace, bringing powerful new capabilities to your favorite tools like Gmail, Docs, and Sheets:

  • AI-Powered Summaries: Get concise, AI-generated summaries of long emails and documents so you can focus on what matters most.
  • Smart Reply: Gemini now offers context-aware email replies that feel more natural and tailored to your style.
  • Slides and image generation: Gemini in Slides can help you generate new images, summarize your slides, write and rewrite content, and refer to existing Drive files and/or emails.
  • Automated Data Insights: In Google Sheets, Gemini helps you create a task tracker or conference agenda, spot trends, suggest formulas, and even build charts with simple prompts.
  • Intelligent Drafting: Google Docs now gets a creativity boost, helping you draft reports, proposals, or blog posts with AI suggestions and outlines.
  • Meeting Assistance: Say goodbye to awkward AI attendees joining your calls to take notes; Gemini can now do that natively for you, with no interruption, no avatar, and no extra attendee. Meet can now also automatically generate captions to lower the language barrier.

Evelyn (from FlexOS) also mentions that Copilot is getting enhancements too:

Copilot is now included in Microsoft 365 Personal and Family — from microsoft.com

Per Evelyn:

It’s exactly what we predicted: stand-alone AI apps like note-takers and image generators have had their moment, but as the tech giants step in, they’re bringing these features directly into their ecosystems, making them harder to ignore.


Announcing The Stargate Project — from openai.com

The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.


Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.


Adobe’s new AI tool can edit 10,000 images in one click — from theverge.com by Jess Weatherbed
Firefly Bulk Create can automatically remove, replace, or extend image backgrounds in huge batches.

Adobe is launching new generative AI tools that can automate labor-intensive production tasks like editing large batches of images and translating video presentations. The most notable is “Firefly Bulk Create,” an app that allows users to quickly resize up to 10,000 images or replace all of their backgrounds in a single click instead of tediously editing each picture individually.

 



Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts

We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)

If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.


Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025

After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:

  1. Break your own AI policies before you implement them.
  2. Fund your failures.
  3. Resist the pilot program. …
  4. Host Anti-Tech Tech Talks
  5. …+ several more tips

While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.



Maria Anderson responded to Clay’s posting with this idea:

Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive follow-up activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.



Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That’s where the new Grammarly feature comes in.

The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”


AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.

Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade version called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.


Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.

 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t remain the case, so we encourage everyone to explore and test it now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google DeepMind’s new AI tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, Ph.D.
Why institutional policies are slow to emerge in higher education

There are several challenges that make institutions hesitant to produce policy, or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their own areas but for other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s true of nearly every policy: it rarely comes fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

10 Higher Ed Trends to Watch In 2025 — from insidetrack.org

While “polarization” was Merriam-Webster’s word of the year for 2024, we have some early frontrunners for 2025 — especially when it comes to higher education. Change. Agility. Uncertainty. Flexibility. As we take a deep dive into the trends on tap for higher education in the coming year, it’s important to note that, with an incoming administration that has vowed to shake things up, the current postsecondary system could be turned on its head. With that in mind, we wade into our yearly look at the topics and trends that will be making headlines — and making waves — in the year ahead.

#Highereducation #learningecosystems #change #trends #businessmodels #onlinelearning #AI #DEI #skillsbasedlearning #skills #alternatives #LearningandEmploymentRecords #LERs #valueofhighereducation #GenAI

 

How Generative AI Is Shaping the Future of Law: Challenges and Trends in the Legal Profession — from thomsonreuters.com by Raghu Ramanathan

With this in mind, Thomson Reuters and Lexpert hosted a panel featuring law firm leaders and industry experts discussing the challenges and trends around the use of generative AI in the legal profession. Below are insights from an engaging and informative discussion.

Sections included:

  • Lawyers are excited to implement generative AI solutions
  • Unfounded concerns about robot lawyers
  • Changing billing practices and elevating services
  • Managing and mitigating risks

Adopting Legal Technology Responsibly — from lexology.com by Sacha Kirk

Here are fundamental principles to guide the process:

  1. Start with a Needs Assessment…
  2. Engage Stakeholders Early…
  3. Choose Scalable Solutions…
  4. Prioritise Security and Compliance…
  5. Plan for Change Management…

Modernizing Legal Workflows: The Role Of AI, Automation, And Strategic Partnerships — from abovethelaw.com by Scott Angelo, Jared Gullbergh, Nancy Griffing, and Michael Owen Hill
A roadmap for law firms.  

Angelo added, “We really doubled down on AI because it was just so new — not just to the legal industry, but to the world.” Under his leadership, Buchanan’s efforts to embrace AI have garnered significant attention, earning the firm recognition as one of the “Best of the Best for Generative AI” in the 2024 BTI “Leading Edge Law Firms” survey.

This acknowledgment reflects more than ambition; it highlights the firm’s ability to translate innovative ideas into actionable results. By focusing on collaboration and leveraging technology to address client demands, Buchanan has set a benchmark for what is possible in legal technology innovation.

The collective team followed these essential steps for app development:

  • Identify and Prioritize Use Cases…
  • Define App Requirements…
  • Leverage Pre-Built Studio Apps and Templates…
  • Incorporate AI and Automation…
  • Test and Iterate…
  • Deploy and Train…
  • Measure Success…

Navigating Generative AI in Legal Practice — from linkedin.com by Colin Levy

The rise of artificial intelligence (AI), particularly generative AI, has introduced transformative potential to legal practice. For in-house counsel, managing legal risk while driving operational efficiency increasingly involves navigating AI’s opportunities and challenges. While AI offers remarkable tools for automation and data-driven decision-making, it is essential to approach these tools as complementary to human judgment, not replacements. Effective AI adoption requires balancing its efficiencies with a commitment to ethical, nuanced legal practice.

Here are a few ways in which this arises:

 

Tech Trends 2025 — from deloitte.com by Deloitte Insights
In Deloitte’s 16th annual Tech Trends report, AI is the common thread of nearly every trend. Moving forward, it will be part of the substructure of everything we do.

We propose that the future of technology isn’t so much about more AI as it is about ubiquitous AI. We expect that, going forward, AI will become so fundamentally woven into the fabric of our lives that it’s everywhere, and so foundational that we stop noticing it.

AI will eventually follow a similar path, becoming so ubiquitous that it will be a part of the unseen substructure of everything we do, and we eventually won’t even know it’s there. It will quietly hum along in the background, optimizing traffic in our cities, personalizing our health care, and creating adaptive and accessible learning paths in education. We won’t “use” AI. We’ll just experience a world where things work smarter, faster, and more intuitively—like magic, but grounded in algorithms. We expect that it will provide a foundation for business and personal growth while also adapting and sustaining itself over time.

Nowhere is this AI-infused future more evident than in this year’s Tech Trends report, which each year explores emerging trends across the six macro forces of information technology (figure 1). Half of the trends that we’ve chronicled are elevating forces—interaction, information, and computation—that underpin innovation and growth. The other half—the grounding forces of the business of technology, cyber and trust, and core modernization—help enterprises seamlessly operate while they grow.

 

The State of Flexible Work: Statistics from The Flex Index — from flexindex.com

Flex Report Q4 2024
Hybrid and Remote Work by the Numbers

And you thought return to office policy was settled! For a while, it looked like 2-3 days per week in the office would be the future of work in America.

Yet this quarter has brought significant changes to the landscape. Major companies like Amazon, Dell, and The Washington Post announced their plans for a full return to office. Then came a shift in the political atmosphere, with Trump’s victory and potential incoming changes requiring full-time office work for government employees.

These developments raise important questions about where workplace flexibility is headed. Are we witnessing the beginning of a broader shift back to Full Time In Office? Is the era of fully flexible work coming to an end? Or is this simply another evolution in how companies structure their workplace policies?

In this report, we dig into US-wide trends to see if the high-profile shifts toward Full Time In Office reflect broader market movement or just isolated cases. We examine how different industries are approaching flexibility, from Technology’s continued embrace to the challenges faced by sectors dependent on physical presence. Plus, we explore the divide in how companies of different sizes approach workplace flexibility. Are we truly heading back to the office full time, or is the future of work more nuanced than the headlines suggest?

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for/to you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership enables Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on-demand and comprehensive developer solutions to build, deploy and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. Beneath the surface, however, deploying these systems comes with hidden challenges, particularly concerning ethics, fairness, and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, showcasing it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics), the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer and suggests additional questions, topics, and more to explore related to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just providing static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says a landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
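The classify-and-count idea described above can be sketched in a few lines. Everything here is illustrative: the rubric labels, the prompt template, and the keyword-based `classify()` stub are invented stand-ins for a real chat-model call, which would send the prompt to an API instead.

```python
# Illustrative sketch of classifying open-ended student work at scale:
# each answer would be sent to a chat model with a short rubric prompt,
# and the returned label is tallied into numbers educators can scan.
# The model call is stubbed out here with simple keyword matching.

from collections import Counter

RUBRIC_PROMPT = (
    "Classify the student answer as one of: "
    "'misconception', 'partial', 'correct'. Reply with the label only.\n"
    "Answer: {answer}"
)

def classify(answer: str) -> str:
    """Stand-in for a chat-model call; a real system would send
    RUBRIC_PROMPT.format(answer=answer) to the model."""
    text = answer.lower()
    if "because" in text and "gravity" in text:
        return "correct"
    if "gravity" in text:
        return "partial"
    return "misconception"

def tally(answers: list[str]) -> dict[str, int]:
    """Turn a pile of free-text answers into counts per rubric label."""
    return dict(Counter(classify(a) for a in answers))

answers = [
    "Objects fall because gravity pulls them toward Earth.",
    "Gravity does something, I think.",
    "Heavy things just want to go down.",
]
print(tally(answers))  # e.g. {'correct': 1, 'partial': 1, 'misconception': 1}
```

The interesting part is the last step: once free-text work becomes labeled counts, it plugs into the same dashboards and analyses educators already use.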


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) include productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 

From DSC:
I opened up a BRAND NEW box of cereal from Post the other day. As I looked down into the package, I realized that it was roughly half full. (This has happened many times before, but it struck me so much this time that I had to take pictures of it and post this item.)
Looks can be deceiving for sure. It looks like I should have been getting a full box of cereal…but no…only about half of the package was full. It’s another example of the shrinkflation of things — which can also be described as people deceptively ripping other people off. 

“As long as I’m earning $$, I don’t care how it impacts others.” <– That’s not me talking, but it’s increasingly the perspective that many Americans have these days. We don’t bother with ethics and morals…how old-fashioned can you get, right? We just want to make as much money as possible and to hell with how our actions/products are impacting others.

Another example from the food industry is one of the companies that I worked for in the 1990’s — Kraft Foods. Kraft has not served peoples’ health well at all. Even when they tried to take noble steps to provide healthier foods, other food executives/companies in the industry wouldn’t hop on board. They just wanted to please Wall Street, not Main Street. So companies like Kraft have contributed to the current situations that we face which involve obesity, diabetes, heart attacks, and other ailments. (Not to mention increased health care costs.) 

The gambling industry doesn’t give a rip about people either. Look out for the consequences.

And the cannabis industry joins the gambling industry...and they’re often right on the doorsteps of universities and colleges.

Bottom line reflection:
There are REAL ramifications when we don’t take Christ’s words/commands to love one another seriously (or even to care about someone at all). We’re experiencing such ramifications EVERY DAY now.

 

How AI is transforming learning for dyslexic students — from eschoolnews.com by Samay Bhojwani, University of Nebraska–Lincoln
As schools continue to adopt AI-driven tools, educators can close the accessibility gap and help dyslexic students thrive

Many traditional methods lack customization and don’t empower students to fully engage with content on their terms. Every dyslexic student experiences challenges differently, so a more personalized approach is essential for fostering comprehension, engagement, and academic growth.

Artificial intelligence is increasingly recognized for its potential to transform educational accessibility. By analyzing individual learning patterns, AI-powered tools can tailor content to meet each student’s specific needs. For dyslexic students, this can mean summarizing complex texts, providing auditory support, or even visually structuring information in ways that aid comprehension.


NotebookLM How-to Guide 2024 — from ai-supremacy.com by Michael Spencer and Alex McFarland
With Audio Version | A popular guide reloaded.

In this guide, I’ll show you:

  1. How to use the new advanced audio customization features
  2. Two specific workflows for synthesizing information (research papers and YouTube videos)
  3. Pro tips for maximizing results with any type of content
  4. Common pitfalls to avoid (learned these the hard way)

The State of Instructional Design 2024: A Field on the Brink of Disruption? — from drphilippahardman.substack.com by Dr. Philippa Hardman
My hot takes from a global survey I ran with Synthesia

As I mentioned on LinkedIn, earlier this week Synthesia published the results of a global survey that we ran together on the state of instructional design in 2024.


Boundless Socratic Learning: Google DeepMind’s Vision for AI That Learns Without Limits — by Giorgio Fazio

Google DeepMind researchers have unveiled a groundbreaking framework called Boundless Socratic Learning (BSL), a paradigm shift in artificial intelligence aimed at enabling systems to self-improve through structured language-based interactions. This approach could mark a pivotal step toward the elusive goal of artificial superintelligence (ASI), where AI systems drive their own development with minimal human input.

The promise of Boundless Socratic Learning lies in its ability to catalyze a shift from human-supervised AI to systems that evolve and improve autonomously. While significant challenges remain, the introduction of this framework represents a step toward the long-term goal of open-ended intelligence, where AI is not just a tool but a partner in discovery.


5 courses to take when starting out a career in Agentic AI — from techloy.com by David Adubiina
This will help you join the early train of experts who are using AI agents to solve real-world problems.

This surge in demand is creating new opportunities for professionals equipped with the right skills. If you’re considering a career in this innovative field, the following five courses will provide a solid foundation when starting a career in Agentic AI.



 

2024-11-22: The Race to the Top: Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level the model itself would be capable enough to be the tool of choice for dangerous work. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.



Should You Still Learn to Code in an A.I. World? — from nytimes.com by
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where the Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills makes a great deal of sense.


 

Skill-Based Training: Embrace the Benefits; Stay Wary of the Hype — from learningguild.com by Paige Yousey

1. Direct job relevance
One of the biggest draws of skill-based training is its direct relevance to employees’ daily roles. By focusing on teaching job-specific skills, this approach helps workers feel immediately empowered to apply what they learn, leading to a quick payoff for both the individual and the organization. Yet, while this tight focus is a major benefit, it’s important to consider some potential drawbacks that could arise from an overly narrow approach.

Be wary of:

  • Overly Narrow Focus: Highly specialized training might leave employees with little room to apply their skills to broader challenges, limiting versatility and growth potential.
  • Risk of Obsolescence: Skills can quickly become outdated, especially in fast-evolving industries. L&D leaders should aim for regular updates to maintain relevance.
  • Neglect of Soft Skills: While technical skills are crucial, ignoring soft skills like communication and problem-solving may lead to a lack of balanced competency.

2. Enhanced job performance…
3. Addresses skill gaps…

…and several more areas to consider


Another item from Paige Yousey

5 Key EdTech Innovations to Watch — from learningguild.com by Paige Yousey

AI-driven course design

Strengths

  • Content creation and updates: AI streamlines the creation of training materials by identifying resource gaps and generating tailored content, while also refreshing existing materials based on industry trends and employee feedback to maintain relevance.
  • Data-driven insights: Use AI tools to provide valuable analytics to inform course development and instructional strategies, helping learning designers identify effective practices and improve overall learning outcomes.
  • Efficiency: Automating repetitive tasks, such as learner assessments and administrative duties, enables L&D professionals to concentrate on developing impactful training programs and fostering learner engagement.

Concerns

  • Limited understanding of context: AI may struggle to understand the specific educational context or the unique needs of diverse learner populations, potentially hindering effectiveness.
  • Oversimplification of learning: AI may reduce complex educational concepts to simple metrics or algorithms, oversimplifying the learning process and neglecting deeper cognitive development.
  • Resistance to change: Learning leaders may face resistance from staff who are skeptical about integrating AI into their training practices.

Also from the Learning Guild, see:

Use Twine to Easily Create Engaging, Immersive Scenario-Based Learning — from learningguild.com by Bill Brandon

Scenario-based learning immerses learners in realistic scenarios that mimic real-world challenges they might face in their roles. These learning experiences are highly relevant and relatable. SBL is active learning. Instead of passively consuming information, learners actively engage with the content by making decisions and solving problems within the scenario. This approach enhances critical thinking and decision-making skills.

SBL can be more effective when storytelling techniques create a narrative that guides learners through the scenario to maintain engagement and make the learning memorable. Learners receive immediate feedback on their decisions and learn from their mistakes. Reflection can deepen their understanding. Branching scenarios add simulated complex decision-making processes and show the outcome of various actions through interactive scenarios where learner choices lead to different outcomes.
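A branching scenario like the ones described above boils down to a small graph of passages and choices. This is an illustrative sketch, not Twine’s own format: the node names and text are invented, and Twine would express the same structure as linked passages.

```python
# A minimal branching-scenario graph: each node holds a prompt and the
# choices that lead to other nodes, so different learner decisions
# produce different outcomes. All names and text here are invented.

SCENARIO = {
    "start": {
        "prompt": "An upset customer emails about a late order. You...",
        "choices": {"apologize and offer a fix": "resolve",
                    "explain company policy first": "escalate"},
    },
    "resolve": {
        "prompt": "The customer thanks you. Scenario complete.",
        "choices": {},
    },
    "escalate": {
        "prompt": "The customer grows angrier and asks for a manager.",
        "choices": {"apologize and offer a fix": "resolve"},
    },
}

def step(node: str, choice: str) -> str:
    """Follow a learner's choice from the current node to the next one."""
    return SCENARIO[node]["choices"][choice]

# One learner path: a poor first decision, then recovery.
path = ["start"]
path.append(step(path[-1], "explain company policy first"))
path.append(step(path[-1], "apologize and offer a fix"))
print(path)  # ['start', 'escalate', 'resolve']
```

The immediate-feedback and learn-from-mistakes qualities come straight out of the graph: a weak choice routes the learner through an extra consequence node before they can recover.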

Embrace the Future: Why L&D Leaders Should Prioritize AI Digital Literacy — from learningguild.com by Dr. Erica McCaig

The role of L&D leaders in AI digital literacy
For L&D leaders, developing AI digital literacy within an organization requires a well-structured curriculum and development plan that equips employees with the knowledge, skills, and ethical grounding needed to thrive in an AI-augmented workplace. This curriculum should encompass a range of competencies that enhance technical understanding and foster a mindset ready for innovation and responsible use of AI. Key areas to focus on include:

  • Understanding AI Fundamentals: …
  • Proficiency with AI Tools: …
  • Ethical Considerations: …
  • Cultivating Critical Thinking: …
 

7 Legal Tech Trends To Watch In 2025 — from lexology.com by Sacha Kirk
Australia, United Kingdom November 25 2024

In-house legal teams are changing from a traditional support function into proactive business enablers. New tools are helping legal departments enhance efficiency, improve compliance, and deliver greater strategic value.

Here’s a look at seven emerging trends that will shape legal tech in 2025 and insights on how in-house teams can capitalise on these innovations.

1. AI Solutions…
2. Regulatory Intelligence Platforms…

7. Self-Service Legal Tools and Knowledge Management
As the demand on in-house legal teams continues to grow, self-service tools are becoming indispensable for managing routine legal tasks. In 2025, these tools are expected to evolve further, enabling employees across the organisation to handle straightforward legal processes independently. Whether it’s accessing pre-approved templates, completing standard agreements, or finding answers to common legal queries, self-service platforms reduce the dependency on legal teams for everyday tasks.

Advanced self-service tools go beyond templates, incorporating intuitive workflows, approval pathways, and built-in guidance to ensure compliance with legal and organisational policies. By empowering business users to manage low-risk matters on their own, these tools free up legal teams to focus on complex and high-value work.
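The triage logic behind such self-service tools can be sketched simply. The request types and value threshold below are invented for illustration; a real platform would encode its own policies and approval pathways.

```python
# Illustrative routing for a self-service legal tool: low-risk, routine
# requests go to a pre-approved template, anything else escalates to the
# legal team. Categories and the threshold are made up for this sketch.

PRE_APPROVED = {"nda", "standard services agreement"}

def route(request_type: str, contract_value: float) -> str:
    """Route a legal request: self-service for known low-risk templates,
    otherwise escalate to the legal team."""
    if request_type.lower() in PRE_APPROVED and contract_value <= 10_000:
        return "self-service template"
    return "legal team review"

print(route("NDA", 5_000))            # self-service template
print(route("NDA", 50_000))           # legal team review
print(route("IP assignment", 1_000))  # legal team review
```

The point of the design is the default: only requests that match both a pre-approved category and a low-risk threshold bypass the legal team, which is what frees lawyers for the complex, high-value work.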


 

 

Trade School Enrollment Surges Post-Pandemic, Outpacing Traditional Universities — from businesswire.com
New Report Highlights Growth in Healthcare and Culinary Arts Programs

CHICAGO–(BUSINESS WIRE)–A new report released today by Validated Insights, a higher education marketing firm, reveals a significant increase in trade school enrollment following the pandemic, with a 4.9% growth from 2020 to 2023. This surge contrasts sharply with a 0.6% decline in university enrollment during the same period, highlighting a growing preference for career-focused education.

The report highlights the diverse landscape of trade schools, with varying enrollment trends across different categories and subtypes. While some sectors face challenges, others, like Culinary Arts and Beauty and Wellness, present significant growth opportunities and shifting student attitudes.


A trend colleges might not want applicants to notice: It’s becoming easier to get in — from hechingerreport.org by Jon Marcus
Despite public perception, and for the first time in decades, acceptance rates are going up

As enrollment in colleges and universities continues to decline — down by more than 2 million students, or 10 percent, in the 10 years ending 2022 — they’re not only casting wider nets. Something else dramatic is happening to the college application process, for the first time in decades:

It’s becoming easier to get in.

Colleges and universities, on average, are admitting a larger proportion of their applicants than they did 20 years ago, new research by the conservative think tank the American Enterprise Institute finds.


 
© 2025 | Daniel Christian