You can now use Deep Research without $200 — from flexos.work


Accelerating scientific breakthroughs with an AI co-scientist — from research.google by Juraj Gottweis and Vivek Natarajan

We introduce AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate the clock speed of scientific and biomedical discoveries.


Now decides next: Generating a new future — from Deloitte.com
Deloitte’s State of Generative AI in the Enterprise Quarter four report

There is a speed limit. GenAI technology continues to advance at incredible speed. However, most organizations are moving at the speed of organizations, not at the speed of technology. No matter how quickly the technology advances—or how hard the companies producing GenAI technology push—organizational change in an enterprise can only happen so fast.

Barriers are evolving. Significant barriers to scaling and value creation are still widespread across key areas. And, over the past year regulatory uncertainty and risk management have risen in organizations’ lists of concerns to address. Also, levels of trust in GenAI are still moderate for the majority of organizations. Even so, with increased customization and accuracy of models—combined with a focus on better governance— adoption of GenAI is becoming more established.

Some uses are outpacing others. Application of GenAI is further along in some business areas than in others in terms of integration, return on investment (ROI) and expectations. The IT function is most mature; cybersecurity, operations, marketing and customer service are also showing strong adoption and results. Organizations reporting higher ROI for their most scaled initiatives are broadly further along in their GenAI journeys.

 

AI in K12: Today’s Breakthroughs and Tomorrow’s Possibilities (webinar)
How AI is Transforming Classrooms Today and What’s Next


Audio-Based Learning 4.0 — from drphilippahardman.substack.com by Dr. Philippa Hardman
A new & powerful way to leverage AI for learning?

At the end of all of this my reflection is that the research paints a pretty exciting picture – audio-based learning isn’t just effective, it’s got some unique superpowers when it comes to boosting comprehension, ramping up engagement, and delivering feedback that really connects with learners.

While audio has been massively under-used as a mode of learning, especially compared to video and text, we’re at an interesting turning point where AI tools are making it easier than ever to tap into audio’s potential as a pedagogical tool.

What’s super interesting is how solid the research backing audio’s effectiveness is, and how well it converges with these new AI capabilities.

From DSC:
I’ve noticed that I don’t learn as well via audio-only events. It can help if visuals are also provided, but I have to watch the cognitive load. My processing can start to get overloaded — to the point that I have to close my eyes and just listen sometimes. But there are people I know who love to listen to audiobooks and prefer to learn that way. They can devour content and process/remember it all. Audio is a nice change of pace at times, but oftentimes I prefer visuals and reading. It needs to be absolutely quiet if I’m tackling some new information/learning.


In Conversation With… Ashton Cousineau — from drphilippahardman.substack.com by Dr. Philippa Hardman
A new video series exploring how L&D professionals are working with AI on the ground



The Learning Research Digest vol. 28 — from learningsciencedigest.substack.com by Dr. Philippa Hardman

Hot Off the Research Press This Month:

  • AI-Infused Learning Design – A structured approach to AI-enhanced assignments using a three-step model for AI integration.
  • Mathematical Dance and Creativity in STEAM – Using AI-powered motion capture to translate dance movements into mathematical models.
  • AI-Generated Instructional Videos – How adaptive AI-powered video learning enhances problem-solving and knowledge retention.
  • Immersive Language Learning with XR & AI – A new framework for integrating AI-driven conversational agents with Extended Reality (XR) for task-based language learning.
  • Decision-Making in Learning Design – A scoping review on how instructional designers navigate complex instructional choices and make data-driven decisions.
  • Interactive E-Books and Engagement – Examining the impact of interactive digital books on student motivation, comprehension, and cognitive engagement.
  • Elevating Practitioner Voices in Instructional Design – A new initiative to amplify instructional designers’ contributions to research and innovation.

Deep Reasoning, Agentic AI & the Continued Rise of Specialised AI Research & Tools for Education — from learningfuturesdigest.substack.com by Dr. Philippa Hardman

Here’s a quick teaser of key developments in the world of AI & learning this month:

  • DeepSeek R-1, OpenAI’s Deep Research & Perplexity’s ‘Deep Research’ are the latest additions to a growing number of “reasoning models” with interesting implications for evidence-based learning design & development.
  • The U.S. Education Dept releases an AI Toolkit and a fresh policy roadmap enabling the adoption of AI in schools.
  • Anthropic releases “Agentic Claude”, another AI agent that clicks, scrolls, and can even successfully complete e-learning courses…
  • Oxford University announces the AIEOU Hub, a research-backed lab to support research on and implementation of AI in education.
  • “AI Agents Everywhere”: A Forbes peek at how agentic AI will handle the “boring bits” of classroom life.
  • [Bias klaxon!] Epiphany AI: My own research leads to the creation of a specialised, “pedagogy first” AI co-pilot for instructional design, marking the continued growth of specialised AI tools designed for specific industries and workflows.

AI is the Perfect Teaching Assistant for Any Educator — from unite.ai by Navi Azaria, CPO at Kaltura

Through my work with leading educational institutions at Kaltura, I’ve seen firsthand how AI agents are rapidly becoming indispensable. These agents alleviate the mounting burdens on educators and provide new generations of tech-savvy students with accessible, personalized learning, giving teachers the support they need to give their students the personalized attention and engagement they deserve.


Learning HQ — from ai-disruptor-hq.notion.site

This HQ includes all of my AI guides, organized by tool/platform. This list is updated each time a new one is released, and outdated guides are removed/replaced over time.



How AI Is Reshaping Teachers’ Jobs — from edweek.org

Artificial intelligence is poised to fundamentally change the job of teaching. AI-powered tools can shave hours off the amount of time teachers spend grading, lesson-planning, and creating materials. AI can also enrich the lessons they deliver in the classroom and help them meet the varied needs of all students. And it can even help bolster teachers’ own professional growth and development.

Despite all the promise of AI, though, experts still urge caution as the technology continues to evolve. Ethical questions and practical concerns are bubbling to the surface, and not all teachers feel prepared to effectively and safely use AI.

In this special report, see how early-adopter teachers are using AI tools to transform their daily work, tackle some of the roadblocks to expanded use of the technology, and understand what’s on the horizon for the teaching profession in the age of artificial intelligence.

 

The Anthropic Economic Index — from anthropic.com; via George Siemens

In the coming years, AI systems will have a major impact on the ways people work. For that reason, we’re launching the Anthropic Economic Index, an initiative aimed at understanding AI’s effects on labor markets and the economy over time.

The Index’s initial report provides first-of-its-kind data and analysis based on millions of anonymized conversations on Claude.ai, revealing the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy.

We’re also open sourcing the dataset used for this analysis, so researchers can build on and extend our findings.

 

Half A Million Students Given ChatGPT As CSU System Makes AI History — from forbes.com by Dan Fitzpatrick

The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date.

The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally.

We’re still in the early stages of AI adoption in education, and it is critical that the entire ecosystem—education systems, technologists, educators, and governments—work together to ensure that all students globally have access to AI and develop the skills to use it responsibly.

Leah Belsky, VP and general manager of education at OpenAI.




HOW educators can use GenAI – where to start and how to progress — from aliciabankhofer.substack.com by Alicia Bankhofer
Part 3 of my series: Teaching and Learning in the AI Age

As you read through these use cases, you’ll notice that each one addresses multiple tasks from our list above.

1. Researching a Topic for a Lesson
2. Creating Tasks for Practice
3. Creating Sample Answers
4. Generating Ideas
5. Designing Lesson Plans
6. Creating Tests
7. Using AI in Virtual Classrooms
8. Creating Images
9. Creating Worksheets
10. Correction and Feedback


 

Also see:

Introducing deep research — from openai.com
An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next.

[On 2/2/25 we launched] deep research in ChatGPT, a new agentic capability that conducts multi-step research on the internet for complex tasks. It accomplishes in tens of minutes what would take a human many hours.

Deep research is OpenAI’s next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst.

Comments/information per The Rundown AI:
The Rundown: OpenAI just launched Deep Research, a new ChatGPT feature that conducts extensive web research on complex topics and delivers detailed reports with citations in under 30 minutes.

The details:

  • The system uses a specialized version of o3 to analyze text, images, and PDFs across multiple sources, producing comprehensive research summaries.
  • Initial access is limited to Pro subscribers ($200/mo) with 100 queries/month, but if safety metrics remain stable, it will expand to Plus and Team users within weeks.
  • Research tasks take between 5-30 minutes to complete, with users receiving a list of clarifying questions to start and notifications when results are ready.
  • Deep Research achieved 26.6% on Humanity’s Last Exam, significantly outperforming other AI models like Gemini Thinking (6.2%) and GPT-4o (3.3%).

Why it matters: ChatGPT excels at quick, instant answers, but Deep Research represents the first major consumer attempt at tackling complex tasks that take humans days. Combined with the release of Operator, the landscape is shifting towards longer thinking with autonomous actions — and better results to show for it.

Also see:

The End of Search, The Beginning of OpenAI’s Deep Research — from theaivalley.com by Barsee

The quality of citations is also genuinely advanced. Unlike traditional AI-generated sources, which are prone to hallucinations, Deep Research provides legitimate academic references. Clicking a citation often takes users directly to the relevant highlighted text.

In a demo, the agent generated a comprehensive report on iOS and Android app market trends, showcasing its ability to tackle intricate subjects with accuracy.


Top 13 AI insights — from theneurondaily.com

Which links to and discusses Andrej Karpathy’s video at:


This is a general audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their “psychology”, and how to get the best use of them in practical applications. I have one “Intro to LLMs” video already from ~a year ago, but that is just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.

 

Five things to know before you launch a research podcast — from timeshighereducation.com by David Allan  and Andrew Murray
Starting a podcast can open up your research to a new audience. David Allan and Andrew Murray show how

Launching a podcast isn’t necessarily difficult. Sustaining it, on the other hand, is. You’re entering a crowded market – it’s estimated that there are more than 4 million podcasts – and audience share is far from equal. An alarmingly high number fail to make it past their third episode before being scrapped, and the vast majority put out fewer than 20 episodes.

Despite these challenges, podcasts can be an astonishingly effective tool to promote research or academic knowledge. If you avoid the many pitfalls, you have a communication tool with full control of the message; a tool that exists in perpetuity, drawing attention to the work that you do.

Here, a highly experienced podcast producer and associate lecturer at the University of the West of Scotland and an award-winning former broadcast journalist draw on their experiences to share advice on how to successfully launch a research podcast.


Also from timeshighereducation.com:

An introvert’s guide to networking — from timeshighereducation.com by Yalinu Poya
For academics, networking can greatly enhance your career. But if the very idea fills you with dread, Yalinu Poya offers her advice for putting yourself out there

In academia, meeting the right person can lead to a research collaboration, or it could lead to your work being shared with someone who can use it to make a difference. It could lead to public speaking opportunities or even mentorship. It all goes towards your long-term success.

For some of us, the idea of putting yourself out there in that way – of making an active effort to meet new people – is terrifying.

 

Your AI Writing Partner: The 30-Day Book Framework — from aidisruptor.ai by Alex McFarland and Kamil Banc
How to Turn Your “Someday” Manuscript into a “Shipped” Project Using AI-Powered Prompts

With that out of the way, I prefer Claude.ai for writing. For larger projects like a book, create a Claude Project to keep all context in one place.

  • Copy [the following] prompts into a document
  • Use them in sequence as you write
  • Adjust the word counts and specifics as needed
  • Keep your responses for reference
  • Use the same prompt template for similar sections to maintain consistency

Each prompt builds on the previous one, creating a systematic approach to helping you write your book.


Using NotebookLM to Boost College Reading Comprehension — from michellekassorla.substack.com by Michelle Kassorla and Eugenia Novokshanova
This semester, we are using NotebookLM to help our students comprehend and engage with scholarly texts

We were looking hard for a new tool when Google released NotebookLM. Not only does Google allow unfettered use of this amazing tool, it is also a much better tool for the work we require in our courses. So, this semester, we have scrapped our “old” tools and added NotebookLM as the primary tool for our English Composition II courses (and we hope, fervently, that Google won’t decide to severely limit its free tier before this semester ends!)

If you know next-to-nothing about NotebookLM, that’s OK. What follows is the specific lesson we present to our students. We hope this will help you understand all you need to know about NotebookLM, and how to successfully integrate the tool into your own teaching this semester.


Leadership & Generative AI: Hard-Earned Lessons That Matter — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable Advice for Higher Education Leaders in 2025

After two years of working closely with leadership in multiple institutions, and delivering countless workshops, I’ve seen one thing repeatedly: the biggest challenge isn’t the technology itself, but how we lead through it. Here is some of my best advice to help you navigate generative AI with clarity and confidence:

  1. Break your own AI policies before you implement them.
  2. Fund your failures.
  3. Resist the pilot program. …
  4. Host Anti-Tech Tech Talks
  5. …+ several more tips

While generative AI in higher education obviously involves new technology, it’s much more about adopting a curious and human-centric approach in your institution and communities. It’s about empowering learners in new, human-oriented and innovative ways. It is, in a nutshell, about people adapting to new ways of doing things.



Maria Anderson responded to Clay’s posting with this idea:

Here’s an idea: […] the teacher can use the [most advanced] AI tool to generate a complete solution to “the problem” — whatever that is — and demonstrate how to do that in class. Give all the students access to the document with the results.

And then grade the students on a comprehensive followup activity / presentation of executing that solution (no notes, no more than 10 words on a slide). So the students all have access to the same deep AI result, but have to show they comprehend and can iterate on that result.



Grammarly just made it easier to prove the sources of your text in Google Docs — from zdnet.com by Jack Wallen
If you want to be diligent about proving your sources within Google Documents, Grammarly has a new feature you’ll want to use.

In this age of distrust, misinformation, and skepticism, you may wonder how to demonstrate your sources within a Google Document. Did you type it yourself, copy and paste it from a browser-based source, copy and paste it from an unknown source, or did it come from generative AI?

You may not think this is an important clarification, but if writing is a critical part of your livelihood or life, you will definitely want to demonstrate your sources.

That’s where the new Grammarly feature comes in.

The new feature is called Authorship, and according to Grammarly, “Grammarly Authorship is a set of features that helps users demonstrate their sources of text in a Google doc. When you activate Authorship within Google Docs, it proactively tracks the writing process as you write.”


AI Agents Are Coming to Higher Education — from govtech.com
AI agents are customizable tools with more decision-making power than chatbots. They have the potential to automate more tasks, and some schools have implemented them for administrative and educational purposes.

Custom GPTs are on the rise in education. Google’s version, Gemini Gems, includes a premade version called Learning Coach, and Microsoft announced last week a new agent addition to Copilot featuring use cases at educational institutions.


Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy — from er.educause.edu by Vicki Strunk and James Willis
Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

The principle of autonomy stresses that we should be free agents who can govern ourselves and who are able to make our own choices. This principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy and many more remain to be asked, we hope that this serves as a starting place to consider the uses of AI in higher education.

 

Students Pushback on AI Bans, India Takes a Leading Role in AI & Education & Growing Calls for Teacher Training in AI — from learningfuturesdigest.substack.com by Dr. Philippa Hardman
Key developments in the world of AI & Education at the turn of 2025

At the end of 2024 and start of 2025, we’ve witnessed some fascinating developments in the world of AI and education, from India’s emergence as a leader in AI education and Nvidia’s plans to build an AI school in Indonesia to Stanford’s Tutor CoPilot improving outcomes for underserved students.

Other highlights include Carnegie Learning partnering with AI for Education to train K-12 teachers, early adopters of AI sharing lessons about implementation challenges, and AI super users reshaping workplace practices through enhanced productivity and creativity.

Also mentioned by Philippa:


ElevenLabs AI Voice Tool Review for Educators — from aiforeducation.io with Amanda Bickerstaff and Mandy DePriest

AI for Education reviewed the ElevenLabs AI Voice Tool through an educator lens, digging into the new autonomous voice agent functionality that facilitates interactive user engagement. We showcase the creation of a customized vocabulary bot, which defines words at a 9th-grade level and includes options for uploading supplementary material. The demo includes real-time testing of the bot’s capabilities in defining terms and quizzing users.

The discussion also explored the AI tool’s potential for aiding language learners and neurodivergent individuals, and Mandy presented a phone conversation coach bot to help her 13-year-old son, highlighting the tool’s ability to provide patient, repetitive practice opportunities.

While acknowledging the technology’s potential, particularly in accessibility and language learning, we also want to emphasize the importance of supervised use and privacy considerations. The tool is currently free, but that likely won’t always be the case, so we encourage everyone to explore and test it out now as it continues to develop.


How to Use Google’s Deep Research, Learn About and NotebookLM Together — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
Supercharging your research with Google Deepmind’s new AI Tools.

Why Combine Them?
Faster Onboarding: Start broad with Deep Research, then refine and clarify concepts through Learn About. Finally, use NotebookLM to synthesize everything into a cohesive understanding.

Deeper Clarity: Unsure about a concept uncovered by Deep Research? Head to Learn About for a primer. Want to revisit key points later? Store them in NotebookLM and generate quick summaries on demand.

Adaptive Exploration: Create a feedback loop. Let new terms or angles from Learn About guide more targeted Deep Research queries. Then, compile all findings in NotebookLM for future reference.


Getting to an AI Policy Part 1: Challenges — from aiedusimplified.substack.com by Lance Eaton, PH.D.
Why institutional policies are slow to emerge in higher education

There are several challenges to policy-making that make institutions hesitant to produce policy or that delay their ability to do so. Policy (as opposed to guidance) is much more likely to involve a mixture of IT, HR, and legal services. This means each of those entities has to wrap their heads around GenAI—not just for their own areas but for other relevant areas such as teaching & learning, research, and student support. This process can definitely extend the time it takes to figure out the right policy.

That’s naturally true of every policy. It rarely comes fast enough and is often more reactive than proactive.

Still, in my conversations and observations, the delay derives from three additional intersecting elements that feel like they all need to be in lockstep in order to actually take advantage of whatever possibilities GenAI has to offer.

  1. Which Tool(s) To Use
  2. Training, Support, & Guidance, Oh My!
  3. Strategy: Setting a Direction…

Prophecies of the Flood — from oneusefulthing.org by Ethan Mollick
What to make of the statements of the AI labs?

What concerns me most isn’t whether the labs are right about this timeline – it’s that we’re not adequately preparing for what even current levels of AI can do, let alone the chance that they might be correct. While AI researchers are focused on alignment, ensuring AI systems act ethically and responsibly, far fewer voices are trying to envision and articulate what a world awash in artificial intelligence might actually look like. This isn’t just about the technology itself; it’s about how we choose to shape and deploy it. These aren’t questions that AI developers alone can or should answer. They’re questions that demand attention from organizational leaders who will need to navigate this transition, from employees whose work lives may transform, and from stakeholders whose futures may depend on these decisions. The flood of intelligence that may be coming isn’t inherently good or bad – but how we prepare for it, how we adapt to it, and most importantly, how we choose to use it, will determine whether it becomes a force for progress or disruption. The time to start having these conversations isn’t after the water starts rising – it’s now.


 

The Best of AI 2024: Top Winners Across 9 Categories — from aiwithallie.beehiiv.com by Allie Miller
2025 will be our weirdest year in AI yet. Read this so you’re more prepared.


Top AI Tools of 2024 — from ai-supremacy.com by Michael Spencer (behind a paywall)
Which AI tools stood out for me in 2024? My list.

Memorable AI Tools of 2024
Categories included:

  • Useful
  • Popular
  • Captures the zeitgeist of AI product innovation
  • Fun to try
  • Personally satisfying
  1. NotebookLM
  2. Perplexity
  3. Claude

New “best” AI tool? Really? — from theneurondaily.com by Noah and Grant
PLUS: A free workaround to the “best” new AI…

What is Google’s Deep Research tool, and is it really “the best” AI research tool out there?

Here’s how it works: Think of Deep Research as a research team that can simultaneously analyze 50+ websites, compile findings, and create comprehensive reports—complete with citations.

Unlike asking ChatGPT to research for you, Deep Research shows you its research plan before executing, letting you edit the approach to get exactly what you need.
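That "plan first, let the user edit, then execute" pattern is the key structural difference from a one-shot chatbot query. Here is a minimal, purely illustrative sketch of the pattern; the names (`ResearchPlan`, `draft_plan`, `execute`) are hypothetical and not Google's actual implementation:

```python
# Hypothetical sketch of a plan-then-execute research loop.
# A real tool would use an LLM to draft the plan and a browser/search
# backend in place of `fetch`; both are stubbed here.
from dataclasses import dataclass, field

@dataclass
class ResearchPlan:
    topic: str
    steps: list[str] = field(default_factory=list)

def draft_plan(topic: str) -> ResearchPlan:
    # Stand-in for the model proposing a research plan.
    return ResearchPlan(topic, [
        f"Find recent overviews of {topic}",
        f"Collect primary sources on {topic}",
        f"Summarize findings on {topic} with citations",
    ])

def execute(plan: ResearchPlan, fetch) -> list[str]:
    # `fetch` is any callable mapping a step description to retrieved text.
    return [fetch(step) for step in plan.steps]

plan = draft_plan("robot capabilities at CES")
# The user reviews and edits the plan BEFORE anything runs:
plan.steps.insert(0, "Clarify which CES year is in scope")
notes = execute(plan, fetch=lambda step: f"[notes for: {step}]")
print(len(notes))  # 4
```

The editable plan object is what lets you steer the approach before spending 5-30 minutes of agent time.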

It’s currently free for the first month (though it’ll eventually be $20/month) when bundled with Gemini Advanced. Then again, Perplexity is always free…just saying.

We couldn’t just take J-Cal’s word for it, so we rounded up some other takes:

Our take: We compared Perplexity, ChatGPT Search, and Deep Research (which we’re calling DR, or “The Docta” for short) on robot capabilities revealed at CES:


An excerpt from today’s Morning Edition from Bloomberg

Global banks will cut as many as 200,000 jobs in the next three to five years—a net 3% of the workforce—as AI takes on more tasks, according to a Bloomberg Intelligence survey. Back, middle office and operations are most at risk. A reminder that Citi said last year that AI is likely to replace more jobs in banking than in any other sector. JPMorgan had a more optimistic view (from an employee perspective, at any rate), saying its AI rollout has augmented, not replaced, jobs so far.


 

 

AI educators are coming to this school – and it’s part of a trend — from techradar.com by Eric Hal Schwartz
Two hours of lessons, zero teachers

  • An Arizona charter school will use AI instead of human teachers for two hours a day on academic lessons.
  • The AI will customize lessons in real-time to match each student’s needs.
  • The company has only tested this idea at private schools before but claims it hugely increases student academic success.

One school in Arizona is trying out a new educational model built around AI and a two-hour school day. When Arizona’s Unbound Academy opens, the only teachers will be artificial intelligence algorithms, a perfect utopia or a dystopia, depending on your point of view.


AI in Instructional Design: reflections on 2024 & predictions for 2025 — from drphilippahardman.substack.com by Dr. Philippa Hardman
Aka, four new year’s resolutions for the AI-savvy instructional designer.


Debating About AI: A Free Comprehensive Guide to the Issues — from stefanbauschard.substack.com by Stefan Bauschard

In order to encourage and facilitate debate on key controversies related to AI, I put together this free 130+ page guide to the main arguments and ideas related to the controversies.


Universities need to step up their AGI game — from futureofbeinghuman.com by Andrew Maynard
As Sam Altman and others push toward a future where AI changes everything, universities need to decide if they’re going to be leaders or bystanders in helping society navigate advanced AI transitions

And because of this, I think there’s a unique opportunity for universities (research universities in particular) to up their game and play a leadership role in navigating the coming advanced AI transition.

Of course, there are already a number of respected university-based initiatives that are working on parts of the challenge. Stanford HAI (Human-centered Artificial Intelligence) is one that stands out, as does the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Center for Governance of AI at the University of Oxford. But these and other initiatives are barely scratching the surface of what is needed to help successfully navigate advanced AI transitions.

If universities are to be leaders rather than bystanders in ensuring human flourishing in an age of AI, there’s an urgent need for bolder and more creative forward-looking initiatives that support research, teaching, thought leadership, and knowledge mobilization, at the intersection of advanced AI and all aspects of what it means to thrive and grow as a species.


 

 

1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.
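The contrast with a conventional (non-AI) agent can be sketched in a few lines. This is a toy illustration under my own assumptions, not any vendor's implementation: the scripted agent follows a fixed rule, while the agentic loop repeatedly lets a reasoning step (an LLM in practice, a stub here) choose the next tool until the goal is met:

```python
# Toy contrast: scripted agent vs. adaptive agentic loop (all names hypothetical).

def scripted_agent(ticket: str) -> str:
    # Fixed rule, no adaptation.
    return "refund issued" if "refund" in ticket else "escalate to human"

def agentic_loop(goal, tools, decide, max_steps=5):
    """Pursue `goal` by repeatedly choosing a tool until `decide` says done.
    `decide` plays the role of the reasoning engine."""
    history = []
    for _ in range(max_steps):
        action = decide(goal, history)
        if action == "done":
            break
        history.append((action, tools[action]()))
    return history

tools = {
    "look_up_order": lambda: "order #123 found",
    "issue_refund": lambda: "refund sent",
}

def decide(goal, history):
    # Stub policy: consult what has been done so far, then pick the next step.
    done = [action for action, _ in history]
    if "look_up_order" not in done:
        return "look_up_order"
    if "issue_refund" not in done:
        return "issue_refund"
    return "done"

print(scripted_agent("please refund my order"))        # refund issued
trace = agentic_loop("refund order #123", tools, decide)
print([action for action, _ in trace])                 # ['look_up_order', 'issue_refund']
```

The point of the loop is that `decide` sees the accumulated history, so the plan can change mid-run, which is the autonomy and adaptation the list above describes.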

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.
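
One way to picture the privacy-preserving idea (our reading of the post, not Anthropic's actual pipeline) is aggregation with a minimum-group-size floor: conversations are reduced to topic labels, and only topics shared by enough distinct conversations are ever surfaced, so no individual's usage is identifiable.

```python
# Hedged sketch of Clio-style aggregation: report topic-level trends while
# suppressing any topic too rare to be safely aggregated. The threshold k
# and the topic labels are illustrative assumptions.
from collections import Counter

def aggregate_topics(conversations, k=3):
    """Return topic counts, suppressing any topic with fewer than k conversations."""
    counts = Counter(topic for _, topic in conversations)
    return {t: n for t, n in counts.items() if n >= k}  # k-anonymity-style floor

convos = [("u1", "coding help"), ("u2", "coding help"), ("u3", "coding help"),
          ("u4", "resume tips"), ("u5", "resume tips"),
          ("u6", "medical question")]
report = aggregate_topics(convos, k=3)  # rare topics never appear in the report
```

The Google Trends analogy in the post maps onto this shape: analysts see "coding help is common" without ever seeing who asked, and low-frequency topics are dropped entirely.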


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 

Introducing Gemini 2.0: our new AI model for the agentic era — from blog.google by Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu

Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.

We’re getting 2.0 into the hands of developers and trusted testers today. And we’re working quickly to get it into our products, leading with Gemini and Search. Starting today our Gemini 2.0 Flash experimental model will be available to all Gemini users. We’re also launching a new feature called Deep Research, which uses advanced reasoning and long context capabilities to act as a research assistant, exploring complex topics and compiling reports on your behalf. It’s available in Gemini Advanced today.

Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.


Try Deep Research and our new experimental model in Gemini, your AI assistant — from blog.google by Dave Citron
Deep Research rolls out to Gemini Advanced subscribers today, saving you hours of time. Plus, you can now try out a chat optimized version of 2.0 Flash Experimental in Gemini on the web.

Today, we’re sharing the latest updates to Gemini, your AI assistant, including Deep Research — our new agentic feature in Gemini Advanced — and access to try Gemini 2.0 Flash, our latest experimental model.

Deep Research uses AI to explore complex topics on your behalf and provide you with findings in a comprehensive, easy-to-read report, and is a first look at how Gemini is getting even better at tackling complex tasks to save you time.


Google Unveils A.I. Agent That Can Use Websites on Its Own — from nytimes.com by Cade Metz and Nico Grant (NOTE: This is a GIFTED article for/to you.)
The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user.

Google on Wednesday unveiled a prototype of this technology, which artificial intelligence researchers call an A.I. agent.

Google’s new prototype, called Mariner, is based on Gemini 2.0, which the company also unveiled on Wednesday. Gemini is the core technology that underpins many of the company’s A.I. products and research experiments. Versions of the system will power the company’s chatbot of the same name and A.I. Overviews, a Google search tool that directly answers user questions.


Gemini 2.0 is the next chapter for Google AI — from axios.com by Ina Fried

Google Gemini 2.0 — a major upgrade to the core workings of Google’s AI that the company launched Wednesday — is designed to help generative AI move from answering users’ questions to taking action on its own…

The big picture: Hassabis said building AI systems that can take action on their own has been DeepMind’s focus since its early days teaching computers to play games such as chess and Go.

  • “We were always working towards agent-based systems,” Hassabis said. “From the beginning, they were able to plan and then carry out actions and achieve objectives.”
  • Hassabis said AI systems that can act as semi-autonomous agents also represent an important intermediate step on the path toward artificial general intelligence (AGI) — AI that can match or surpass human capabilities.
  • “If we think about the path to AGI, then obviously you need a system that can reason, break down problems and carry out actions in the world,” he said.

AI Agents vs. AI Assistants: Know the Key Differences — from aithority.com by Rishika Patel

The same paradigm applies to AI systems. AI assistants function as reactive tools, completing tasks like answering queries or managing workflows upon request. Think of chatbots or scheduling tools. AI agents, however, work autonomously to achieve set objectives, making decisions and executing tasks dynamically, adapting as new information becomes available.

Together, AI assistants and agents can enhance productivity and innovation in business environments. While assistants handle routine tasks, agents can drive strategic initiatives and problem-solving. This powerful combination has the potential to elevate organizations, making processes more efficient and professionals more effective.


Discover how to accelerate AI transformation with NVIDIA and Microsoft — from ignite.microsoft.com

Meet NVIDIA – The Engine of AI. From gaming to data science, self-driving cars to climate change, we’re tackling the world’s greatest challenges and transforming everyday life. The Microsoft and NVIDIA partnership enables Startups, ISVs, and Partners global access to the latest NVIDIA GPUs on-demand and comprehensive developer solutions to build, deploy and scale AI-enabled products and services.


Google + Meta + Apple New AI — from theneurondaily.com by Grant Harvey

What else Google announced:

  • Deep Research: New feature that can explore topics and compile reports.
  • Project Astra: AI agent that can use Google Search, Lens, and Maps, understands multiple languages, and has 10-minute conversation memory.
  • Project Mariner: A browser control agent that can complete web tasks (83.5% success rate on WebVoyager benchmark). Read more about Mariner here.
  • Agents to help you play (or test) video games.

AI Agents: Easier To Build, Harder To Get Right — from forbes.com by Andres Zunino

The swift progress of artificial intelligence (AI) has simplified the creation and deployment of AI agents with the help of new tools and platforms. Beneath the surface, however, deploying these systems comes with hidden challenges, particularly concerning ethics, fairness and the potential for bias.

The history of AI agents highlights the growing need for expertise to fully realize their benefits while effectively minimizing risks.

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, and showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says a landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.

Findings from learning analytics research is also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
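
The "few simple instructions" claim above can be sketched as a classification pipeline: each free-text student answer is bucketed into a rubric category, and the buckets are counted. The `classify` function below is a keyword-based stand-in for a real LLM call (the rubric, answers, and prompt wording are all hypothetical), but the surrounding count-and-analyze shape is the same.

```python
# Sketch of turning free-text student work into analyzable numbers.
# classify() is a placeholder for an LLM call such as:
#   llm(f"Classify this answer as one of {RUBRIC}: {answer}")
from collections import Counter

RUBRIC = ["correct", "partially correct", "off-topic"]

def classify(answer):
    # keyword stand-in for the LLM's judgment, for illustration only
    if "photosynthesis" in answer and "light" in answer:
        return "correct"
    if "photosynthesis" in answer:
        return "partially correct"
    return "off-topic"

answers = ["photosynthesis uses light to make sugar",
           "photosynthesis happens in plants",
           "my dog ate my homework"]
counts = Counter(classify(a) for a in answers)  # text -> numbers educators can analyze
```

Swapping the keyword rules for an actual model call is what makes this scale to "vast amounts of student work," at the cost of needing to validate the model's labels against human grading.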


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question. What can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and HubSpot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories in software review sites.

 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to let AI agents act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs, as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 
 
© 2025 | Daniel Christian