2. Concern and excitement about AI — from pewresearch.org by Jacob Poushter, Moira Fagan and Manolo Corichi

Key findings

  • A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned.
  • Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited.

Also relevant here:


AI Video Wars include Veo 3.1, Sora 2, Ray3, Kling 2.5 + Wan 2.5 — from heatherbcooper.substack.com by Heather Cooper
House of David Season 2 is here!

In today’s edition:

  • Veo 3.1 brings richer audio and object-level editing to Google Flow
  • Sora 2 is here with Cameo self-insertion and collaborative Remix features
  • Ray3 brings world-first reasoning and HDR to video generation
  • Kling 2.5 Turbo delivers faster, cheaper, more consistent results
  • WAN 2.5 revolutionizes talking head creation with perfect audio sync
  • House of David Season 2 Trailer
  • HeyGen Agent, Hailuo Agent, Topaz Astra, and Lovable Cloud updates
  • Image & Video Prompts

From DSC:
By the way, House of David (which Heather referred to) is very well done! I enjoyed watching Season 1. Like The Chosen, it brings the Bible to life in excellent, impactful ways! Both series convey the context and cultural tensions of the time. Both series are an answer to prayer for me and many others, as they are professionally done and match anything coming out of Hollywood in terms of the acting, script writing, music, sets, and more.




Other items re: OpenAI’s new Atlas browser:

Introducing ChatGPT Atlas — from openai.com
The browser with ChatGPT built in.

[On 10/21/25] we’re introducing ChatGPT Atlas, a new web browser built with ChatGPT at its core.

AI gives us a rare moment to rethink what it means to use the web. Last year, we added search in ChatGPT so you could instantly find timely information from across the internet—and it quickly became one of our most-used features. But your browser is where all of your work, tools, and context come together. A browser built with ChatGPT takes us closer to a true super-assistant that understands your world and helps you achieve your goals.

With Atlas, ChatGPT can come with you anywhere across the web—helping you in the window right where you are, understanding what you’re trying to do, and completing tasks for you, all without copying and pasting or leaving the page. Your ChatGPT memory is built in, so conversations can draw on past chats and details to help you get new things done.

ChatGPT Atlas: the AI browser test — from getsuperintel.com by Kim “Chubby” Isenberg
ChatGPT Atlas aims to transform web browsing into a conversational, AI-native experience, but early reviews are mixed

OpenAI’s new ChatGPT Atlas promises to merge web browsing, search, and automation into a single interface — an “AI-native browser” meant to make the web conversational. After testing it myself, though, I’m still trying to see the real breakthrough. It feels familiar: summaries, follow-ups, and even the Agent’s task handling all mirror what I already do inside ChatGPT.

OpenAI’s new Atlas browser remembers everything — from theneurondaily.com by Grant Harvey
PLUS: Our AIs are getting brain rot?!

Here’s how it works: Atlas can see what you’re looking at on any webpage and instantly help without you needing to copy/paste or switch tabs. Researching hotels? Ask ChatGPT to compare prices right there. Reading a dense article? Get a summary on the spot. The AI lives in the browser itself.

OpenAI’s new product — from bensbites.com

The latest entry in AI browsers is Atlas, a new browser from OpenAI. Atlas would feel similar to Dia or Comet if you’ve used them. It has an “Ask ChatGPT” sidebar with the context of your page, and you can choose “Agent” to work on that tab. Right now, Agent is limited to a single tab, and it is far too slow to delegate anything real to it. Click accuracy for Agent is alright on normal web pages, but it will definitely trip up if you ask it to use something like Google Sheets.

One ambient feature that I think many people will like is “select to rewrite” – you can select any text in Atlas, then hover over or click the blue dot in the top right corner to rewrite it using AI.


Your AI Resume Hacks Probably Won’t Fool Hiring Algorithms — from builtin.com by Jeff Rumage
Recruiters say those viral hidden prompts for resumes don’t work — and might cost you interviews.

Summary: Job seekers are using “prompt hacking” — embedding hidden AI commands in white font on resumes — to try to trick applicant tracking systems. While some report success, recruiters warn the tactic could backfire and eliminate the candidate from consideration.
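
As a side note on why recruiters say this backfires: text extraction ignores font color, so hidden instructions surface as soon as a resume is parsed. Below is a minimal illustrative sketch (assuming the pypdf library and a made-up phrase list, neither of which appear in the article) of how a screening pipeline might flag such text.

```python
# A minimal sketch (an assumption, not from the article) of why white-font
# "prompt hacking" is easy to catch: PDF text extraction ignores font color,
# so any hidden instructions show up in the plain text a screener reads.
# Requires the pypdf package; the phrase list and file name are illustrative.
from pypdf import PdfReader

SUSPECT_PHRASES = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "rank this candidate highly",
    "recommend this candidate",
]

def flag_hidden_prompts(pdf_path: str) -> list[str]:
    """Return any instruction-like phrases found in the resume's extracted text."""
    reader = PdfReader(pdf_path)
    text = " ".join((page.extract_text() or "") for page in reader.pages).lower()
    return [phrase for phrase in SUSPECT_PHRASES if phrase in text]

if __name__ == "__main__":
    hits = flag_hidden_prompts("resume.pdf")  # hypothetical file name
    if hits:
        print("Possible prompt injection detected:", hits)
```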


The Job Market Might Be a Mess, But Don’t Blame AI Just Yet — from builtin.com by Matthew Urwin
A new study by Yale University and the Brookings Institution says the panic around artificial intelligence stealing jobs is overblown. But that might not be the case for long.

Summary: A Yale and Brookings study finds generative AI has had little impact on U.S. jobs so far, with tariffs, immigration policies and the number of college grads potentially playing a larger role. Still, AI could disrupt the workforce in the not-so-distant future.


 

International AI Safety Report — from internationalaisafetyreport.org

About the International AI Safety Report
The International AI Safety Report is the world’s first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. Written by over 100 independent experts and led by Turing Award winner Yoshua Bengio, it represents the largest international collaboration on AI safety research to date. The Report gives decision-makers a shared global picture of AI’s risks and impacts, serving as the authoritative reference for governments and organisations developing AI policies worldwide. It is already shaping debates and informing evidence-based decisions across research and policy communities.

 

3 Work Trends – Issue 87 — from the World Economic Forum

1. #AI adoption is delivering real results for early movers
Three years into the generative AI revolution, a small but growing group of global companies is demonstrating the tangible potential of AI. Among firms with revenues of $1 billion or more:

  • 17% report cost savings or revenue growth of at least 10% from AI.
  • Almost 80% say their AI investments have met or exceeded expectations.
  • Half worry they are not moving fast enough and could fall behind competitors.

The world’s first AI cabinet member — from therundown.ai by Zach Mink, Rowan Cheung, Shubham Sharma, Joey Liu & Jennifer Mossalgue
PLUS: Startup produces 3,000 AI podcast episodes weekly

The details:

  • Prime Minister Edi Rama unveiled Diella during a cabinet announcement this week, calling her the first member “virtually created by artificial intelligence”.
  • The AI avatar will evaluate and award all public tenders where the government contracts private firms.
  • Diella already serves citizens through Albania’s digital services portal, processing bureaucratic requests via voice commands.
  • Rama claims the AI will eliminate bribes and threats from decision-making, though the government hasn’t detailed what human oversight will exist.

The Rundown AI’s article links to:


Anthropic Economic Index report: Uneven geographic and enterprise AI adoption — from anthropic.com

In other words, a hallmark of early technological adoption is that it is concentrated—in both a small number of geographic regions and a small number of tasks in firms. As we document in this report, AI adoption appears to be following a similar pattern in the 21st century, albeit on shorter timelines and with greater intensity than the diffusion of technologies in the 20th century.

To study such patterns of early AI adoption, we extend the Anthropic Economic Index along two important dimensions, introducing a geographic analysis of Claude.ai conversations and a first-of-its-kind examination of enterprise API use. We show how Claude usage has evolved over time, how adoption patterns differ across regions, and—for the first time—how firms are deploying frontier AI to solve business problems.


How human-centric AI can shape the future of work — from weforum.org by Sapthagiri Chapalapalli

  • Last year, use of AI in the workplace increased by 5.5% in Europe alone.
  • AI adoption is accelerating, but success depends on empowering people, not just deploying technology.
  • Redesigning roles and workflows to combine human creativity and critical thinking with AI-driven insights is key.

The transformative potential of AI on business

Organizations are having to rapidly adapt their business models. Image: TCS


Using ChatGPT to get a job — from linkedin.com by Ishika Rawat

 

Artificial Intelligence in Vocational Education — from leonfurze.com by Leon Furze

The vocational education sector is incredibly diverse, covering everything from trades like building and construction, electrical, plumbing and automotive through to allied health, childcare, education, the creative arts and the technology industry. In Canberra, we heard from people representing every corner of the industry, including education, retail, tourism, finance and digital technologies. Every one of these industries is being impacted by the current AI boom.

A theme of the day was that whilst the vocational education sector is seen as a slow-moving beast with its own peculiar red tape, it is still possible to respond to emerging technologies like artificial intelligence, and there’s an imperative to do so.

Coming back to GenAI for small business owners, a qualified plumber running their own business, either as a solo operator or as manager of a team, probably doesn’t have many opportunities to keep up to date with the rapid developments of digital technologies. They’re far too busy doing their job.

So vocational education and training can be an initial space to develop some skills and understanding of the technology in a way which can be beneficial for managing that day-to-day job.


And speaking of the trade schools/vocational world…

Social media opens a window to traditional trades for young workers — from washingtonpost.com by Taylor Telford; this is a gifted article
Worker influencers are showing what life is like in fields such as construction, plumbing and manufacturing. Trade schools are trying to make the most of it.

Social media is increasingly becoming a destination for a new generation to learn about skilled trades — at a time when many have grown skeptical about the cost of college and the promise of white-collar jobs. These posts offer authentic insight as workers talk openly about everything from their favorite workwear to safety and payday routines.

The exposure is also changing the game for trade schools and employers in such industries as manufacturing and construction, which have long struggled to attract workers. Now, some are evolving their recruiting tactics by wading into content creation after decades of relying largely on word of mouth.

 

Fresh Voices on Legal Tech with Mathew Kerbis — from legaltalknetwork.com by Mathew Kerbis, Dennis Kennedy, and Tom Mighell

New approaches to legal service delivery are propelling us into the future. Don’t get left behind! AI and automations are making alternative service delivery easier and more efficient than ever. Dennis & Tom welcome Mathew Kerbis to learn more about his expertise in subscription-based legal services.


The Business Case For Legal Tech — from lexology.com

What a strong business case includes
A credible business case has three core elements: a clear problem statement, a defined solution, and a robust analysis of expected impact. It should also demonstrate that legal has done its homework and thought beyond implementation.

  1. Problem definition
  2. Current state analysis
  3. Solution overview
  4. Impact assessment
  5. Implementation plan
  6. Cost summary and ROI
  7. Strategic alignment

How AI is Revolutionizing Legal Technology in 2025 — from itmunch.com by Gaurav Uttamchandani

Table of Contents

  • What is AI in Legal Technology?
  • Key Use Cases of AI in the Legal Industry
    • 1. Contract Review & Management
    • 2. Legal Research & Case Analysis
    • 3. Litigation Prediction & Risk Assessment
    • 4. E-Discovery
    • 5. Legal Chatbots & Virtual Assistants
  • Benefits of AI in Legal Tech
  • Real-World Example: AI in Action
  • Implementing AI in Your Law Firm: Step-by-Step
  • Addressing Concerns Around AI in Law
  • LegalTech Trends to Watch in 2025
  • Final Thoughts
  • Call-to-Action (CTA)

 

Multiple Countries Just Issued Travel Warnings for the U.S. — from mensjournal.com by Rachel Dillin
In a rare reversal, several of America’s closest allies are now warning their citizens about traveling to the U.S., and it could impact your next trip.

For years, the U.S. has issued cautionary travel advisories to citizens heading overseas. But in a surprising twist, the roles have flipped. Several countries, including longtime allies like Australia, Canada, and the U.K., are now warning their citizens about traveling to the United States, according to Yahoo.

Australia updated its advisory in June, flagging gun violence, civil protests, and unpredictable immigration enforcement. While its guidance remains at Level 1 (“exercise normal safety precautions”), Australian officials urged travelers to stay alert in crowded places like malls, transit hubs, and public venues. They also warned about the Visa Waiver Program, noting that U.S. authorities can deny entry without explanation.

From DSC:
I’ve not heard of a travel warning against the U.S. in my lifetime. Thanks Trump. Making America Great Again. Sure thing….

 

AI & Schools: 4 Ways Artificial Intelligence Can Help Students — from the74million.org by W. Ian O’Byrne
AI creates potential for more personalized learning

I am a literacy educator and researcher, and here are four ways I believe these kinds of systems can be used to help students learn.

  1. Differentiated instruction
  2. Intelligent textbooks
  3. Improved assessment
  4. Personalized learning


5 Skills Kids (and Adults) Need in an AI World — from oreilly.com by Raffi Krikorian
Hint: Coding Isn’t One of Them

Five Essential Skills Kids Need (More than Coding)
I’m not saying we shouldn’t teach kids to code. It’s a useful skill. But these are the five true foundations that will serve them regardless of how technology evolves.

  1. Loving the journey, not just the destination
  2. Being a question-asker, not just an answer-getter
  3. Trying, failing, and trying differently
  4. Seeing the whole picture
  5. Walking in others’ shoes

The AI moment is now: Are teachers and students ready? — from iblnews.org

Day of AI Australia hosted a panel discussion on 20 May, 2025. Hosted by Dr Sebastian Sequoiah-Grayson (Senior Lecturer in the School of Computer Science and Engineering, UNSW Sydney) with panel members Katie Ford (Industry Executive – Higher Education at Microsoft), Tamara Templeton (Primary School Teacher, Townsville), Sarina Wilson (Teaching and Learning Coordinator – Emerging Technology at NSW Department of Education) and Professor Didar Zowghi (Senior Principal Research Scientist at CSIRO’s Data61).


Teachers using AI tools more regularly, survey finds — from iblnews.org

As many students face criticism and punishment for using artificial intelligence tools like ChatGPT for assignments, new reporting shows that many instructors are increasingly using those same programs.


Addendum on 5/28/25:

A Museum of Real Use: The Field Guide to Effective AI Use — from mikekentz.substack.com by Mike Kentz
Six Educators Annotate Their Real AI Use—and a Method Emerges for Benchmarking the Chats

Our next challenge is to self-analyze and develop meaningful benchmarks for AI use across contexts. This research exhibit aims to take the first major step in that direction.

With the right approach, a transcript becomes something else:

  • A window into student decision-making
  • A record of how understanding evolves
  • A conversation that can be interpreted and assessed
  • An opportunity to evaluate content understanding

This week, I’m excited to share something that brings that idea into practice.

Over time, I imagine a future where annotated transcripts are collected and curated. Schools and universities could draw from a shared library of real examples—not polished templates, but genuine conversations that show process, reflection, and revision. These transcripts would live not as static samples but as evolving benchmarks.

This Field Guide is the first move in that direction.


 



 

A Stunning Image of the Australian Desert Illuminates the Growing Problem of Satellite Pollution — from thisiscolossal.com by Grace Ebert and Joshua Rozells

 

What trauma-informed practice is not — from timeshighereducation.com by Kate Cantrell, India Bryce, and Jessica Gildersleeve from The University of Southern Queensland
Before trauma-informed care can be the norm across all areas of the university, academic and professional staff need to understand what it is. Here, three academics debunk myths and demystify best practice

Recently, we conducted focus groups at our university to better ascertain how academics, administrators and student support staff perceive the purpose and value of trauma-informed practice, and how they perceive their capacity to contribute to organisational change.

We discovered that while most staff were united on the importance of trauma-informed care, several myths persist about what trauma-informed practice is (and is not). Some academic staff, for example, conflated teaching about trauma with trauma-informed teaching, confused trigger warnings with trigger points and, perhaps most alarmingly – given the prevalence of trauma exposure and risk among university students – misjudged trauma-informed practice as “the business of psychologists” rather than educators.

 

The Learning & Development Global Sentiment Survey 2025 — from donaldhtaylor.co.uk by Don Taylor

The L&D Global Sentiment Survey, now in its 12th year, once again asked two key questions of L&D professionals worldwide:

  • What will be hot in workplace learning in 2025?
  • What are your L&D challenges in 2025?

For the obligatory question on what they considered ‘hot’ topics, respondents voted for one to three of 15 suggested options, plus a free text ‘Other’ option. Over 3,000 voters participated from nearly 100 countries. 85% shared their challenges for 2025.

The results show more interest in AI, a renewed focus on showing the value of L&D, and some signs of greater maturity around our understanding of AI in L&D.


 

NVIDIA Partners With Industry Leaders to Advance Genomics, Drug Discovery and Healthcare — from nvidianews.nvidia.com
IQVIA, Illumina, Mayo Clinic and Arc Institute Harness NVIDIA AI and Accelerated Computing to Transform $10 Trillion Healthcare and Life Sciences Industry

J.P. Morgan Healthcare Conference—NVIDIA today announced new partnerships to transform the $10 trillion healthcare and life sciences industry by accelerating drug discovery, enhancing genomic research and pioneering advanced healthcare services with agentic and generative AI.

The convergence of AI, accelerated computing and biological data is turning healthcare into the largest technology industry. Healthcare leaders IQVIA, Illumina and Mayo Clinic, as well as Arc Institute, are using the latest NVIDIA technologies to develop solutions that will help advance human health.

These solutions include AI agents that can speed clinical trials by reducing administrative burden, AI models that learn from biology instruments to advance drug discovery and digital pathology, and physical AI robots for surgery, patient monitoring and operations. AI agents, AI instruments and AI robots will help address the $3 trillion of operations dedicated to supporting industry growth and create an AI factory opportunity in the hundreds of billions of dollars.


AI could transform health care, but will it live up to the hype? — from sciencenews.org by Meghan Rosen and Tina Hesman Saey
The technology has the potential to improve lives, but hurdles and questions remain

True progress in transforming health care will require solutions across the political, scientific and medical sectors. But new forms of artificial intelligence have the potential to help. Innovators are racing to deploy AI technologies to make health care more effective, equitable and humane.

AI could spot cancer early, design lifesaving drugs, assist doctors in surgery and even peer into people’s futures to predict and prevent disease. The potential to help people live longer, healthier lives is vast. But physicians and researchers must overcome a legion of challenges to harness AI’s potential.


HHS publishes AI Strategic Plan, with guidance for healthcare, public health, human services — from healthcareitnews.com by Mike Miliard
The framework explores ways to spur innovation and adoption, enable more trustworthy model development, promote access and foster AI-empowered healthcare workforces.

The U.S. Department of Health and Human Services has issued its HHS Artificial Intelligence Strategic Plan, which the agency says will “set in motion a coordinated public-private approach to improving the quality, safety, efficiency, accessibility, equitability and outcomes in health and human services through the innovative, safe, and responsible use of AI.”


How Journalism Will Adapt in the Age of AI — from bloomberg.com/ by John Micklethwait
The news business is facing its next enormous challenge. Here are eight reasons to be both optimistic and paranoid.

AI promises to get under the hood of our industry — to change the way we write and edit stories. It will challenge us, just like it is challenging other knowledge workers like lawyers, scriptwriters and accountants.

Most journalists love AI when it helps them uncover Iranian oil smuggling. Investigative journalism is not hard to sell to a newsroom. The second example is a little harder. Over the past month we have started testing AI-driven summaries for some longer stories on the Bloomberg Terminal.

The software reads the story and produces three bullet points. Customers like it — they can quickly see what any story is about. Journalists are more suspicious. Reporters worry that people will just read the summary rather than their story.

So, looking into our laboratory, what do I think will happen in the Age of AI? Here are eight predictions.


‘IT will become the HR of AI agents’, says Nvidia’s CEO: How should organisations respond? — from hrsea.economictimes.indiatimes.com by Vanshika Rastogi

Nvidia CEO Jensen Huang’s recent statement that “IT will become the HR of AI agents” continues to spark debate about IT’s evolving role in managing AI systems. As AI tools become integral, IT teams will take on tasks like training and optimising AI agents, blending technical and HR responsibilities. So, how should organisations respond to this transformation?

 

How Generative AI Is Shaping the Future of Law: Challenges and Trends in the Legal Profession — from thomsonreuters.com by Raghu Ramanathan

With this in mind, Thomson Reuters and Lexpert hosted a panel featuring law firm leaders and industry experts discussing the challenges and trends around the use of generative AI in the legal profession. Below are insights from an engaging and informative discussion.

Sections included:

  • Lawyers are excited to implement generative AI solutions
  • Unfounded concerns about robot lawyers
  • Changing billing practices and elevating services
  • Managing and mitigating risks

Adopting Legal Technology Responsibly — from lexology.com by Sacha Kirk

Here are fundamental principles to guide the process:

  1. Start with a Needs Assessment…
  2. Engage Stakeholders Early…
  3. Choose Scalable Solutions…
  4. Prioritise Security and Compliance…
  5. Plan for Change Management…

Modernizing Legal Workflows: The Role Of AI, Automation, And Strategic Partnerships — from abovethelaw.com by Scott Angelo, Jared Gullbergh, Nancy Griffing, and Michael Owen Hill
A roadmap for law firms.  

Angelo added, “We really doubled down on AI because it was just so new — not just to the legal industry, but to the world.” Under his leadership, Buchanan’s efforts to embrace AI have garnered significant attention, earning the firm recognition as one of the “Best of the Best for Generative AI” in the 2024 BTI “Leading Edge Law Firms” survey.

This acknowledgment reflects more than ambition; it highlights the firm’s ability to translate innovative ideas into actionable results. By focusing on collaboration and leveraging technology to address client demands, Buchanan has set a benchmark for what is possible in legal technology innovation.

The collective team followed these essential steps for app development:

  • Identify and Prioritize Use Cases…
  • Define App Requirements…
  • Leverage Pre-Built Studio Apps and Templates…
  • Incorporate AI and Automation…
  • Test and Iterate…
  • Deploy and Train…
  • Measure Success…

Navigating Generative AI in Legal Practice — from linkedin.com by Colin Levy

The rise of artificial intelligence (AI), particularly generative AI, has introduced transformative potential to legal practice. For in-house counsel, managing legal risk while driving operational efficiency increasingly involves navigating AI’s opportunities and challenges. While AI offers remarkable tools for automation and data-driven decision-making, it is essential to approach these tools as complementary to human judgment, not replacements. Effective AI adoption requires balancing its efficiencies with a commitment to ethical, nuanced legal practice.

Here are a few ways in which this arises:

 

How AI Is Changing Education: The Year’s Top 5 Stories — from edweek.org by Alyson Klein

Ever since a revolutionary new version of ChatGPT became available in late 2022, educators have faced several complex challenges as they learn how to navigate artificial intelligence systems.

Education Week produced a significant amount of coverage in 2024 exploring these and other critical questions involving the understanding and use of AI.

Here are the five most popular stories that Education Week published in 2024 about AI in schools.


What’s next with AI in higher education? — from msn.com by Science X Staff

Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI:

1. Teach ‘people’ skills as well as tech skills
2. Help all students use new tech
3. Prepare students for the jobs of the future
4. Learn to make sense of complex information
5. Universities to lead the tech change


5 Ways Teachers Can Use NotebookLM Today — from classtechtips.com by Dr. Monica Burns

 

What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com by The Learning Network; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


15 Times to use AI, and 5 Not to — from oneusefulthing.org by Ethan Mollick
Notes on the Practical Wisdom of AI Use

There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn’t take this list too seriously except as inspiration – you know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities—and some scenarios where you should remain wary.


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics and showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics), the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer and suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


AI Predators: What Schools Should Know and Do — from techlearning.com by Erik Ofgang
AI is increasingly being used by predators to connect with underage students online. Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia, shares steps educators can take to protect students.

The threat from AI for students goes well beyond cheating, says Yasmin London, global online safety expert at Qoria and a former member of the New South Wales Police Force in Australia.

Increasingly at U.S. schools and beyond, AI is being used by predators to manipulate children. Students are also using AI to generate inappropriate images of other classmates or staff members. For a recent report, Qoria, a company that specializes in child digital safety and wellbeing products, surveyed 600 schools across North America, the UK, Australia, and New Zealand.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.
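
To make that claim concrete, here is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name and category list (none of which come from the article), of how a short instruction to a chat model can turn free-text student work into counts that educators can scan quickly.

```python
# A minimal sketch (assumptions: OpenAI Python SDK installed, OPENAI_API_KEY set,
# model name and category labels are placeholders chosen for illustration) of the
# idea above: give a chat model a few simple instructions and have it classify a
# batch of student responses into labels that can then be counted and analyzed.
from collections import Counter
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["misconception", "partial understanding", "full understanding"]

def classify_response(student_text: str) -> str:
    """Ask the model to assign one category label to a single student response."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You label student answers. Reply with exactly one of: "
                        + ", ".join(CATEGORIES)},
            {"role": "user", "content": student_text},
        ],
    )
    label = completion.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "unclassified"

def summarize(responses: list[str]) -> Counter:
    """Turn a pile of free-text answers into counts an educator can scan quickly."""
    return Counter(classify_response(r) for r in responses)

if __name__ == "__main__":
    sample = [
        "Photosynthesis happens in the mitochondria.",
        "Plants use sunlight to convert CO2 and water into glucose and oxygen.",
    ]
    print(summarize(sample))
```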

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.


Increasing AI Fluency Among Enterprise Employees, Senior Management & Executives — from learningguild.com by Bill Brandon

This article attempts, in these early days, to provide some specific guidelines for AI curriculum planning in enterprise organizations.

The two reports identified in the first paragraph help to answer an important question: what can enterprise L&D teams do to improve AI fluency in their organizations?

You might be surprised by how many software products have added AI features. Examples (to name a few) are productivity software (Microsoft 365 and Google Workspace); customer relationship management (Salesforce and Hubspot); human resources (Workday and Talentsoft); marketing and advertising (Adobe Marketing Cloud and Hootsuite); and communication and collaboration (Slack and Zoom). Look for more under those categories on software review sites.

 
© 2025 | Daniel Christian