The Other Regulatory Time Bomb — from onedtech.philhillaa.com by Phil Hill
Higher ed in the US is not prepared for what’s about to hit in April for new accessibility rules

Most higher-ed leaders have at least heard that new federal accessibility rules are coming in 2026 under Title II of the ADA, but it is apparent from conversations at the WCET and Educause annual conferences that very few understand what that actually means for digital learning and broad institutional risk. The rule isn’t some abstract compliance update: it requires every public institution to ensure that all web and media content meets WCAG 2.1 AA, including the use of audio descriptions for prerecorded video. Accessible PDF documents and video captions alone will no longer be enough. Yet on most campuses, the issue has been treated as little more than a compliance buzzword, delegated to accessibility coordinators and media specialists who lack the budget or authority to make systemic changes.

And no, relying on faculty to add audio descriptions en masse is not going to happen.

The result is a looming institutional risk that few presidents, CFOs, or CIOs have even quantified.
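For teams trying to scope that risk, even a rough automated inventory helps quantify the exposure before the deadline. The sketch below (Python, using the requests and BeautifulSoup libraries, with a placeholder course URL) flags video elements that declare no captions or descriptions track. It is a first-pass illustration only: real WCAG 2.1 AA review, and the audio descriptions themselves, still require human work.

```python
# Rough first-pass scan for <video> elements missing caption/description tracks.
# Illustrative only: a human review is still needed for WCAG 2.1 AA, and the
# URL below is a placeholder.
import requests
from bs4 import BeautifulSoup

def scan_page(url: str) -> list[dict]:
    """Return a report of <video> elements and the <track> kinds they declare."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    report = []
    for video in soup.find_all("video"):
        kinds = {t.get("kind", "subtitles") for t in video.find_all("track")}
        report.append({
            "src": video.get("src") or [s.get("src") for s in video.find_all("source")],
            "has_captions": "captions" in kinds,
            "has_description_track": "descriptions" in kinds,
        })
    return report

if __name__ == "__main__":
    for item in scan_page("https://example.edu/course/module-1"):  # placeholder URL
        if not item["has_description_track"]:
            print("Missing descriptions track:", item["src"])
```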

 
 

OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems — from openai.com

  • Strategic partnership enables OpenAI to build and deploy at least 10 gigawatts of AI datacenters with NVIDIA systems representing millions of GPUs for OpenAI’s next-generation AI infrastructure.
  • To support the partnership, NVIDIA intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.
  • The first gigawatt of NVIDIA systems will be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform.

Also on Nvidia’s site here.

The Neuron Daily comments on this partnership here; also see their thoughts here:

Why this matters: The partnership kicks off in the second half of 2026 with NVIDIA’s new Vera Rubin platform. OpenAI will use this massive compute power to train models beyond what we’ve seen with GPT-5 and likely also power what’s called inference (when you ask ChatGPT a question and it gives you an answer). And NVIDIA gets a guaranteed customer for their most advanced chips. Infinite money glitch go brrr am I right? Though to be fair, this kinda deal is as old as the AI industry itself.

This isn’t just about bigger models, mind you: it’s about infrastructure for what both companies see as the future economy. As Sam Altman put it, “Compute infrastructure will be the basis for the economy of the future.”

Our take: We think this news is actually super interesting when you pair it with the other big headline from today: Commonwealth Fusion Systems signed a commercial deal worth more than $1B with Italian energy company Eni to purchase fusion power from their 400 MW ARC plant in Virginia. Here’s what that means for AI…

…and while you’re on that posting from The Neuron Daily, also see this piece:

AI filmmaker Dinda Prasetyo just released “Skyland,” a fantasy short film about a guy named Aeryn and his “loyal flying fish”, and honestly, the action sequences look like they belong in an actual film…

What’s wild is that Dinda used a cocktail of AI tools (Adobe Firefly, Midjourney, the newly launched Luma Ray 3, and ElevenLabs) to create something that would’ve required a full production crew just two years ago.


The Era of Prompts Is Over. Here’s What Comes Next. — from builtin.com by Ankush Rastogi
If you’re still prompting your AI, you’re behind the curve. Here’s how to prepare for the coming wave of AI agents.

Summary: Autonomous AI agents are emerging as systems that handle goals, break down tasks and integrate with tools without constant prompting. Early uses include call centers, healthcare, fraud detection and research, but concerns remain over errors, compliance risks and unchecked decisions.

The next shift is already peeking around the corner, and it’s going to make prompts look primitive. Before long, we won’t be typing carefully crafted requests at all. We’ll be leaning on autonomous AI agents, systems that don’t just spit out answers but actually chase goals, make choices and do the boring middle steps without us guiding them. And honestly, this jump might end up dwarfing the so-called “prompt revolution.”
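To make that concrete, here is a deliberately minimal sketch of the pattern being described: a goal gets decomposed into steps, and each step is routed to a tool without the user prompting every action. The planner and the two tools are stand-ins invented for illustration, not any vendor's agent framework.

```python
# Minimal sketch of the goal -> plan -> tool-call loop behind "agentic" systems.
# plan() and the tool functions are stand-ins; a real agent would delegate
# planning to an LLM and add error handling, memory, and human checkpoints.
from typing import Callable

def search_flights(query: str) -> str:
    return f"3 flights found for '{query}'"

def book_flight(flight_id: str) -> str:
    return f"booked {flight_id}"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "book_flight": book_flight,
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in planner: a real agent would ask an LLM to decompose the goal."""
    return [("search_flights", goal), ("book_flight", "FL-123")]

def run_agent(goal: str) -> None:
    for tool_name, arg in plan(goal):
        result = TOOLS[tool_name](arg)              # act without per-step prompting
        print(f"{tool_name}({arg!r}) -> {result}")  # observe; feed into the next step

run_agent("round trip SFO to JFK next Friday")
```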


Chrome: The browser you love, reimagined with AI — from blog.google by Parisa Tabriz

A new way to get things done with your AI browsing assistant
Imagine you’re a student researching a topic for a paper, and you have dozens of tabs open. Instead of spending hours jumping between sources and trying to connect the dots, your new AI browsing assistant — Gemini in Chrome — can do it for you. Gemini can answer questions about articles, find references within YouTube videos, and will soon be able to help you find pages you’ve visited so you can pick up exactly where you left off.

Rolling out to Mac and Windows users in the U.S. with their language set to English, Gemini in Chrome can understand the context of what you’re doing across multiple tabs, answer questions and integrate with other popular Google services, like Google Docs and Calendar. And it’ll be available on both Android and iOS soon, letting you ask questions and summarize pages while you’re on the go.

We’re also developing more advanced agentic capabilities for Gemini in Chrome that can perform multi-step tasks for you from start to finish, like ordering groceries. You’ll remain in control as Chrome handles the tedious work, turning 30-minute chores into 3-click user journeys.


 

Digital Accessibility with Amy Lomellini — from intentionalteaching.buzzsprout.com by Derek Bruff

In this episode, we explore why digital accessibility can be so important to the student experience. My guest is Amy Lomellini, director of accessibility at Anthology, the company that makes the learning management system Blackboard. Amy teaches educational technology as an adjunct at Boise State University, and she facilitates courses on digital accessibility for the Online Learning Consortium. In our conversation, we talk about the importance of digital accessibility to students, moving away from the traditional disclosure-accommodation paradigm, AI as an assistive technology, and lots more.

 

These 40 Jobs May Be Replaced by AI. These 40 Probably Won’t — from inc.com by Bruce Crumley
A new Microsoft report ranks 80 professions by their risk of being replaced by AI tools.

A new study measuring the use of generative artificial intelligence in different professions has just gone public, and its main message to people working in some fields is harsh. It suggests translators, historians, text writers, sales representatives, and customer service agents might want to consider new careers as pile driver or dredge operators, railroad track layers, hardwood floor sanders, or maids — if, that is, they want to lower the threat of AI apps pushing them out of their current jobs.

From DSC:
Unfortunately, this is where the hyperscalers are going to get their ROI on all of the capital expenditures they are making. Companies are going to use their services to reduce headcount at their organizations. CEOs are even beginning to brag about the savings they claim to realize from AI-based technologies:

“As a CEO myself, I can tell you, I’m extremely excited about it. I’ve laid off employees myself because of AI. AI doesn’t go on strike. It doesn’t ask for a pay raise. These things that you don’t have to deal with as a CEO.”

My first position out of college was being a Customer Service Representative at Baxter Healthcare. It was my most impactful job, as it taught me the value of a customer. From then on, whoever I was trying to assist was my customer — whether they were internal or external to the organization that I was working for. Those kinds of jobs are so important. If they evaporate, what then? How will young people/graduates get their start? 

Also related/see:


Microsoft’s Edge Over the Web, OpenAI Goes Back to School, and Google Goes Deep — from thesignal.substack.com by Alex Banks

Alex’s take: We’re seeing browsers fundamentally transition from search engines → answer engines → action engines. Gone are the days of having to trawl through pages of search results. Commands are the future. They are the direct input to arrive at the outcomes we sought in the first place, such as booking a hotel or ordering food. I’m interested in watching Microsoft’s bet develop as browsers become collaborative (and proactive) assistants.


Everyone’s an (AI) TV showrunner now… — from theneurondaily.com by Grant Harvey

Amazon just invested in an AI that can create full TV episodes—and it wants you to star in them.

Remember when everyone lost their minds over AI generating a few seconds of video? Well, Amazon just invested in a company called Fable Studio, whose Showrunner system can generate entire 22-minute TV episodes.

Where does this go from here? Imagine asking AI to rewrite the ending of Game of Thrones, or creating a sitcom where you and your friends are the main characters. This type of tech could create personalized entertainment experiences just like that.

Our take: Without question, we’re moving toward a world where every piece of media can be customized to you personally. Your Netflix could soon generate episodes where you’re the protagonist, with storylines tailored to your interests and sense of humor.

And if this technology scales, the entire entertainment industry could flip upside down. The pitch goes: why watch someone else’s story when you can generate your own? 


The End of Work as We Know It — from gizmodo.com by Luc Olinga
CEOs call it a revolution in efficiency. The workers powering it call it a “new era in forced labor.” I spoke to the people on the front lines of the AI takeover.

Yet, even in this vision of a more pleasant workplace, the specter of displacement looms large. Miscovich acknowledges that companies are planning for a future where headcount could be “reduced by 40%.” And Clark is even more direct. “A lot of CEOs are saying that, knowing that they’re going to come up in the next six months to a year and start laying people off,” he says. “They’re looking for ways to save money at every single company that exists.”

But we do not have much time. As Clark told me bluntly: “I am hired by CEOs to figure out how to use AI to cut jobs. Not in ten years. Right now.”


AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’ — from wsj.com by Chip Cutter; behind a paywall
If AI can analyze information, crunch data and deliver a slick PowerPoint deck within seconds, how does the biggest name in consulting stay relevant?


ChatGPT users shocked to learn their chats were in Google search results — from arstechnica.com by Ashley Belanger
OpenAI scrambles to remove personal ChatGPT conversations from Google results

Faced with mounting backlash, OpenAI removed a controversial ChatGPT feature that caused some users to unintentionally allow their private—and highly personal—chats to appear in search results.

Fast Company exposed the privacy issue on Wednesday, reporting that thousands of ChatGPT conversations were found in Google search results and likely only represented a sample of chats “visible to millions.” While the indexing did not include identifying information about the ChatGPT users, some of their chats did share personal details—like highly specific descriptions of interpersonal relationships with friends and family members—perhaps making it possible to identify them, Fast Company found.


Character.AI Launches World’s First AI-Native Social Feed — from blog.character.ai

Today, we’re dropping the world’s first AI-native social feed.

Feed from Character.AI is a dynamic, scrollable content platform that connects users with the latest Characters, Scenes, Streams, and creator-driven videos in one place.

This is a milestone in the evolution of online entertainment.

For the last 10 years, social platforms have been all about passive consumption. The Character.AI Feed breaks that paradigm and turns content into a creative playground. Every post is an invitation to interact, remix, and build on what others have made. Want to rewrite a storyline? Make yourself the main character? Take a Character you just met in someone else’s Scene and pop it into a roast battle or a debate? Now it’s easy. Every story can have a billion endings, and every piece of content can change and evolve with one tap.

 

Digital Accessibility in 2025: A Screen Reader User’s Honest Take — from blog.usablenet.com by Michael Taylor

In this post, part of the UsableNet 25th anniversary series, I’m taking a look at where things stand in 2025. I’ll discuss the areas that have improved—such as online shopping, banking, and social media—and the ones that still make it challenging to perform basic tasks, including travel, healthcare, and mobile apps. I hope that by sharing what works and what doesn’t, I can help paint a clearer picture of the digital world as it stands today.


Why EAA Compliance and Legal Trends Are Shaping Accessibility in 2025 — from blog.usablenet.com by Jason Taylor

On June 28, 2025, the European Accessibility Act (EAA) officially became enforceable across the European Union. This law requires digital products and services—including websites, mobile apps, e-commerce platforms, and software—to meet the accessibility standards defined in EN 301 549, which aligns with WCAG 2.1 Level AA.

Companies that serve EU consumers must be able to demonstrate that accessibility is built into the design, development, testing, and maintenance of their digital products and services.

This milestone also arrives as UsableNet celebrates 25 years of accessibility leadership—a moment to reflect on how far we’ve come and what digital teams must do next.

 

DC: I’m not necessarily recommending this, but the next two items point out how the use of agents continues to move forward:

The Future is Here: Visa Announces New Era of Commerce Featuring AI

  • Global leader brings its trusted brand and powerful network to enable payments with new technologies
  • Launches new innovations and partnerships to drive flexibility, security and acceptance

SAN FRANCISCO–(BUSINESS WIRE)–The future of commerce is on display at the Visa Global Product Drop with powerful AI-enabled advancements allowing consumers to find and buy with AI plus the introduction of new strategic partnerships and product innovations.

Also related/see:

Find and Buy with AI: Visa Unveils New Era of Commerce — from businesswire.com

  • Collaborates with Anthropic, IBM, Microsoft, Mistral AI, OpenAI, Perplexity, Samsung, Stripe and more
  • Will make shopping experiences more personal, more secure and more convenient as they become powered by AI

Introduced [on April 30th] at the Visa Global Product Drop, Visa Intelligent Commerce enables AI to find and buy. It is a groundbreaking new initiative that opens Visa’s payment network to the developers and engineers building the foundational AI agents transforming commerce.


AI agents are the new buyers. How can you market to them? — from aiwithallie.beehiiv.com by Allie Miller
You’re optimizing for people. But the next buyers are bots.

In today’s newsletter, I’m unpacking why your next major buyers won’t be people at all. They’ll be AI agents, and your brand might already be invisible to them. We’ll dig into why traditional marketing strategies are breaking down in the age of autonomous AI shoppers, what “AI optimization” (AIO) really means, and the practical steps you can take right now to make sure your business stays visible and competitive as the new digital gatekeepers take over more digital tasks.

AI platforms and AI agents—the digital assistants that browse and actually do things powered by models like GPT-4o, Claude 3.7 Sonnet, and Gemini 2.5 Pro—are increasingly becoming the gatekeepers between your business and potential customers.

“AI is the new front door to your business for millions of consumers.”

The 40-Point (ish) AI Agent Marketing Playbook 
Here’s the longer list. I went ahead and broke these into four categories so you can more easily assign owners: Content, Structure & Design, Technical & Dev, and AI Strategy & Testing. I look forward to seeing how this space, and by extension my advice, changes in the coming months.
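One common tactic from the Technical & Dev bucket is publishing machine-readable product data that an agent can parse without scraping your page layout. The snippet below emits schema.org Product markup as JSON-LD from Python; the product fields are invented placeholders, and this is a generic illustration rather than an item quoted from the playbook.

```python
# Emit schema.org Product markup as JSON-LD so an AI shopping agent can read
# price and availability directly. The product data here is a placeholder.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Trail Shoe",
    "sku": "EX-TRAIL-42",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Drop this inside a <script type="application/ld+json"> tag in the page template.
print(json.dumps(product, indent=2))
```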


Microsoft CEO says up to 30% of the company’s code was written by AI — from techcrunch.com by Maxwell Zeff

During a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference on Tuesday, Microsoft CEO Satya Nadella said that 20% to 30% of code inside the company’s repositories was “written by software” — meaning AI.


The Top 100 Gen AI Consumer Apps — from a16z.com

In just six months, the consumer AI landscape has been redrawn. Some products surged, others stalled, and a few unexpected players rewrote the leaderboard overnight. Deepseek rocketed from obscurity to a leading ChatGPT challenger. AI video models advanced from experimental to fairly dependable (at least for short clips!). And so-called “vibe coding” is changing who can create with AI, not just who can use it. The competition is tighter, the stakes are higher, and the winners aren’t just launching, they’re sticking.

We turned to the data to answer: Which AI apps are people actively using? What’s actually making money, beyond being popular? And which tools are moving beyond curiosity-driven dabbling to become daily staples?

This is the fourth installment of the Top 100 Gen AI Consumer Apps, our bi-annual ranking of the top 50 AI-first web products (by unique monthly visits, per Similarweb) and top 50 AI-first mobile apps (by monthly active users, per Sensor Tower). Since our last report in August 2024, 17 new companies have entered the rankings of top AI-first web products.


Deep Research with AI: 9 Ways to Get Started — from wondertools.substack.com by Jeremy Caplan
Practical strategies for thorough, citation-rich AI research

The AI search landscape is transforming at breakneck speed. New “Deep Research” tools from ChatGPT, Gemini and Perplexity autonomously search and gather information from dozens — even hundreds — of sites, then analyze and synthesize it to produce comprehensive reports. While a human might take days or weeks to produce these 30-page citation-backed reports, AI Deep Research reports are ready in minutes.

What’s in this post

    • Examples of each report type I generated for my research, so you can form your own impressions.
    • Tips on why & how to use Deep Research and how to craft effective queries.
    • Comparison of key features and strengths/limitations of the top platforms

AI Agents Are Here—So Are the Threats: Unit 42 Unveils the Top 10 AI Agent Security Risks — from marktechpost.com

As AI agents transition from experimental systems to production-scale applications, their growing autonomy introduces novel security challenges. In a comprehensive new report, “AI Agents Are Here. So Are the Threats,” Palo Alto Networks’ Unit 42 reveals how today’s agentic architectures—despite their innovation—are vulnerable to a wide range of attacks, most of which stem not from the frameworks themselves, but from the way agents are designed, deployed, and connected to external tools.

To evaluate the breadth of these risks, Unit 42 researchers constructed two functionally identical AI agents—one built using CrewAI and the other with AutoGen. Despite architectural differences, both systems exhibited the same vulnerabilities, confirming that the underlying issues are not framework-specific. Instead, the threats arise from misconfigurations, insecure prompt design, and insufficiently hardened tool integrations—issues that transcend implementation choices.
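That finding, that the weak points are tool wiring rather than the frameworks, can be illustrated with a small guardrail pattern: validate and allow-list agent-supplied arguments before any tool with side effects runs them. The code below is a generic sketch, not taken from the Unit 42 report or from CrewAI or AutoGen.

```python
# Generic guardrail sketch: validate agent-supplied arguments before a tool
# with side effects executes them. Not from the Unit 42 report or any framework.
import re
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "head"}          # explicit allow-list
SAFE_PATH = re.compile(r"^[\w./-]+$")             # reject shell metacharacters

def run_shell_tool(command: str, path: str) -> str:
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command}")
    if not SAFE_PATH.match(path) or path.startswith("/etc"):
        raise ValueError(f"path rejected: {path}")
    # Argument list + shell=False (the default) avoids injection via string concatenation.
    result = subprocess.run([command, path], capture_output=True, text=True, timeout=10)
    return result.stdout

print(run_shell_tool("ls", "."))      # allowed
# run_shell_tool("rm", "-rf /")       # raises ValueError before anything runs
```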


LLMs Can Learn Complex Math from Just One Example: Researchers from University of Washington, Microsoft, and USC Unlock the Power of 1-Shot Reinforcement Learning with Verifiable Reward — from marktechpost.com by Sana Hassan


 

 

Is collaboration the key to digital accessibility? — from timeshighereducation.com by Sal Jarvis and George Rhodes
Digital accessibility is ethically important, and a legal requirement, but it’s also a lot of work. Here’s how universities can collaborate and pool their expertise to make higher education accessible for all

How easy do you find it to navigate your way around your university’s virtual estate – its websites, virtual learning environment and other digital aspects? If the answer is “not very”, we suspect you may not be alone. And for those of us who might access it differently – without a mouse, for example, or through a screen reader or keyboard emulator – the challenge is multiplied. Digital accessibility is the wide-ranging work to make these challenges a thing of the past for everyone. It is a legal requirement and a moral imperative.

Make Things Accessible is the outcome of a collaboration, initially between the University of Westminster and UCL, but now incorporating many other universities. It is a community of practice, a website and an archive of resources. It aims to make things accessible for all.

 

1-800-CHAT-GPT—12 Days of OpenAI: Day 10

Per The Rundown: OpenAI just launched a surprising new way to access ChatGPT — through an old-school 1-800 number & also rolled out a new WhatsApp integration for global users during Day 10 of the company’s livestream event.


How Agentic AI is Revolutionizing Customer Service — from customerthink.com by Devashish Mamgain

Agentic AI represents a significant evolution in artificial intelligence, offering enhanced autonomy and decision-making capabilities beyond traditional AI systems. Unlike conventional AI, which requires human instructions, agentic AI can independently perform complex tasks, adapt to changing environments, and pursue goals with minimal human intervention.

This makes it a powerful tool across various industries, especially in the customer service function. To understand it better, let’s compare AI Agents with non-AI agents.

Characteristics of Agentic AI

    • Autonomy: Achieves complex objectives without requiring human collaboration.
    • Language Comprehension: Understands nuanced human speech and text effectively.
    • Rationality: Makes informed, contextual decisions using advanced reasoning engines.
    • Adaptation: Adjusts plans and goals in dynamic situations.
    • Workflow Optimization: Streamlines and organizes business workflows with minimal oversight.

Clio: A system for privacy-preserving insights into real-world AI use — from anthropic.com

How, then, can we research and observe how our systems are used while rigorously maintaining user privacy?

Claude insights and observations, or “Clio,” is our attempt to answer this question. Clio is an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai in a way that’s analogous to tools like Google Trends. It’s also already helping us improve our safety measures. In this post—which accompanies a full research paper—we describe Clio and some of its initial results.
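Anthropic’s post describes Clio only at a high level, so the sketch below is not their pipeline; it simply illustrates the general privacy-preserving pattern of reporting aggregate topic counts above a minimum cluster size rather than anything about individual conversations.

```python
# Illustration of aggregate-only reporting with a minimum cluster size,
# not Anthropic's actual Clio implementation.
from collections import Counter

MIN_CLUSTER_SIZE = 25   # suppress any topic with too few conversations

def aggregate_topics(conversation_topics: list[str]) -> dict[str, int]:
    """Return topic counts, dropping clusters small enough to identify users."""
    counts = Counter(conversation_topics)
    return {topic: n for topic, n in counts.items() if n >= MIN_CLUSTER_SIZE}

# Topics would come from an upstream step that strips identifying details;
# this list is synthetic.
topics = ["debugging code"] * 40 + ["trip planning"] * 30 + ["rare personal issue"] * 3
print(aggregate_topics(topics))   # the 3-conversation cluster is withheld
```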


Evolving tools redefine AI video — from heatherbcooper.substack.com by Heather Cooper
Google’s Veo 2, Kling 1.6, Pika 2.0 & more

AI video continues to surpass expectations
The AI video generation space has evolved dramatically in recent weeks, with several major players introducing groundbreaking tools.

Here’s a comprehensive look at the current landscape:

  • Veo 2…
  • Pika 2.0…
  • Runway’s Gen-3…
  • Luma AI Dream Machine…
  • Hailuo’s MiniMax…
  • OpenAI’s Sora…
  • Hunyuan Video by Tencent…

There are several other video models and platforms, including …

 

Three items re: accessibility from boia.org


How Important Are Fonts for Digital Accessibility?

With that said, simple sans-serif fonts are generally easier to read and understand. That includes popular fonts like:

    • Arial
    • Tahoma
    • Helvetica
    • Calibri
    • Verdana

If you decide to use serif fonts, use them sparingly. For most body text, you should use a sans serif font with appropriate spacing and weight.

Follow these tips:


Why Web Accessibility Frustrates Developers (And How to Fix It) 

When developers view accessibility as an integral part of their work, the process of building inclusive websites becomes less of a chore and more of a rewarding challenge. By embracing tools like semantic HTML and incorporating user feedback from people with disabilities, developers can create solutions that enhance real user experiences while conforming with WCAG.

Starting with accessibility in mind from day one streamlines workflows, reduces the need for extensive remediation later on, and ultimately leads to more robust and inclusive digital products. To learn more, download our free eBook: Developing the Accessibility Mindset.


How to Respond to an ADA Web Accessibility Demand Letter

An excerpt from the “Learn the basics of digital accessibility” section:

We realize that we just threw a bunch of information at you — but we promise, the principles of WCAG aren’t too complicated. Here are some resources to help you learn the basics:

As you learn about digital accessibility, you’ll feel more comfortable reviewing your own content for potential barriers. The W3C’s Understanding WCAG 2.2 documents are an extremely useful resource for learning about specific barriers (and techniques for fixing them).

 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 

How can schools prepare for ADA digital accessibility requirements? — from k12dive.com by Kara Arundel
A new U.S. Department of Justice rule aims to ensure that state and local government web content and mobile apps are accessible for people with disabilities.

A newly issued federal rule to ensure web content and mobile apps are accessible for people with disabilities will require public K-12 and higher education institutions to do a thorough inventory of their digital materials to make sure they are in compliance, accessibility experts said.

The update to regulations for Title II of the Americans with Disabilities Act, published April 24 by the U.S. Department of Justice, calls for all state and local governments to verify that their web content — including mobile apps and social media postings — is accessible for those with vision, hearing, cognitive and manual dexterity disabilities.

 

Anthropic Introduces Claude 3.5 Sonnet — from anthropic.com


What’s new? 
  • Frontier intelligence
    Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions and is exceptional at writing high-quality content with a natural, relatable tone.
  • 2x speed
  • State-of-the-art vision
  • Introducing Artifacts—a new way to use Claude
    We’re also introducing Artifacts on claude.ai, a new feature that expands how you can interact with Claude. When you ask Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside your conversation. This creates a dynamic workspace where you can see, edit, and build upon Claude’s creations in real-time, seamlessly integrating AI-generated content into your projects and workflows.

Train Students on AI with Claude 3.5 — from automatedteach.com by Graham Clay
I show how and compare it to GPT-4o.

  • If you teach computer science, user interface design, or anything involving web development, you can have students prompt Claude to produce web pages’ source code, see this code produced on the right side, preview it after it has compiled, and iterate through code+preview combinations.
  • If you teach economics, financial analysis, or accounting, you can have students prompt Claude to create analyses of markets or businesses, including interactive infographics, charts, or reports via React. Since it shows its work with Artifacts, your students can see how different prompts result in different statistical analyses, different representations of this information, and more.
  • If you teach subjects that produce purely textual outputs without a code intermediary, like philosophy, creative writing, or journalism, your students can compare prompting techniques, easily review their work, note common issues, and iterate drafts by comparing versions.

I see this as the first serious step towards improving the otherwise terrible user interfaces of LLMs for broad use. It may turn out to be a small change in the grand scheme of things, but it sure feels like a big improvement — especially in the pedagogical context.
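Instructors who want to run these prompt comparisons as a batch rather than one chat at a time could script them against the Anthropic Messages API. The sketch below assumes the anthropic Python package and an ANTHROPIC_API_KEY environment variable; the model string and the two prompts are placeholders to adapt.

```python
# Minimal sketch: send the same task with two prompting strategies and save the
# outputs for side-by-side review in class. Assumes the `anthropic` package is
# installed and ANTHROPIC_API_KEY is set; model name and prompts are placeholders.
import anthropic

client = anthropic.Anthropic()

PROMPTS = {
    "bare": "Write a webpage that plots US inflation since 2000.",
    "structured": (
        "You are a front-end tutor. Write a single-file webpage that plots US "
        "inflation since 2000. Comment the code and explain your chart choices."
    ),
}

for label, prompt in PROMPTS.items():
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",   # placeholder; use a current model id
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    with open(f"claude_output_{label}.txt", "w") as f:
        f.write(message.content[0].text)      # save for in-class comparison
```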


And speaking of training students on AI, also see:

AI Literacy Needs to Include Preparing Students for an Unknown World — from stefanbauschard.substack.com by Stefan Bauschard
Preparing students for it is easier than educators think

Schools could enhance their curricula by incorporating debate, Model UN and mock government programs, business plan competitions, internships and apprenticeships, interdisciplinary and project-based learning initiatives, makerspaces and innovation labs, community service-learning projects, student-run businesses or non-profits, interdisciplinary problem-solving challenges, public speaking and presentation skills courses, and design thinking workshops.

These programs foster essential skills such as recognizing and addressing complex challenges, collaboration, sound judgment, and decision-making. They also enhance students’ ability to communicate with clarity and precision, while nurturing creativity and critical thinking. By providing hands-on, real-world experiences, these initiatives bridge the gap between theoretical knowledge and practical application, preparing students more effectively for the multifaceted challenges they will face in their future academic and professional lives.

 



Addendum on 6/28/24:

Collaborate with Claude on Projects — from anthropic.com

Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. As a step in this direction, Claude.ai Pro and Team users can now organize their chats into Projects, bringing together curated sets of knowledge and chat activity in one place—with the ability to make their best chats with Claude viewable by teammates. With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results.

Projects are available on Claude.ai for all Pro and Team customers, and can be powered by Claude 3.5 Sonnet, our latest release which outperforms its peers on a wide variety of benchmarks. Each project includes a 200K context window, the equivalent of a 500-page book, so users can add all of the relevant documents, code, and insights to enhance Claude’s effectiveness.

 

Video, Images and Sounds – Good Tools #14 — from goodtools.substack.com by Robin Good

Specifically in this issue:

  • Free Image Libraries
  • Image Search Engines
  • Free Illustrations
  • Free Icons
  • Free Stock Video Footage
  • Free Music for Video and Podcasts
 

New models and developer products announced at DevDay — from openai.com
GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.

Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
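For a sense of how the 128K window gets used in practice, here is a minimal sketch with the OpenAI Python SDK (v1.x) that passes a long document in a single request. It assumes an OPENAI_API_KEY environment variable; the file path is a placeholder, and the model shipped at DevDay was the gpt-4-1106-preview snapshot (the gpt-4-turbo name came later).

```python
# Minimal sketch of passing a long document into GPT-4 Turbo's 128K context
# window using the openai Python SDK (v1.x). Assumes OPENAI_API_KEY is set;
# the file path is a placeholder.
from openai import OpenAI

client = OpenAI()

with open("committee_minutes_2023.txt") as f:   # placeholder long document
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",   # the GPT-4 Turbo preview announced at DevDay
    messages=[
        {"role": "system", "content": "Summarize documents in five bullet points."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)
```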


Introducing GPTs — from openai.com
You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.




OpenAI’s New Groundbreaking Update — from newsletter.thedailybite.co
Everything you need to know about OpenAI’s update, what people are building, and a prompt to skim long YouTube videos…

But among all this exciting news, the announcement of user-created “GPTs” took the cake.

That’s right, your very own personalized version of ChatGPT is coming, and it’s as groundbreaking as it sounds.

OpenAI’s groundbreaking announcement isn’t just a new feature – it’s a personal AI revolution. 

The upcoming customizable “GPTs” transform ChatGPT from a one-size-fits-all to a one-of-a-kind digital sidekick that is attuned to your life’s rhythm. 


Lore Issue #56: Biggest Week in AI This Year — from news.lore.com by Nathan Lands

First, Elon Musk announced “Grok,” a ChatGPT competitor inspired by “The Hitchhiker’s Guide to the Galaxy.” Surprisingly, in just a few months, xAI has managed to surpass the capabilities of GPT-3.5, signaling their impressive speed of execution and establishing them as a formidable long-term contender.

Then, OpenAI hosted their inaugural Dev Day, unveiling “GPT-4 Turbo,” which boasts a 128k context window, API costs slashed by threefold, text-to-speech capabilities, auto-model switching, agents, and even their version of an app store slated for launch next month.


The Day That Changed Everything — from joinsuperhuman.ai by Zain Kahn
ALSO: Everything you need to know about yesterday’s OpenAI announcements

  • OpenAI DevDay Part I: Custom ChatGPTs and the App Store of AI
  • OpenAI DevDay Part II: GPT-4 Turbo, Assistants, APIs, and more

OpenAI’s Big Reveal: Custom GPTs, GPT Store & More — from  news.theaiexchange.com
What you should know about the new announcements; how to get started with building custom GPTs


Incredible pace of OpenAI — from theaivalley.com by Barsee
PLUS: Elon’s Grok


 

 
© 2025 | Daniel Christian