AI agents: Where are they now? From proof of concept to success stories — from hrexecutive.com by Jill Barth
The 4 Rs framework
Salesforce has developed what Holt Ware calls the “4 Rs for AI agent success.” They are:
- Redesign by combining AI and human capabilities. This requires treating agents like new hires that need proper onboarding and management.
- Reskill for future skills. “We think we know what they are,” Holt Ware notes, “but they will continue to change.”
- Redeploy highly skilled people to determine how roles will change. When Salesforce launched an AI coding assistant, Holt Ware recalls, “We woke up the next day and said, ‘What do we do with these people now that they have more capacity?’ ” Their answer was to create an entirely new role: Forward-Deployed Engineers. This role has since played a growing part in driving customer success.
- Rebalance workforce planning. Holt Ware references a CHRO who “famously said that this will be the last year we ever do workforce planning and it’s only people; next year, every team will be supplemented with agents.”
Synthetic Reality Unleashed: AI’s Powerful Impact on the Future of Journalism — from techgenyz.com by Sreyashi Bhattacharya
Table of Contents
- Highlights
- What is “synthetic news”?
- Examples in action
- Why are newsrooms experimenting with synthetic tools?
- Challenges and Risks
- What does the research say?
- Transparency seems to matter
- What is next: trends & future
- Conclusion
The latest video generation tool from OpenAI → Sora 2
Sora 2 is here — from openai.com
Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app.
And a video on this is out on YouTube:
Per The Rundown AI:
The Rundown: OpenAI just released Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new social app where users can create, remix, and insert themselves into AI videos through a “Cameos” feature.
…
Why it matters: Model-wise, Sora 2 looks incredible — pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.
OpenAI Just Dropped Sora 2 (And a Whole New Social App) — from theneuron.ai by Grant Harvey
OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today’s addictive scroll machines.
What Sora 2 can do
- Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.
- Follow intricate multi-shot instructions while maintaining world state across scenes.
- Create realistic background soundscapes, dialogue, and sound effects automatically.
- Insert YOU into any video after a quick one-time recording (they call this “cameos”).
The best video to show what it can do is probably this one, from OpenAI researcher Gabriel Peters, which shows the behind-the-scenes of Sora 2’s launch day…
Sora 2: AI Video Goes Social — from getsuperintel.com by Kim “Chubby” Isenberg
OpenAI’s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips
Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.
Also along the lines of creating digital video, see:
What used to take hours in After Effects now takes just one text prompt. Tools like Google’s Nano Banana, Seedream 4, Runway’s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.
The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.
For creators, this means the skill ceiling is no longer defined by technical know-how; it’s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.
Bilawal Sidhu
OpenAI DevDay 2025: everything you need to know — from getsuperintel.com by Kim “Chubby” Isenberg
Apps Inside ChatGPT, a New Era Unfolds
Something big shifted this week. OpenAI just turned ChatGPT into a platform – not just a product. With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between “using AI” and “building with AI” is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn’t what AI can do anymore – it’s what you’ll make it do.