{"id":96929,"date":"2025-10-06T15:34:49","date_gmt":"2025-10-06T19:34:49","guid":{"rendered":"https:\/\/danielschristian.com\/learning-ecosystems\/?p=96929"},"modified":"2025-10-06T16:06:50","modified_gmt":"2025-10-06T20:06:50","slug":"sora-2-and-other-video-generation-models-introduce-new-storytelling-capabilities-other-items-re-ai-in-general","status":"publish","type":"post","link":"https:\/\/danielschristian.com\/learning-ecosystems\/2025\/10\/06\/sora-2-and-other-video-generation-models-introduce-new-storytelling-capabilities-other-items-re-ai-in-general\/","title":{"rendered":"Sora 2 and other video generation models introduce new storytelling capabilities + other items re: AI in general"},"content":{"rendered":"<p><a href=\"https:\/\/hrexecutive.com\/ai-agents-where-are-they-now-from-proof-of-concept-to-success-stories\/\" target=\"_blank\" rel=\"noopener\"><strong>AI agents: Where are they now? From proof of concept to success stories<\/strong><\/a>\u00a0&#8212; from hrexecutive.com by Jill Barth<\/p>\n<p><strong>The 4 Rs framework<br \/>\n<\/strong>Salesforce has developed what Holt Ware calls the \u201c4 Rs for AI agent success.\u201d They are:<\/p>\n<ol>\n<li><strong>Redesign by combining AI and human capabilities.<\/strong>\u00a0This requires treating agents like new hires that need proper onboarding and management.<\/li>\n<li><strong>Reskilling should focus on learning future skills.<\/strong>\u00a0\u201cWe think we know what they are,\u201d Holt Ware notes, \u201cbut they will continue to change.\u201d<\/li>\n<li><strong>Redeploy highly skilled people to determine how roles will change.\u00a0<\/strong>When Salesforce launched an AI coding assistant, Holt Ware recalls, \u201cWe woke up the next day and said, \u2018What do we do with these people now that they have more capacity?\u2019 \u201d Their answer was to create an entirely new role: Forward-Deployed Engineers. 
This role has since played a growing part in driving customer success.<\/li>\n<li><strong>Rebalance workforce planning.<\/strong>\u00a0Holt Ware references a CHRO who \u201cfamously said that this will be the last year we ever do workforce planning and it\u2019s only people; next year, every team will be supplemented with agents.\u201d<\/li>\n<\/ol>\n<hr \/>\n<p><a href=\"https:\/\/techgenyz.com\/synthetic-reality-unleashed-ais-powerful-impac\/\" target=\"_blank\" rel=\"noopener\"><strong>Synthetic Reality Unleashed: AI\u2019s powerful Impact on the Future of Journalism<\/strong><\/a> &#8212; from techgenyz.com by Sreyashi Bhattacharya<\/p>\n<p><strong>Table of Contents <\/strong><\/p>\n<ul>\n<li>Highlights<\/li>\n<li>What is \u201csynthetic news\u201d?<\/li>\n<li>Examples in action<\/li>\n<li>Why are newsrooms experimenting with synthetic tools?<\/li>\n<li>Challenges and Risks<\/li>\n<li>What does the research say?\n<ul>\n<li>Transparency seems to matter<\/li>\n<\/ul>\n<\/li>\n<li>What is next: trends &amp; future<\/li>\n<li>Conclusion<\/li>\n<\/ul>\n<hr \/>\n<p><span style=\"color: #800000;\"><strong>The latest video generation tool from OpenAI &#8211;&gt; Sora 2<\/strong><\/span><\/p>\n<p><a href=\"https:\/\/openai.com\/index\/sora-2\/\" target=\"_blank\" rel=\"noopener\"><strong>Sora 2 is here<\/strong><\/a>\u00a0&#8212; from openai.com<\/p>\n<p style=\"padding-left: 40px;\">Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. 
Create with it in the new Sora app.<\/p>\n<p><em><span style=\"color: #800000;\">And a video on this out at YouTube:<\/span><\/em><\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/HLwCZo1pBkQ?si=9SY91kM3OdWiBVvL\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p><em>Per <a href=\"https:\/\/www.therundown.ai\/p\/sora-2-breaks-the-internet\" target=\"_blank\" rel=\"noopener\">The Rundown AI:<\/a><\/em><\/p>\n<p style=\"padding-left: 40px;\"><b>The Rundown:\u00a0<\/b>OpenAI just\u00a0<a href=\"https:\/\/link.mail.beehiiv.com\/ss\/c\/u001.eCbm_1zon7G0lMoXTECWa-IUY9yqSc2cx0km5OJXo-MDtgBx6p3NfcalZw1ZOR_IY_bl8sy22xz3rxP0YS-hZoCr8iONX4-A5KJYGTkWep6aVMX-JYK-9qtsJBJs6zNbUoDvHGzCVjgFSySq3aGi8vOsZZZxFG9UBYz6RbGv-MlEdG1Isd6cHNbbuYfrVxvvLlahWXMgYwGE-zitE2lUfobMt0FZ7PxQNDjh_J3owwXUvFjozg-O5rJ92LGBD7bH\/4kd\/6hRfkS28RDmpZT2ojsKadg\/h6\/h001.5CRAQNEW-BV9WdtS-9Q0JYI5H0QFzH1gshb4o7y1NR0\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/link.mail.beehiiv.com\/ss\/c\/u001.eCbm_1zon7G0lMoXTECWa-IUY9yqSc2cx0km5OJXo-MDtgBx6p3NfcalZw1ZOR_IY_bl8sy22xz3rxP0YS-hZoCr8iONX4-A5KJYGTkWep6aVMX-JYK-9qtsJBJs6zNbUoDvHGzCVjgFSySq3aGi8vOsZZZxFG9UBYz6RbGv-MlEdG1Isd6cHNbbuYfrVxvvLlahWXMgYwGE-zitE2lUfobMt0FZ7PxQNDjh_J3owwXUvFjozg-O5rJ92LGBD7bH\/4kd\/6hRfkS28RDmpZT2ojsKadg\/h6\/h001.5CRAQNEW-BV9WdtS-9Q0JYI5H0QFzH1gshb4o7y1NR0&amp;source=gmail&amp;ust=1759863636749000&amp;usg=AOvVaw3U-BthMeL8UI_ccm6EPyd1\">released<\/a>\u00a0Sora 2, its latest video model that now includes synchronized audio and dialogue, alongside a new\u00a0<a 
href=\"https:\/\/link.mail.beehiiv.com\/ss\/c\/u001.Q334NVcZU4O6L6VKRz8ijLRlaauIlYK-b0kvsVdPSgk8w9raoky6qxut4s8pC2q0l0BWmqvO4By2w5GmhafdyWM2JgLcYOouHcAU1DL_G4d59TcaSd6ueY0VGDVlgurgUQRvMUrnDwADs9PKV8r1GVcEy_e7K0dnCT0pY7GKcgqTViKF0p9WXc_3jHRtDT6QfcvngrvhjJAkJmU9F1fYDOBygIjpaOrnN8QrWGGXFWf73fsHDW3eT23ZNR_tUhMEDpC3QRg3TkBtqhmTBHkMvA\/4kd\/6hRfkS28RDmpZT2ojsKadg\/h7\/h001.lpXUqLBLOYKllkAtpbrN5cizdzgWLPkoFu3ySHPnKlQ\" target=\"_blank\" rel=\"noopener noreferrer nofollow\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/link.mail.beehiiv.com\/ss\/c\/u001.Q334NVcZU4O6L6VKRz8ijLRlaauIlYK-b0kvsVdPSgk8w9raoky6qxut4s8pC2q0l0BWmqvO4By2w5GmhafdyWM2JgLcYOouHcAU1DL_G4d59TcaSd6ueY0VGDVlgurgUQRvMUrnDwADs9PKV8r1GVcEy_e7K0dnCT0pY7GKcgqTViKF0p9WXc_3jHRtDT6QfcvngrvhjJAkJmU9F1fYDOBygIjpaOrnN8QrWGGXFWf73fsHDW3eT23ZNR_tUhMEDpC3QRg3TkBtqhmTBHkMvA\/4kd\/6hRfkS28RDmpZT2ojsKadg\/h7\/h001.lpXUqLBLOYKllkAtpbrN5cizdzgWLPkoFu3ySHPnKlQ&amp;source=gmail&amp;ust=1759863636749000&amp;usg=AOvVaw2Z7OcvWpWC-1_mrzMhlsfa\">social app<\/a>\u00a0where users can create, remix, and insert themselves into AI videos through a &#8220;Cameos&#8221; feature.<br \/>\n&#8230;<br \/>\n<b>Why it matters:<\/b>\u00a0Model-wise, Sora 2 looks incredible \u2014 pushing us even further into the uncanny valley and creating tons of new storytelling capabilities. 
Cameos feels like a new viral memetic tool, but time will tell whether the AI social app can overcome the slop-factor and have staying power past the initial novelty.<\/p>\n<hr \/>\n<p><a href=\"https:\/\/www.theneuron.ai\/explainer-articles\/openai-just-dropped-sora-2-and-a-whole-new-social-app?\" target=\"_blank\" rel=\"noopener\"><strong>OpenAI Just Dropped Sora 2 (And a Whole New Social App)<\/strong><\/a>\u00a0&#8212; from theneuron.ai by Grant Harvey<br \/>\n<em>OpenAI launched Sora 2 with a new iOS app that lets you insert yourself into AI-generated videos with realistic physics and sound, betting that giving users algorithm control and turning everyone into active creators will build a better social network than today&#8217;s addictive scroll machines.<\/em><\/p>\n<p><strong>What Sora 2 can do<\/strong><\/p>\n<ul role=\"list\">\n<li>Generate Olympic-level gymnastics routines, backflips on paddleboards (with accurate buoyancy!), and triple axels.<\/li>\n<li>Follow intricate multi-shot instructions while maintaining world state across scenes.<\/li>\n<li>Create realistic background soundscapes, dialogue, and sound effects automatically.<\/li>\n<li>Insert YOU into any video after a quick one-time recording (they call this\u00a0<em>&#8220;cameos&#8221;<\/em>).<\/li>\n<\/ul>\n<p>The best video to show what it can do\u00a0<a href=\"https:\/\/x.com\/gabrielpeterss4\/status\/1973071380842229781\" target=\"_blank\" rel=\"noopener\">is probably this one<\/a>, from OpenAI researcher Gabriel Peters, which goes behind the scenes of Sora 2&#8217;s launch day\u2026<\/p>\n<hr \/>\n<p><a href=\"https:\/\/getsuperintel.com\/p\/sora-2-ai-video-goes-social\" target=\"_blank\" rel=\"noopener\"><strong>Sora 2: AI Video Goes Social<\/strong><\/a> &#8212; from getsuperintel.com by Kim &#8220;Chubby&#8221; Isenberg<br \/>\n<em>OpenAI&#8217;s latest AI video model is now an iOS app, letting users generate, remix, and even insert themselves into cinematic clips<\/em><\/p>\n<p
style=\"padding-left: 40px;\">Technically, Sora 2 is a major leap. It syncs audio with visuals, respects physics (a basketball bounces instead of teleporting), and follows multi-shot instructions with consistency. That makes outputs both more controllable and more believable. But the app format changes the game: it transforms world simulation from a research milestone into a social, co-creative experience where entertainment, creativity, and community intersect.<\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/getsuperintel.com\/p\/sora-2-ai-video-goes-social?\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone\" src=\"https:\/\/media.beehiiv.com\/cdn-cgi\/image\/fit=scale-down,quality=80,format=auto,onerror=redirect\/uploads\/asset\/file\/3e2b5d6a-17e5-4e0b-b9e2-4575373e646c\/Quote.png\" alt=\"\" width=\"590\" height=\"387\" \/><\/a><\/p>\n<hr \/>\n<p><em><span style=\"color: #800000;\">Also along the lines of creating digital video, see:<\/span><\/em><\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/DOWqEWdwQtM?si=97IqdmpDaRtirDNx\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p style=\"padding-left: 40px;\">What used to take hours in After Effects now takes just one text prompt. 
Tools like Google&#8217;s Nano Banana, Seedream 4, Runway\u2019s Aleph, and others are pioneering instruction-based editing, a breakthrough that collapses complex, multi-step VFX workflows into a single, implicit direction.<\/p>\n<blockquote><p><span style=\"color: #ff6600;\"><strong>The history of VFX is filled with innovations that removed friction, but collapsing an entire multi-step workflow into a single prompt represents a new kind of leap.<\/strong><\/span><\/p>\n<p><span style=\"color: #ff6600;\"><strong>For creators, this means the skill ceiling is no longer defined by technical know-how, it\u2019s defined by imagination. If you can describe it, you can create it. For the industry, it points toward a near future where small teams and solo creators compete with the scale and polish of large studios.<\/strong><\/span><\/p>\n<p style=\"text-align: right;\"><span style=\"color: #ff6600;\">Bilawal Sidhu<\/span><\/p>\n<\/blockquote>\n<hr \/>\n<p><a href=\"https:\/\/getsuperintel.com\/p\/openai-devday-2025-everything-you-need-to-know\" target=\"_blank\" rel=\"noopener\"><strong>OpenAI DevDay 2025: everything you need to know <\/strong><\/a>&#8212; from getsuperintel.com by Kim &#8220;Chubby&#8221; Isenberg<br \/>\nApps Inside ChatGPT, a New Era Unfolds<\/p>\n<p style=\"padding-left: 40px;\"><span style=\"color: #800000;\"><strong>Something big shifted this week. OpenAI just turned ChatGPT into a platform &#8211; not just a product.<\/strong> <\/span>With apps now running inside ChatGPT and a no-code Agent Builder for creating full AI workflows, the line between \u201cusing AI\u201d and \u201cbuilding with AI\u201d is fading fast. Developers suddenly have a new playground, and for the first time, anyone can assemble their own intelligent system without touching code. The question isn\u2019t what AI can do anymore &#8211; it\u2019s what you\u2019ll make it do.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI agents: Where are they now? 
From proof of concept to success stories\u00a0&#8212; from hrexecutive.com by Jill Barth The 4 Rs framework Salesforce has developed what Holt Ware calls the \u201c4 Rs for AI agent success.\u201d They are: Redesign by combining AI and human capabilities.\u00a0This requires treating agents like new hires that need proper onboarding [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[113,329,356,387,208,127,433,260,112,271,8,18,9,210,848,141,403,533,63,264,391,419,180,482,25,408,23,869,309,855,155,157,20,40,195,118,321,367,187,299],"tags":[],"class_list":["post-96929","post","type-post","status-publish","format-standard","hentry","category-21st-century","category-24x7x365-access","category-artificial-intelligence-agents-llms-and-related","category-business","category-cloud-based-computing-apps","category-collaboration","category-communications","category-content-development-aggregation-repositories","category-corporate-business-world","category-creativity","category-digital-audio","category-digital-storytelling","category-digital-video","category-emerging-technologies","category-emotion","category-engagement-engaging-students","category-ethics","category-experimentation","category-google","category-graphics","category-human-computer-interaction-hci","category-ideas-teaching","category-innovation","category-intelligent-systems","category-journalism","category-mediafilm","category-multimedia","category-open-ai","category-platforms","category-skills","category-story","category-storytelling","category-strategy","category-technologies-for-your-home","category-tools","category-training-corporate-universities","category-united-states","category-vendors","category-visualizing
-information","category-workplace"],"_links":{"self":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/96929","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/comments?post=96929"}],"version-history":[{"count":12,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/96929\/revisions"}],"predecessor-version":[{"id":96992,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/96929\/revisions\/96992"}],"wp:attachment":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/media?parent=96929"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/categories?post=96929"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/tags?post=96929"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}