{"id":91988,"date":"2024-06-14T16:59:19","date_gmt":"2024-06-14T20:59:19","guid":{"rendered":"https:\/\/danielschristian.com\/learning-ecosystems\/?p=91988"},"modified":"2024-06-14T17:01:50","modified_gmt":"2024-06-14T21:01:50","slug":"several-items-re-text-to-video","status":"publish","type":"post","link":"https:\/\/danielschristian.com\/learning-ecosystems\/2024\/06\/14\/several-items-re-text-to-video\/","title":{"rendered":"Several items re: text-to-video (and even images-to-video)"},"content":{"rendered":"<p><a href=\"https:\/\/lumalabs.ai\/dream-machine\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" class=\"alignnone size-full wp-image-91991\" src=\"http:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2024\/06\/DreamMachine-LumaAI-June2024.jpg\" alt=\"\" width=\"100%\" height=\"100%\" srcset=\"https:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2024\/06\/DreamMachine-LumaAI-June2024.jpg 713w, https:\/\/danielschristian.com\/learning-ecosystems\/wp-content\/uploads\/2024\/06\/DreamMachine-LumaAI-June2024-150x66.jpg 150w\" sizes=\"(max-width: 713px) 100vw, 713px\" \/><\/a><\/p>\n<p style=\"padding-left: 40px;\"><a href=\"https:\/\/lumalabs.ai\/dream-machine\" target=\"_blank\" rel=\"noopener\"><strong>Dream Machine<\/strong><\/a> is an AI model that quickly makes high-quality, realistic videos from text and images.<\/p>\n<p style=\"padding-left: 40px;\">It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!<\/p>\n<hr \/>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\">Luma AI just dropped a Sora-like AI video generator called Dream Machine.<\/p>\n<p>But unlike Sora or KLING, it&#8217;s completely open access to the public.<\/p>\n<p>Here are 10 wild examples (and how to access it):<\/p>\n<p>1. <a href=\"https:\/\/t.co\/Dx5Pnbp7lg\">pic.twitter.com\/Dx5Pnbp7lg<\/a><\/p>\n<p>\u2014 Rowan Cheung (@rowancheung) <a href=\"https:\/\/twitter.com\/rowancheung\/status\/1800930932846641335?ref_src=twsrc%5Etfw\">June 12, 2024<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<hr \/>\n<p><a href=\"https:\/\/www.ai-supremacy.com\/p\/text-to-video-emergence-for-july\" target=\"_blank\" rel=\"noopener\"><strong>Text-to-Video Emergence for July 2024<\/strong><\/a> &#8212; from ai-supremacy.com by Michael Spencer<br \/>\n<em>Who needs Sora?<\/em><\/p>\n<p>There have been some incredible teasers in the text-to-video arena of Generative AI. 
Namely, I\u2019m watching:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scmp.com\/tech\/big-tech\/article\/3265798\/chinas-no-2-short-video-app-kuaishou-unveils-sora-style-product-amid-rush-catch-ai\" rel=\"\">Kling AI<\/a>\u00a0(by Kuaishou)<\/li>\n<li><a href=\"https:\/\/siliconangle.com\/2024\/06\/12\/luma-ais-dream-machine-expands-access-generative-ai-video-creation\/\" rel=\"\">Luma AI<\/a><\/li>\n<li><a href=\"https:\/\/www.youtube.com\/watch?v=FneDGOVaHl0\" rel=\"\">Vidu<\/a>\u00a0(ShengShu Technology and Tsinghua University)<\/li>\n<li><a href=\"https:\/\/www.maginative.com\/article\/pika-labs-secures-80m-in-series-b-funding\/\" rel=\"\">Pika Labs<\/a><\/li>\n<li>Zhipu AI &amp; ByteDance (which have not yet released their products)<\/li>\n<li>The\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=BXsCN9FgbuA\" rel=\"\">timeline for the release of\u00a0<\/a>OpenAI\u2019s Sora<\/li>\n<\/ul>\n<hr \/>\n<blockquote class=\"reddit-embed-bq\" style=\"height:500px\" data-embed-height=\"586\"><p><a href=\"https:\/\/www.reddit.com\/r\/singularity\/comments\/1cyukau\/openai_seems_to_have_the_ability_to_create_video\/\">&#8220;OpenAI seems to have the ability to create video in Sora, send it to ChatGPT for a script, use Voice Engine for voice over and put it all together.&#8221;<\/a><br \/> by <a href=\"https:\/\/www.reddit.com\/user\/MassiveWasabi\/\">u\/MassiveWasabi<\/a> in <a href=\"https:\/\/www.reddit.com\/r\/singularity\/\">singularity<\/a><\/p><\/blockquote>\n<p><script async=\"\" src=\"https:\/\/embed.reddit.com\/widgets.js\" charset=\"UTF-8\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Dream Machine is an AI model that quickly makes high-quality, realistic videos from text and images. It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine and it [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[113,329,356,314,433,271,8,286,9,210,141,408,23,321,367],"tags":[],"class_list":["post-91988","post","type-post","status-publish","format-standard","hentry","category-21st-century","category-24x7x365-access","category-artificial-intelligence-agents-llms-and-related","category-asia","category-communications","category-creativity","category-digital-audio","category-digital-learning","category-digital-video","category-emerging-technologies","category-engagement-engaging-students","category-mediafilm","category-multimedia","category-united-states","category-vendors"],"_links":{"self":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/91988","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/comments?post=91988"}],"version-history":[{"count":7,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/91988\/revisions"}],"predecessor-version":[{"id":91996,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/posts\/91988\/revisions\/91996"}],"wp:attachment":[{"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/media?parent=91988"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/categories?post=91988"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/danielschristian.com\/learning-ecosystems\/wp-json\/wp\/v2\/tags?post=91988"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}