If you’re a teen, you could be exposed to conspiracy theories and a host of other pieces of misinformation as frequently as every day while scrolling through your social media feeds.
That’s according to a new study by the News Literacy Project, which also found that teens struggle with identifying false information online. This comes at a time when media literacy education isn’t available to most students, the report finds, and their ability to distinguish between objective and biased information sources is weak. The findings are based on responses from more than 1,000 teens ages 13 to 18.
“News literacy is fundamental to preparing students to become active, critically thinking members of our civic life — which should be one of the primary goals of a public education,” Kim Bowman, News Literacy Project senior research manager and author of the report, said in an email interview. “If we don’t teach young people the skills they need to evaluate information, they will be left at a civic and personal disadvantage their entire lives. News literacy instruction is as important as core subjects like reading and math.”
To help teach your students about news and media literacy, I highly recommend my sister Sue Ellen Christian’s work at Wonder Media.
There you will find numerous resources for libraries, schools, families and individuals. Suggestions of books, articles, other websites, and online materials to assist you in growing your media literacy and news media literacy are also included there.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
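To make the “computer use” idea concrete, here is a minimal sketch of what a computer-use request to Anthropic’s API looks like. It only assembles the request body locally; the model id, tool type, and field names follow Anthropic’s launch-time documentation and may well have changed since, so treat them as assumptions rather than a stable contract.

```python
# Sketch of an Anthropic "computer use" request payload (assembled, not sent).
# Model id and tool type are as documented at the October 2024 launch --
# assumptions here, not a guaranteed stable API.

def build_computer_use_request(instruction: str,
                               width: int = 1024,
                               height: int = 768) -> dict:
    """Assemble the request body for a computer-use message."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # launch-era model id (assumption)
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",        # screen/mouse/keyboard tool
            "name": "computer",
            "display_width_px": width,          # virtual display size Claude "sees"
            "display_height_px": height,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

request = build_computer_use_request(
    "Open the browser and search for news literacy resources.")
print(request["tools"][0]["type"])
```

With the official `anthropic` SDK, a body like this would be passed to the beta messages endpoint along with the computer-use beta flag, again per the launch documentation.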
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises
Add sound to your video via text — Project Super Sonic:
New Dream Weaver — from aisecret.us Explore Adobe’s New Firefly Video Generative Model
Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.
AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans foundation models for humanoid robots to agents for virtual worlds. Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Runway Partners with Lionsgate — from runwayml.com via The Rundown AI Runway and Lionsgate are partnering to explore the use of AI in film production.
Lionsgate and Runway have entered into a first-of-its-kind partnership centered around the creation and training of a new AI model, customized on Lionsgate’s proprietary catalog. Fundamentally designed to help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work, the model generates cinematic video that can be further iterated using Runway’s suite of controllable tools.
Per The Rundown: Lionsgate, the film company behind The Hunger Games, John Wick, and Saw, teamed up with AI video generation company Runway to create a custom AI model trained on Lionsgate’s film catalogue.
The details:
The partnership will develop an AI model specifically trained on Lionsgate’s proprietary content library, designed to generate cinematic video that filmmakers can further manipulate using Runway’s tools.
Lionsgate sees AI as a tool to augment and enhance its current operations, streamlining both pre-production and post-production processes.
Runway is considering ways to offer similar custom-trained models as templates for individual creators, expanding access to AI-powered filmmaking tools beyond major studios.
Why it matters: As many writers, actors, and filmmakers strike against ChatGPT, Lionsgate is diving head-first into the world of generative AI through its partnership with Runway. This is one of the first major collabs between an AI startup and a major Hollywood company — and its success or failure could set precedent for years to come.
Each prompt on ChatGPT flows through a server that runs thousands of calculations to determine the best words to use in a response.
In completing those calculations, these servers, typically housed in data centers, generate heat. Often, water systems are used to cool the equipment and keep it functioning. Water transports the heat generated in the data centers into cooling towers to help it escape the building, similar to how the human body uses sweat to keep cool, according to Shaolei Ren, an associate professor at UC Riverside.
Where electricity is cheaper, or water comparatively scarce, electricity is often used to cool these warehouses with large units resembling air-conditioners, he said. That means the amount of water and electricity an individual query requires can depend on a data center’s location and vary widely.
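The per-query numbers above can be made tangible with some back-of-envelope arithmetic. Every figure in this sketch is an illustrative assumption chosen for the math, not a measured value; as the article notes, actual consumption varies widely by site and season.

```python
# Back-of-envelope estimate of data-center cooling water per AI query.
# Both constants are illustrative assumptions, not measured figures.

ENERGY_PER_QUERY_WH = 3.0   # assumed energy per query, in watt-hours
WUE_L_PER_KWH = 1.8         # assumed water usage effectiveness, liters per kWh

def water_per_query_ml(energy_wh: float = ENERGY_PER_QUERY_WH,
                       wue: float = WUE_L_PER_KWH) -> float:
    """Scale a facility-level water/energy ratio down to one query, in mL."""
    kwh = energy_wh / 1000.0        # watt-hours -> kilowatt-hours
    liters = kwh * wue              # cooling water for that much energy
    return liters * 1000.0          # liters -> milliliters

print(round(water_per_query_ml(), 2))   # 5.4 mL under these assumptions
```

Doubling the assumed energy per query doubles the water estimate, which is why per-query figures reported in the press differ so much: they hinge entirely on the assumed data-center efficiency.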
AI, Humans and Work: 10 Thoughts — from rishad.substack.com by Rishad Tobaccowala The Future Does Not Fit in the Containers of the Past. Edition 215.
10 thoughts about AI, Humans and Work in 10 minutes:
AI is still Under-hyped.
AI itself will be like electricity and is unlikely to be a differentiator for most firms.
AI is not alive but can be thought of as a new species.
Knowledge will be free and every knowledge worker’s job will change in 2025.
The key about AI is not to ask what AI will do to us but what AI can do for us.
This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.
My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?
Here’s where we are in September, 2024:
…
Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby, Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)
As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.
AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.
Right now, high schoolers and college students around the country are experimenting with free smartphone apps that help complete their math homework using generative AI. One of the most popular options on campus right now is the Gauth app, with millions of downloads. It’s owned by ByteDance, which is also TikTok’s parent company.
The Gauth app first launched in 2019 with a primary focus on mathematics, but soon expanded to other subjects as well, like chemistry and physics. It’s grown in relevance, and neared the top of smartphone download lists earlier this year for the education category. Students seem to love it. With hundreds of thousands of primarily positive reviews, Gauth has a favorable 4.8 star rating in the Apple App Store and Google Play Store.
All students have to do after downloading the app is point their smartphone at a homework problem, printed or handwritten, and then make sure any relevant information is inside of the image crop. Then Gauth’s AI model generates a step-by-step guide, often with the correct answer.
From DSC: I do hesitate to post this though, as I’ve seen numerous postings re: the dubious quality of AI as it relates to giving correct answers to math-related problems – or whether using AI-based tools helps or hurts the learning process. The situation seems to be getting better, but as I understand it, we still have some progress to make in this area of mathematics.
Educational leaders must reconsider the definition of creativity, taking into account how generative AI tools can be used to produce novel and impactful creative work, similar to how film editors compile various elements into a cohesive, creative whole.
Generative AI democratizes innovation by allowing all students to become creators, expanding access to creative processes that were previously limited and fostering a broader inclusion of diverse talents and ideas in education.
AI-Powered Instructional Design at ASU — from drphilippahardman.substack.com by Dr. Philippa Hardman How ASU’s Collaboration with OpenAI is Reshaping the Role of Instructional Designers
The developments and experiments at ASU provide a fascinating window into two things:
How the world is reimagining learning in the age of AI;
How the role of the instructional designer is changing in the age of AI.
In this week’s blog post, I’ll provide a summary of how faculty, staff and students at ASU are starting to reimagine education in the age of AI, and explore what this means for the instructional designers who work there.
India’s ed-tech unicorn PhysicsWallah is using OpenAI’s GPT-4o to make education accessible to millions of students in India. Recently, the company launched a suite of AI products to ensure that students in Tier 2 & 3 cities can access high-quality education without depending solely on their enrolled institutions, as 85% of their enrollment comes from these areas.
Last year, AIM broke the news of PhysicsWallah introducing ‘Alakh AI’, its suite of generative AI tools, which was eventually launched at the end of December 2023. It quickly gained traction, amassing over 1.5 million users within two months of its release.
AI is welcomed by those with dyslexia, and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most on accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.
Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?
Or is it, perhaps, both?
Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.
Bite-Size AI Content for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton Another two 5-tips videos for faculty and my latest use case: creating FAQs!
Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batched-processing learning model based on the make believe idea of “the average man” to a learning model that meets every child where they are at and helps them grow from there?
I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?
California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.
California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.
Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.
Per The Neuron
Some cool use cases:
Judy Garland can teach you biology while walking to class.
James Dean can narrate your steamy romance novel.
Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!
Why it’s important: ElevenLabs shared how major YouTubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.
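For readers curious what driving a service like this programmatically involves, here is a sketch of how an ElevenLabs text-to-speech HTTP request is shaped. It only assembles the pieces, it does not send anything; the endpoint path, header, and field names follow ElevenLabs’ public docs as I understand them, and the voice id and key below are placeholders, not real values.

```python
# Sketch of an ElevenLabs text-to-speech REST request (assembled, not sent).
# Endpoint shape and field names are assumptions based on the public docs;
# "VOICE_ID_HERE" and "API_KEY_HERE" are placeholders.

def build_tts_request(text: str, voice_id: str, api_key: str) -> dict:
    """Return the URL, headers, and JSON body for a text-to-speech call."""
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {
            "xi-api-key": api_key,            # caller's API key
            "Content-Type": "application/json",
        },
        "json": {
            "text": text,                     # the content to narrate
            "model_id": "eleven_multilingual_v2",  # multilingual model (assumption)
        },
    }

req = build_tts_request("Hello from the reader app.",
                        "VOICE_ID_HERE", "API_KEY_HERE")
print(req["url"])
```

The response to a real POST of this body would be audio bytes, which is what makes the “listen to anything, in any voice, in any language” use cases above possible.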
How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.
As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.
… These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”
Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.
Will AI Shrink Disparities in Schools, or Widen Them? — edsurge.com by Daniel Mollenkamp Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.
And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall) OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Data Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
From DSC: Very interesting to see the mention of an R&D department here! Very cool.
Baker said ninth graders in the R&D department designed the essential skills rubric for their grade so that regardless of what content classes students take, they all get the same immersion into critical career skills. Student voice is now so integrated into Edison’s core that teachers work with student designers to plan their units. And he said teachers are becoming comfortable with the language of career-centered learning and essential skills while students appreciate the engagement and develop a new level of confidence.
… The R&D department has grown to include teachers from every department working with students to figure out how to integrate essential skills into core academic classes. In this way, they’re applying one of the XQ Institute’s crucial Design Principles for innovative high schools: Youth Voice and Choice.
Client-connected projects have become a focal point of the Real World Learning initiative, offering students opportunities to solve real-world problems in collaboration with industry professionals.
Organizations like CAPS, NFTE, and Journalistic Learning facilitate community connections and professional learning opportunities, making it easier to implement client projects and entrepreneurship education.
Important trend: client projects. Work-based learning has been growing with career academies and renewed interest in CTE. Six years ago, a subset of WBL called client-connected projects became a focal point of the Real World Learning initiative in Kansas City, where they are defined as authentic problems that students solve in collaboration with professionals from industry, not-for-profit, and community-based organizations. These projects allow students to engage directly with employers, address real-world problems, and develop essential skills.
The Community Portrait approach encourages diverse voices to shape the future of education, ensuring it reflects the needs and aspirations of all stakeholders.
Active, representative community engagement is essential for creating meaningful and inclusive educational environments.
The Portrait of a Graduate—a collaborative effort to define what learners should know and be able to do upon graduation—has likely generated enthusiasm in your community. However, the challenge of future-ready graduates persists: How can we turn this vision into a reality within our diverse and dynamic schools, especially amid the current national political tensions and contentious curriculum debates?
The answer lies in active, inclusive community engagement. It’s about crafting a Community Portrait that reflects the rich diversity of our neighborhoods. This approach, grounded in the same principles used to design effective learning systems, seeks to cultivate deep, reciprocal relationships within the community. When young people are actively involved, the potential for meaningful change increases exponentially.
Although Lindsay E. Jones came from a family of educators, she didn’t expect that going to law school would steer her back into the family business. Over the years she became a staunch advocate for children with disabilities. And as mom to a son with learning disabilities and ADHD who is in high school and doing great, her advocacy is personal.
Jones previously served as president and CEO of the National Center for Learning Disabilities and was senior director for policy and advocacy at the Council for Exceptional Children. Today, she is the CEO at CAST, an organization focused on creating inclusive learning environments in K–12. EdTech: Focus on K–12 spoke with Jones about how digital transformation, artificial intelligence and visionary leaders can support inclusive learning environments.
Our brains are all as different as our fingerprints, and throughout its 40-year history, CAST has been focused on one core value: People are not broken, systems are poorly designed. And those systems are creating a barrier that holds back human innovation and learning.
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects.
Users can create drum beats, instrument riffs, ambient sounds, foley and production elements.
The model enables audio variations and style transfer of audio samples.
Some comments from Rundown AI:
Why it matters: While the AI advances in text-to-image models have been the most visible (literally), both video and audio are about to take the same leap. Putting these tools in the hands of creatives will redefine traditional workflows — from musicians brainstorming new beats to directors crafting sound effects for film and TV.
If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year, with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.
Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.
What’s the future of AI? — from mckinsey.com AI is here to stay. To outcompete in the future, organizations and individuals alike need to get familiar fast. This series of McKinsey Explainers dives deep into the seven technologies that are already shaping the years to come.
We’re in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. We don’t know exactly what the future will look like. But we do know that these seven technologies will play a big role.
ANNOUNCING SHOWRUNNER
We believe the future is a mix of game & movie.
Simulations powering 1000s of Truman Shows populated by interactive AI characters.
The new Canva: Canva announced “a whole new Canva” to improve workplace collaborative creation and a revamped platform to simplify its tools for anyone to use.
At Canva Create, several AI features were announced that enhance the design and content creation process:
Magic Design: Upload an image and select a style to get a curated selection of personalized templates.
Magic Write: An AI-powered copywriting assistant that can generate written content from a text prompt, useful for presentations and website copy.
Magic Eraser: This feature can remove unwanted objects or backgrounds from images.
Magic Edit: Users can swap an object with something else entirely using generative AI.
Beat Sync: Automatically matches video footage to a soundtrack of your choice.
Translate: Automatically translates text in designs to over 100 different languages.
Things might get more interesting in business settings as AI companies start deploying so-called “AI agents,” which can take action by operating other software on a computer or via the internet.
Anthropic, a competitor to OpenAI, announced a major new product today that attempts to prove the thesis that tool use is needed for AI’s next leap in usefulness.
AI Film Festival | AI comes to filmmaking — from Bloomberg
This week Runway AI Inc., which makes AI video generating and editing tools, held its second annual AI Film Festival in Los Angeles — its first stop before heading to New York next week. To give a sense of how much the event has grown since last year, Runway co-founder Cristóbal Valenzuela said last year people submitted 300 videos for festival consideration. This year they sent in 3,000.
A crowd of hundreds of filmmakers, techies, artists, venture capitalists and at least one well-known actor (Poker Face star Natasha Lyonne) gathered at the Orpheum Theatre in downtown LA Wednesday night to view the 10 finalists chosen by the festival’s judges.