Picture your enterprise as a living ecosystem, where surging market demand instantly informs staffing decisions, where a new vendor’s onboarding optimizes your emissions metrics, where rising customer engagement reveals product opportunities. Now imagine if your systems could see these connections too! This is the promise of AI agents — an intelligent network that thinks, learns, and works across your entire enterprise.
Today, organizations operate in artificial silos. Tomorrow, they could be fluid and responsive. The transformation has already begun. The question is: will your company lead it?
The journey to agent-enabled operations starts with clarity on business objectives. Leaders should begin by mapping their business’s critical processes. The most pressing opportunities often lie where cross-functional handoffs create friction or where high-value activities are slowed by system fragmentation. These pain points become the natural starting points for your agent deployment strategy.
Artificial intelligence has already proved that it can sound like a human, impersonate individuals and even produce recordings of someone speaking different languages. Now, a new feature from Microsoft will allow video meeting attendees to hear speakers “talk” in a different language with help from AI.
What Is Agentic AI? — from blogs.nvidia.com by Erik Pounds Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems.
The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it’s set to enhance productivity and operations across industries.
Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
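To make that ingest-analyze-plan-execute description concrete, here is a minimal sketch of the loop such systems run. It is illustrative only: the llm() helper is a hypothetical stand-in for any chat-completion API, and production agent stacks layer memory, tool schemas, and guardrails on top of this skeleton.

```python
# Minimal sketch of an agentic loop: reason about the goal, pick a tool,
# act, observe, repeat. The llm() stub is a hypothetical placeholder.

def llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's chat API."""
    raise NotImplementedError("plug in a real model here")

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    context = f"Goal: {goal}\nAvailable tools: {', '.join(tools)}"
    for _ in range(max_steps):
        # Reason: ask the model for the next step as "tool: argument",
        # or "DONE: <answer>" once the goal is met.
        decision = llm(context + "\nNext step (tool: arg, or DONE: answer)?")
        if decision.startswith("DONE:"):
            return decision.removeprefix("DONE:").strip()
        name, _, arg = decision.partition(":")
        # Act: execute the chosen tool and feed the observation back in.
        observation = tools[name.strip()](arg.strip())
        context += f"\n{decision}\nObservation: {observation}"
    return "stopped: step budget exhausted"
```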
Robot “Jailbreaks” In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.
Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.
“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”
…
The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models become increasingly used as a way for humans to interact with physical systems, or to enable AI agents autonomously on computers, say the researchers involved.
In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.
The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.
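The paper’s actual system is far more elaborate, but the core pattern — several role-prompted language models taking turns on a shared transcript, with a lead agent synthesizing decisions — can be sketched in a few lines. Everything below is illustrative; the role descriptions and llm() stub are assumptions, not the authors’ code.

```python
# Toy version of the "virtual lab" pattern: role-prompted agents hold a
# meeting on a shared transcript. Roles and prompts here are invented.

ROLES = {
    "Principal Investigator": "You set the agenda and synthesize decisions.",
    "Immunologist": "You reason about nanobody-antigen binding.",
    "Computational Biologist": "You propose candidate structures and rank them.",
}

def llm(system: str, transcript: str) -> str:
    raise NotImplementedError("plug in a real chat model here")

def lab_meeting(agenda: str, rounds: int = 2) -> str:
    transcript = f"Agenda: {agenda}\n"
    for _ in range(rounds):
        for role, persona in ROLES.items():
            reply = llm(persona, transcript + f"\n{role}, your turn:")
            transcript += f"\n{role}: {reply}"
    # The PI closes the meeting with a summary decision.
    return llm(ROLES["Principal Investigator"],
               transcript + "\nSummarize the decisions made.")
```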
By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.
The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.
Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.
Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.
Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.
In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.
Addendum on 12/5/24:
ChatGPT turns two: how has it impacted markets? — from moneyweek.com Two years on from ChatGPT’s explosive launch into the public sphere, we assess the impact that it has had on stock markets and the world of technology
If you’re a teen, you could be exposed to conspiracy theories and a host of other pieces of misinformation as frequently as every day while scrolling through your social media feeds.
That’s according to a new study by the News Literacy Project, which also found that teens struggle with identifying false information online. This comes at a time when media literacy education isn’t available to most students, the report finds, and their ability to distinguish between objective and biased information sources is weak. The findings are based on responses from more than 1,000 teens ages 13 to 18.
“News literacy is fundamental to preparing students to become active, critically thinking members of our civic life — which should be one of the primary goals of a public education,” Kim Bowman, News Literacy Project senior research manager and author of the report, said in an email interview. “If we don’t teach young people the skills they need to evaluate information, they will be left at a civic and personal disadvantage their entire lives. News literacy instruction is as important as core subjects like reading and math.”
To help teach your students about news and media literacy, I highly recommend my sister Sue Ellen Christian’s work out at Wonder Media.
There you will find numerous resources for libraries, schools, families and individuals. Suggestions of books, articles, other websites, and online materials to assist you in growing your media literacy and news media literacy are also included there.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
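For developers curious what this looks like in practice, a request against the public beta looked roughly like the sketch below. This reflects the beta as announced (the model string, tool type, and beta flag may well have changed since), and note that Claude only decides on actions — your own code has to take the screenshots and execute the clicks it asks for, in a loop.

```python
# Hedged sketch based on Anthropic's computer-use public beta announcement.
# Model name, tool type, and beta flag reflect the October 2024 beta and may
# have changed; consult the current API docs before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # the beta's virtual-computer tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user",
               "content": "Open the spreadsheet on my desktop and sum column B."}],
    betas=["computer-use-2024-10-22"],
)

# Claude replies with tool_use blocks ("screenshot", "left_click", "type", ...).
# Your code must execute each action, return the result, and call the API
# again -- that loop is the developer's responsibility, not the model's.
print(response.content)
```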
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
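A rough schematic of how such a platform chains these pieces together is sketched below. Every function is a hypothetical wrapper; in practice each step would be an API call (e.g., ElevenLabs for voice, Gen-3 for video) or a locally hosted open model (FLUX for images, LivePortrait for lip-sync).

```python
# Schematic pipeline for a multi-modal video platform. All functions are
# hypothetical placeholders standing in for real APIs or open models.

def write_script(topic: str) -> str: ...         # LLM drafting step
def narrate(script: str) -> bytes: ...           # text-to-voice (e.g., ElevenLabs)
def storyboard(script: str) -> list[bytes]: ...  # text-to-image (e.g., FLUX)
def animate(frames: list[bytes]) -> bytes: ...   # image-to-video (e.g., Gen-3)
def lip_sync(video: bytes, audio: bytes) -> bytes: ...  # e.g., LivePortrait

def produce(topic: str) -> bytes:
    script = write_script(topic)
    audio = narrate(script)
    video = animate(storyboard(script))
    return lip_sync(video, audio)
```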
The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises
Add sound to your video via text — Project Super Sonic:
New Dream Weaver — from aisecret.us Explore Adobe’s New Firefly Video Generative Model
Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.
AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans foundation models for humanoid robots to agents for virtual worlds. Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Runway Partners with Lionsgate — from runwayml.com via The Rundown AI Runway and Lionsgate are partnering to explore the use of AI in film production.
Lionsgate and Runway have entered into a first-of-its-kind partnership centered around the creation and training of a new AI model, customized on Lionsgate’s proprietary catalog. Fundamentally designed to help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work, the model generates cinematic video that can be further iterated using Runway’s suite of controllable tools.
Per The Rundown: Lionsgate, the film company behind The Hunger Games, John Wick, and Saw, teamed up with AI video generation company Runway to create a custom AI model trained on Lionsgate’s film catalogue.
The details:
The partnership will develop an AI model specifically trained on Lionsgate’s proprietary content library, designed to generate cinematic video that filmmakers can further manipulate using Runway’s tools.
Lionsgate sees AI as a tool to augment and enhance its current operations, streamlining both pre-production and post-production processes.
Runway is considering ways to offer similar custom-trained models as templates for individual creators, expanding access to AI-powered filmmaking tools beyond major studios.
Why it matters: As many writers, actors, and filmmakers strike against ChatGPT, Lionsgate is diving head-first into the world of generative AI through its partnership with Runway. This is one of the first major collabs between an AI startup and a major Hollywood company — and its success or failure could set precedent for years to come.
Each prompt on ChatGPT flows through a server that runs thousands of calculations to determine the best words to use in a response.
In completing those calculations, these servers, typically housed in data centers, generate heat. Often, water systems are used to cool the equipment and keep it functioning. Water transports the heat generated in the data centers into cooling towers to help it escape the building, similar to how the human body uses sweat to keep cool, according to Shaolei Ren, an associate professor at UC Riverside.
Where electricity is cheaper, or water comparatively scarce, electricity is often used to cool these warehouses with large units resembling air-conditioners, he said. That means the amount of water and electricity an individual query requires can depend on a data center’s location and vary widely.
AI, Humans and Work: 10 Thoughts. — from rishad.substack.com by Rishad Tobaccowala The Future Does Not Fit in the Containers of the Past. Edition 215.
10 thoughts about AI, Humans and Work in 10 minutes:
AI is still Under-hyped.
AI itself will be like electricity and is unlikely to be a differentiator for most firms.
AI is not alive but can be thought of as a new species.
Knowledge will be free and every knowledge worker’s job will change in 2025.
The key about AI is not to ask what AI will do to us but what AI can do for us.
This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.
My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?
Here’s where we are in September, 2024:
…
Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby, Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)
As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.
AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.
Right now, high schoolers and college students around the country are experimenting with free smartphone apps that help complete their math homework using generative AI. One of the most popular options on campus right now is the Gauth app, with millions of downloads. It’s owned by ByteDance, which is also TikTok’s parent company.
The Gauth app first launched in 2019 with a primary focus on mathematics, but soon expanded to other subjects as well, like chemistry and physics. It’s grown in relevance, and neared the top of smartphone download lists earlier this year for the education category. Students seem to love it. With hundreds of thousands of primarily positive reviews, Gauth has a favorable 4.8 star rating in the Apple App Store and Google Play Store.
All students have to do after downloading the app is point their smartphone at a homework problem, printed or handwritten, and then make sure any relevant information is inside of the image crop. Then Gauth’s AI model generates a step-by-step guide, often with the correct answer.
From DSC: I do hesitate to post this though, as I’ve seen numerous postings re: the dubious quality of AI as it relates to giving correct answers to math-related problems – or whether using AI-based tools helps or hurts the learning process. The situation seems to be getting better, but as I understand it, we still have some progress to make in this area of mathematics.
Educational leaders must reconsider the definition of creativity, taking into account how generative AI tools can be used to produce novel and impactful creative work, similar to how film editors compile various elements into a cohesive, creative whole.
Generative AI democratizes innovation by allowing all students to become creators, expanding access to creative processes that were previously limited and fostering a broader inclusion of diverse talents and ideas in education.
AI-Powered Instructional Design at ASU — from drphilippahardman.substack.com by Dr. Philippa Hardman How ASU’s Collaboration with OpenAI is Reshaping the Role of Instructional Designers
The developments and experiments at ASU provide a fascinating window into two things:
How the world is reimagining learning in the age of AI;
How the role of the instructional designer is changing in the age of AI.
In this week’s blog post, I’ll provide a summary of how faculty, staff and students at ASU are starting to reimagine education in the age of AI, and explore what this means for the instructional designers who work there.
India’s ed-tech unicorn PhysicsWallah is using OpenAI’s GPT-4o to make education accessible to millions of students in India. Recently, the company launched a suite of AI products to ensure that students in Tier 2 & 3 cities can access high-quality education without depending solely on their enrolled institutions, as 85% of their enrollment comes from these areas.
Last year, AIM broke the news of PhysicsWallah introducing ‘Alakh AI’, its suite of generative AI tools, which was eventually launched at the end of December 2023. It quickly gained traction, amassing over 1.5 million users within two months of its release.
AI is welcomed by those with dyslexia, and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most on accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.
Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?
Or is it, perhaps, both?
Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.
Bite-Size AI Content for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton Another two 5-tips videos for faculty and my latest use case: creating FAQs!
Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batch-processing learning model based on the make-believe idea of “the average man” to a learning model that meets every child where they are and helps them grow from there?
I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?
California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.
California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.
Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.
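Under the hood, an app like this sits on a text-to-speech API. As a rough sketch, ElevenLabs exposes a REST endpoint that turns text into audio; the voice ID and payload fields below are placeholders, so check the current API reference before relying on them.

```python
# Rough sketch of the text-to-speech call behind "listen to anything" apps.
# The API key and voice ID are hypothetical placeholders.

import requests

API_KEY = "your-elevenlabs-key"   # placeholder
VOICE_ID = "some-voice-id"        # pick a voice from the voice library

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": "Today's newsletter, read aloud on the go."},
    timeout=60,
)
resp.raise_for_status()
with open("narration.mp3", "wb") as f:
    f.write(resp.content)  # the endpoint returns audio bytes
```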
Per The Neuron
Some cool use cases:
Judy Garland can teach you biology while walking to class.
James Dean can narrate your steamy romance novel.
Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!
Why it’s important: ElevenLabs shared how major YouTubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.
How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.
As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.
… These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”
Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.
Will AI Shrink Disparities in Schools, or Widen Them? — from edsurge.com by Daniel Mollenkamp Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.
And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product:” creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall) OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
From DSC: Very interesting to see the mention of an R&D department here! Very cool.
Baker said ninth graders in the R&D department designed the essential skills rubric for their grade so that regardless of what content classes students take, they all get the same immersion into critical career skills. Student voice is now so integrated into Edison’s core that teachers work with student designers to plan their units. And he said teachers are becoming comfortable with the language of career-centered learning and essential skills while students appreciate the engagement and develop a new level of confidence.
… The R&D department has grown to include teachers from every department working with students to figure out how to integrate essential skills into core academic classes. In this way, they’re applying one of the XQ Institute’s crucial Design Principles for innovative high schools: Youth Voice and Choice.
Client-connected projects have become a focal point of the Real World Learning initiative, offering students opportunities to solve real-world problems in collaboration with industry professionals.
Organizations like CAPS, NFTE, and Journalistic Learning facilitate community connections and professional learning opportunities, making it easier to implement client projects and entrepreneurship education.
Important trend: client projects. Work-based learning has been growing with career academies and renewed interest in CTE. Six years ago, a subset of WBL called client-connected projects became a focal point of the Real World Learning initiative in Kansas City, where they are defined as authentic problems that students solve in collaboration with professionals from industry, not-for-profit, and community-based organizations… and allow students to engage directly with employers, address real-world problems, and develop essential skills.
The Community Portrait approach encourages diverse voices to shape the future of education, ensuring it reflects the needs and aspirations of all stakeholders.
Active, representative community engagement is essential for creating meaningful and inclusive educational environments.
The Portrait of a Graduate—a collaborative effort to define what learners should know and be able to do upon graduation—has likely generated enthusiasm in your community. However, the challenge of future-ready graduates persists: How can we turn this vision into a reality within our diverse and dynamic schools, especially amid the current national political tensions and contentious curriculum debates?
The answer lies in active, inclusive community engagement. It’s about crafting a Community Portrait that reflects the rich diversity of our neighborhoods. This approach, grounded in the same principles used to design effective learning systems, seeks to cultivate deep, reciprocal relationships within the community. When young people are actively involved, the potential for meaningful change increases exponentially.
Although Lindsay E. Jones came from a family of educators, she didn’t expect that going to law school would steer her back into the family business. Over the years she became a staunch advocate for children with disabilities. And as mom to a son with learning disabilities and ADHD who is in high school and doing great, her advocacy is personal.
Jones previously served as president and CEO of the National Center for Learning Disabilities and was senior director for policy and advocacy at the Council for Exceptional Children. Today, she is the CEO at CAST, an organization focused on creating inclusive learning environments in K–12. EdTech: Focus on K–12 spoke with Jones about how digital transformation, artificial intelligence and visionary leaders can support inclusive learning environments.
Our brains are all as different as our fingerprints, and throughout its 40-year history, CAST has been focused on one core value: People are not broken, systems are poorly designed. And those systems are creating a barrier that holds back human innovation and learning.
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos making it capable of generating physically accurate, consistent and eventful shots. Dream Machine is our first step towards building a universal imagination engine and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects.
Users can create drum beats, instrument riffs, ambient sounds, foley and production elements.
The model enables audio variations and style transfer of audio samples.
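For the curious, the model is also available through the Hugging Face diffusers library as StableAudioPipeline. Here is a minimal sketch, assuming that integration and a CUDA GPU (parameter names may drift between releases, so check the current docs):

```python
# Minimal sketch: generate a short sample with Stable Audio Open via
# diffusers. Assumes the StableAudioPipeline integration and a CUDA GPU.

import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
).to("cuda")

audio = pipe(
    "ambient rain on a tin roof with distant thunder",
    negative_prompt="low quality",
    num_inference_steps=100,
    audio_end_in_s=10.0,   # the model supports clips up to ~47 seconds
).audios[0]

# Write the waveform to disk at the model's native sampling rate.
sf.write("rain.wav", audio.T.float().cpu().numpy(), pipe.vae.sampling_rate)
```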
Some comments from Rundown AI:
Why it matters: While the AI advances in text-to-image models have been the most visible (literally), both video and audio are about to take the same leap. Putting these tools in the hands of creatives will redefine traditional workflows — from musicians brainstorming new beats to directors crafting sound effects for film and TV.