Today, I’m excited to share with you all the fruit of our effort at @OpenAI to create AI models capable of truly general reasoning: OpenAI’s new o1 model series (aka Strawberry)! Let me explain. 1/
We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.
The wait is over. OpenAI has just released GPT-5, now called OpenAI o1.
It brings advanced reasoning capabilities and can generate entire video games from a single prompt.
Think of it as ChatGPT evolving from fast, intuitive thinking (System-1) to deeper, more deliberate…
OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there’re only 2 techniques that scale indefinitely with compute: learning & search. It’s time to shift focus to…
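To make the inference-time scaling idea concrete, here is a minimal, self-contained Python sketch of best-of-N sampling: spend more compute at answer time by drawing several candidate answers and keeping the one a verifier scores highest. The generator and scorer below are toy stand-ins for illustration only, not OpenAI’s actual method.

```python
import random

def generate_candidates(question, n=8):
    # Toy stand-in for an LLM sampler: in practice these would be
    # n independent samples drawn from a model.
    return [f"candidate answer {i} to: {question}" for i in range(n)]

def verifier_score(question, answer):
    # Toy stand-in for a learned verifier or reward model.
    return random.random()

def best_of_n(question, n=8):
    # More inference-time compute (larger n) means a wider search over answers.
    candidates = generate_candidates(question, n)
    return max(candidates, key=lambda a: verifier_score(question, a))

if __name__ == "__main__":
    print(best_of_n("What is 17 * 24?", n=8))
```

The point is only that answer quality can be traded for sampling compute (the “search” half of learning and search), independent of model size.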
The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.
To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.
What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack
The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.
Recently, many creators (myself included) have been exploring super realistic AI more and more.
But where can this actually be used?
Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.
Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.
Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.
Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.
We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.
86% of students globally are regularly using AI in their studies, with 54% of them using AI on a weekly basis, the recent Digital Education Council Global AI Student Survey found.
ChatGPT was found to be the most widely used AI tool, with 66% of students using it, and over 2 in 3 students reported using AI for information searching.
Despite their high rates of AI usage, 1 in 2 students do not feel AI ready. 58% reported that they do not feel that they had sufficient AI knowledge and skills, and 48% do not feel adequately prepared for an AI-enabled workplace.
The Post-AI Instructional Designer — from drphilippahardman.substack.com by Dr. Philippa Hardman How the ID role is changing, and what this means for your key skills, roles & responsibilities
Specifically, the study revealed that teachers who reported most productivity gains were those who used AI not just for creating outputs (like quizzes or worksheets) but also for seeking input on their ideas, decisions and strategies.
Those who engaged with AI as a thought partner throughout their workflow, using it to generate ideas, define problems, refine approaches, develop strategies and gain confidence in their decisions gained significantly more from their collaboration with AI than those who only delegated functional tasks to AI.
Leveraging Generative AI for Inclusive Excellence in Higher Education — from er.educause.edu by Lorna Gonzalez, Kristi O’Neil-Gonzalez, Megan Eberhardt-Alstot, Michael McGarry and Georgia Van Tyne Drawing from three lenses of inclusion, this article considers how to leverage generative AI as part of a constellation of mission-centered inclusive practices in higher education.
The hype and hesitation about generative artificial intelligence (AI) diffusion have led some colleges and universities to take a wait-and-see approach. However, AI integration does not need to be an either/or proposition where its use is either embraced or restricted or its adoption aimed at replacing or outright rejecting existing institutional functions and practices. Educators, educational leaders, and others considering academic applications for emerging technologies should consider ways in which generative AI can complement or augment mission-focused practices, such as those aimed at accessibility, diversity, equity, and inclusion. Drawing from three lenses of inclusion—accessibility, identity, and epistemology—this article offers practical suggestions and considerations that educators can deploy now. It also presents an imperative for higher education leaders to partner toward an infrastructure that enables inclusive practices in light of AI diffusion.
An example way to leverage AI:
How to Leverage AI for Identity Inclusion Educators can use the following strategies to intentionally design instructional content with identity inclusion in mind.
Provide a GPT or AI assistant with upcoming lesson content (e.g., lecture materials or assignment instructions) and ask it to provide feedback (e.g., troublesome vocabulary, difficult concepts, or complementary activities) from certain perspectives. Begin with a single perspective (e.g., first-time, first-year student), but layer in more to build complexity as you interact with the GPT output.
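As a rough illustration of that workflow (a sketch only, not from the article), here is how one might ask a model for perspective-based feedback using the OpenAI Python SDK; the model name, file name, and perspective are assumptions for demonstration.

```python
# Sketch: ask a model to review lesson content from a chosen perspective.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable;
# the model name, file name, and perspective are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

lesson_content = open("lecture_notes.txt", encoding="utf-8").read()
perspective = "a first-time, first-year student"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                f"Review this lesson content from the perspective of {perspective}. "
                "Flag troublesome vocabulary and difficult concepts, and suggest "
                "complementary activities."
            ),
        },
        {"role": "user", "content": lesson_content},
    ],
)

print(response.choices[0].message.content)
# Layer in additional perspectives on subsequent calls to build complexity.
```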
Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.
To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.
Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.
Change blindness — from oneusefulthing.org by Ethan Mollick 21 months later
I don’t think anyone is completely certain about where AI is going, but we do know that things have changed very quickly, as the examples in this post have hopefully demonstrated. If this rate of change continues, the world will look very different in another 21 months. The only way to know is to live through it.
Over the subsequent weeks, I’ve made other adjustments, but that first one was where I asked myself:
What are you doing?
Why are you doing it that way?
How could you change that workflow with AI?
Applying the AI to the workflow, then asking, “Is this what I was aiming for? How can I improve the prompt to get closer?”
Documenting what worked (or didn’t). Re-doing the work with AI to see what happened, and asking again, “Did this work?”
So, something that took me WEEKS of hard work, and in some cases I found impossible, was made easy. Like, instead of weeks, it takes 10 minutes. The hard part? Building the prompt to do what I want, fine-tuning it to get the result. But that doesn’t take as long now.
AI is welcomed by those with dyslexia and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most with accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.
Are U.S. public schools lagging behind other countries like Singapore and South Korea in preparing teachers and students for the boom of generative artificial intelligence? Or are our educators bumbling into AI half-blind, putting students’ learning at risk?
Or is it, perhaps, both?
Two new reports, coincidentally released on the same day last week, offer markedly different visions of the emerging field: One argues that schools need forward-thinking policies for equitable distribution of AI across urban, suburban and rural communities. The other suggests they need something more basic: a bracing primer on what AI is and isn’t, what it’s good for and how it can all go horribly wrong.
Bite-Size AI Content for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton Another two 5-tips videos for faculty and my latest use case: creating FAQs!
Despite possible drawbacks, an exciting wondering has been—What if AI was a tipping point helping us finally move away from a standardized, grade-locked, ranking-forced, batched-processing learning model based on the make-believe idea of “the average man” to a learning model that meets every child where they are and helps them grow from there?
I get that change is indescribably hard and there are risks. But the integration of AI in education isn’t a trend. It’s a paradigm shift that requires careful consideration, ongoing reflection, and a commitment to one’s core values. AI presents us with an opportunity—possibly an unprecedented one—to transform teaching and learning, making it more personalized, efficient, and impactful. How might we seize the opportunity boldly?
California and NVIDIA Partner to Bring AI to Schools, Workplaces — from govtech.com by Abby Sourwine The latest step in Gov. Gavin Newsom’s plans to integrate AI into public operations across California is a partnership with NVIDIA intended to tailor college courses and professional development to industry needs.
California Gov. Gavin Newsom and tech company NVIDIA joined forces last week to bring generative AI (GenAI) to community colleges and public agencies across the state. The California Community Colleges Chancellor’s Office (CCCCO), NVIDIA and the governor all signed a memorandum of understanding (MOU) outlining how each partner can contribute to education and workforce development, with the goal of driving innovation across industries and boosting their economic growth.
Listen to anything on the go with the highest-quality voices — from elevenlabs.io; via The Neuron
The ElevenLabs Reader App narrates articles, PDFs, ePubs, newsletters, or any other text content. Simply choose a voice from our expansive library, upload your content, and listen on the go.
Per The Neuron
Some cool use cases:
Judy Garland can teach you biology while walking to class.
James Dean can narrate your steamy romance novel.
Sir Laurence Olivier can read you today’s newsletter—just paste the web link and enjoy!
Why it’s important: ElevenLabs shared how major YouTubers are using its dubbing services to expand their content into new regions with voices that actually sound like them (thanks to ElevenLabs’ ability to clone voices).
Oh, and BTW, it’s estimated that up to 20% of the population may have dyslexia. So providing people an option to listen to (instead of read) content, in their own language, wherever they go online can only help increase engagement and communication.
How Generative AI Improves Parent Engagement in K–12 Schools — from edtechmagazine.com by Alexander Slagg With its ability to automate and personalize communication, generative artificial intelligence is the ideal technological fix for strengthening parent involvement in students’ education.
As generative AI tools populate the education marketplace, the technology’s ability to automate complex, labor-intensive tasks and efficiently personalize communication may finally offer overwhelmed teachers a way to effectively improve parent engagement.
… These personalized engagement activities for students and their families can include local events, certification classes and recommendations for books and videos. “Family Feed might suggest courses, such as an Adobe certification,” explains Jackson. “We have over 14,000 courses that we have vetted and can recommend. And we have books and video recommendations for students as well.”
Including personalized student information and an engagement opportunity makes it much easier for parents to directly participate in their children’s education.
Will AI Shrink Disparities in Schools, or Widen Them? — from edsurge.com by Daniel Mollenkamp Experts predict new tools could boost teaching efficiency — or create an “underclass of students” taught largely through screens.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
Microsoft’s new ChatGPT competitor… — from The Rundown AI
The Rundown: Microsoft is reportedly developing a massive 500B parameter in-house LLM called MAI-1, aiming to compete with top AI models from OpenAI, Anthropic, and Google.
Hampton runs a private community for high-growth tech founders and CEOs. We asked our community of founders and owners how AI has impacted their business and what tools they use
Here’s a sneak peek of what’s inside:
The budgets they set aside for AI research and development
The most common (and obscure) tools founders are using
Measurable business impacts founders have seen through using AI
Where they are purposefully not using AI and much more
To help leaders and organizations overcome AI inertia, Microsoft and LinkedIn looked at how AI will reshape work and the labor market broadly, surveying 31,000 people across 31 countries, identifying labor and hiring trends from LinkedIn, and analyzing trillions of Microsoft 365 productivity signals as well as research with Fortune 500 customers. The data points to insights every leader and professional needs to know—and actions they can take—when it comes to AI’s implications for work.
AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC: This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
Having elementary students make their own videos instead of consuming content made by someone else sounds like a highly engaging educational experience. But if you’ve ever tried to get 25 third graders to use a video editing software platform that they’ve never seen before, it can get really frustrating really fast. It’s easy for the lesson to become entirely centered around how to use the software without any subject-area content learning.
Through years of trial and error with K–6 students, I’ve developed three guiding concepts for elementary video projects so that teachers and students have a good experience.
Like actors, students are often tasked with memorization. Although education has evolved to incorporate project-based learning and guided play, there’s no getting around the necessity of knowing the multiplication tables, capital cities, and correct spelling.
The following are movement-based games that build students’ abilities to retain spelling words specifically. Ideally, these exercises support them academically as well as socially. Research shows that learning through play promotes listening, focus, empathy, and self-awareness—benefits that build students’ social and emotional learning skills.
Financial and life skills uncertainty: One-third of recent graduates don’t believe they have or are unsure they have the financial and core life skills needed to succeed in the world.
Appetite for non-academic courses: 68% of recent graduates think non-academically focused courses in formal education settings would better prepare students for the real world. This belief is especially strong among respondents that attended public schools and colleges (71%).
Automotive maintenance skills are stalled: More than any other skill, nearly one in five recent graduates say they are the least confident in handling automotive maintenance, such as changing a tire or their oil. This is followed by financial planning (17%), insurance (12%), minor home repairs (11%), cooking (11%), cleaning (8%) and organizing (8%).
Financial planning woes: A majority (79%) of recent graduates said financial planning overwhelms them the most – and of all the life skills highlighted in the survey, 29% of respondents said it negatively impacts their mental health.
Social media as a learning tool: Social media is helping fill the skills gap, with 33% of recent graduates turning to it for life skills knowledge.
From DSC: Our son would agree with many of these findings. He would like to have learned things like how to do/file his taxes, learn more about healthcare insurance, and similar real-world/highly-applicable types of knowledge. Those involved with K12 curriculum decisions, please take a serious look at this feedback and make the necessary changes/additions.
Integrating technical skills into the high school curriculum can inspire and prepare students for diverse roles. This approach is key to fostering equity and inclusivity in the job market.
By forging partnerships with community colleges and technical schools, high schools can democratize access to education and ensure students from all backgrounds have equal opportunities for success in technical fields.
High schools can expand career possibilities by providing apprenticeships as viable and lucrative alternatives to traditional four-year degrees.
How Much Do Voice Actors Make? — from elevenlabs.io Learn how much voice actors can expect to make and how to create passive income streams with ElevenLabs.
If you’re considering a career in the voice acting industry, you may be wondering: how much do voice actors make?
A voice actor’s salary is based on many factors, from talent and the type of voice work to the ability to market yourself. Voice actors can experience massive earning potential, and a voice actor’s salary can range from tens of thousands of dollars to six figures a year.
In this article, we’ll explore how to make your voice talent work for you, whether you’re an entry-level voice actor or an experienced voice actor, the kind of voice actor’s salary you can expect, and what the highest-paid voice actors earn.
Have you ever wondered how video games create those immersive and dynamic sound effects that react to your every move? From the satisfying crunch of footsteps on different surfaces to the realistic reverberations of gunshots in various environments, game audio has come a long way.
Now, AI is revolutionizing the way video game audio is produced and experienced. AI algorithms and machine learning techniques are being leveraged to power real-time sound effect generation, creating more realistic, adaptive, and efficient sound effects that respond to player actions and in-game events in real-time. For example, ElevenLabs’ upcoming AI Sound Effects feature will allow video game developers to describe a sound and then generate it with AI.
What Are the Best AI Video Game Tools? Looking to enhance your video generation process with AI tools? You’ve come to the right place. Learn all about the top tools and their specific use cases.
From generating realistic assets and environments to crafting compelling narratives and lifelike characters, AI is revolutionizing the way video games are designed and developed.
In this article, we will explore the different types of AI video game tools available and highlight some of the best tools in each category. We’ll delve into the key features and benefits of these tools, helping you understand how they can streamline your game development process and enhance the overall quality of your game.
Whether you’re an indie developer or part of a large studio, understanding the AI landscape and selecting the right tools for your project is crucial. We’ll provide insights into what to look for when choosing an AI video game tool, ensuring that you make an informed decision that aligns with your project’s requirements and budget.
Tools and Apps to Bring Augmented Reality into Your Classroom — from techlearning.com by Steve Baule and Dillon Martinez These digital tools and platforms can support the use of augmented reality in the classroom, making a more dynamic and engaging learning experience
AR allows virtual 3D models, animations, and contextual information to be overlaid on the real world through mobile devices or AR headsets. The Franklin Institute provides a good overview of what constitutes AR, as do the UK’s Talk Business and Tech & Learning. This immersive technology provides unique opportunities for interactive, experiential learning across numerous subjects.
For example, in a science class, students could use an AR app to visualize the 3D structure of a molecule they are studying and interact with it by rotating, resizing, or even building it atom-by-atom. For history lessons, AR can transport students to ancient archaeological sites projected on their desks, where they can explore 3D reconstructions of ruins and artifacts. Google’s Expeditions tool can allow students to take a virtual walkthrough of South Africa and learn about its geography or visit the New Seven Wonders of the World.
London's Frameless is the ultimate immersive art experience. With 42 masterpieces in 4 different galleries, it's the largest permanent multi-sensory experience in the UK.
What about course videos? Professors can create them (by lecturing into a camera for several hours, hopefully in different clothes) from the readings, from their interpretations of the readings, from their own case experiences – from anything they like. But now professors can direct the creation of the videos by talking – actually describing – to a CustomGPT about what they’d like the video to communicate, using their own or another image. Wait. What? They can make a video by talking to a CustomGPT and even select the image they want the “actor” to use? Yes. They can also add a British accent and insert some (GenAI-developed) jokes into the videos if they like. All this and much more is now possible. This means that a professor can specify how long the video should be, what sources should be consulted and describe the demeanor the professor wants the video to project.
From DSC: Though I wasn’t crazy about the clickbait type of title here, I still thought that the article was solid and thought-provoking. It contained several good ideas for using AI.
Excerpt from a recent EdSurge Higher Ed newsletter:
There are darker metaphors though — ones that focus on the hazards for humanity of the tech. Some professors worry that AI bots are simply replacing hired essay-writers for many students, doing work for a student that they can then pass off as their own (and doing it for free).
From DSC: Hmmm…the use of essay writers was around long before AI became mainstream within higher education. So we already had a serious problem where students didn’t see the why in what they were being asked to do. Some students still aren’t sold on the why of the work in the first place. The situation seems to involve ethics, yes, but it also seems to say that we haven’t sold students on the benefits of putting in the work. Students seem to be saying I don’t care about this stuff…I just need the degree so I can exit stage left.
My main point: The issue didn’t start with AI…it started long before that.
This financial stagnation is occurring as we face a multitude of escalating challenges. These challenges include, but are in no way limited to, chronic absenteeism, widespread student mental health issues, critical staff shortages, rampant classroom behavior issues, a palpable sense of apathy for education in students, and even, I dare say, hatred towards education among parents and policymakers.
…
Our current focus is on keeping our heads above water, ensuring our students’ safety and mental well-being, and simply keeping our schools staffed and our doors open.
What is Ed? An easy-to-understand learning platform designed by Los Angeles Unified to increase student achievement. It offers personalized guidance and resources to students and families 24/7 in over 100 languages.
Also relevant/see:
Los Angeles Unified Bets Big on ‘Ed,’ an AI Tool for Students — by Lauraine Langreo
The Los Angeles Unified School District has launched an AI-powered learning tool that will serve as a “personal assistant” to students and their parents. The tool, named “Ed,” can provide students from the nation’s second-largest district information about their grades, attendance, upcoming tests, and suggested resources to help them improve their academic skills on their own time, Superintendent Alberto Carvalho announced March 20. Students can also use the app to find social-emotional-learning resources, see what’s for lunch, and determine when their bus will arrive.
Could OpenAI’s Sora be a big deal for elementary school kids?— from futureofbeinghuman.com by Andrew Maynard Despite all the challenges it comes with, AI-generated video could unleash the creativity of young children and provide insights into their inner worlds – if it’s developed and used responsibly
Like many others, I’m concerned about the challenges that come with hyper-realistic AI-generated video. From deep fakes and disinformation to blurring the lines between fact and fiction, generative AI video is calling into question what we can trust, and what we cannot.
And yet despite all the issues the technology is raising, it also holds quite incredible potential, including as a learning and development tool — as long as we develop and use it responsibly.
I was reminded of this a few days back while watching the latest videos from OpenAI created by their AI video engine Sora — including the one below generated from the prompt “an elephant made of leaves running in the jungle”
…
What struck me while watching this — perhaps more than any of the other videos OpenAI has been posting on its TikTok channel — is the potential Sora has for translating the incredibly creative but often hard to articulate ideas someone may have in their head, into something others can experience.
Can AI Aid the Early Education Workforce? — from edsurge.com by Emily Tate Sullivan During a panel at SXSW EDU 2024, early education leaders discussed the potential of AI to support and empower the adults who help our nation’s youngest children.
While the vast majority of the conversations about AI in education have centered on K-12 and higher education, few have considered the potential of this innovation in early care and education settings.
At the conference, a panel of early education leaders gathered to do just that, in a session exploring the potential of AI to support and empower the adults who help our nation’s youngest children, titled, “ChatECE: How AI Could Aid the Early Educator Workforce.”
Hau shared that K-12 educators are using the technology to improve efficiency in a number of ways, including to draft individualized education programs (IEPs), create templates for communicating with parents and administrators, and in some cases, to support building lesson plans.
Educators are, perhaps rightfully so, cautious about incorporating AI in their classrooms. With thoughtful implementation, however, AI image generators, with their ability to use any language, can provide powerful ways for students to engage with the target language and increase their proficiency.
While AI offers numerous benefits, it’s crucial to remember that it is a tool to empower educators, not replace them. The human connection between teacher and student remains central to fostering creativity, critical thinking, and social-emotional development. The role of teachers will shift towards becoming facilitators, curators, and mentors who guide students through personalized learning journeys. By harnessing the power of AI, educators can create dynamic and effective classrooms that cater to each student’s individual needs. This paves the way for a more engaging and enriching learning experience that empowers students to thrive.
In this article, seven teachers across the world share their insights on AI tools for educators. You will hear a host of varied opinions and perspectives on everything from whether AI could hasten the decline of learning foreign languages to whether AI-generated lesson plans are an infringement on teachers’ rights. A common theme emerged from those we spoke with: just as the internet changed education, AI tools are here to stay, and it is prudent for teachers to adapt.
Even though it’s been more than a year since ChatGPT made a big splash in the K-12 world, many teachers say they are still not receiving any training on using artificial intelligence tools in the classroom.
More than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.
From DSC: This article mentioned the following resource:
Below are some items for those creatives who might be interested in telling stories, designing games, crafting audio-based experiences, composing music, developing new worlds using 3D graphics, and more.
The sounds of any game can make or break the experience for its players. Many of our favorite adventures come roaring back into our minds when we hear a familiar melody, or maybe it’s a special sound effect that reminds us of our time performing a particularly heroic feat… or the time we just caused some havoc with friends. With Lightfall sending Guardians to explore the new destination of Neomuna, there’s an entire universe hidden away within the sounds—both orchestral and diegetic—for Guardians to uncover and immerse themselves in. We recently assembled some of Destiny’s finest sound designers and composers to dive a little bit deeper into the stunning depths of Neomuna’s auditory experience.
Before diving into the interview with our incredible team, we wanted to make sure you have seen the Lightfall music documentary that went out shortly after the expansion’s release. This short video is a great introduction to how our team worked to create the music of Lightfall and is a must-see for audiophiles and Destiny fans alike.
Every game has a story to tell, a journey to take players through that — if done well — can inspire wonderful memories that last a lifetime. Unlike other storytelling mediums, the art of video games is an intricate interweaving of experiences, including psychological cues that are designed to entrance players and make them feel like they’re a part of the story. One way this is achieved is through the art of audio. And no, we aren’t just talking about the many incredible soundtracks out there, we’re talking about the oftentimes overlooked universe of audio design.
… What does an audio designer do?
“Number one? We don’t work on music. That’s a thing almost everyone thinks every audio designer does,” jokes Nyte when opening up about beginning her quest into the audio world. “That, or for a game like Destiny, people just assume we only work on weapon sounds and nothing else. Which, [Juan] Uribe does, but a lot of us don’t. There is this entire gamut of other sounds that are in-game that people don’t really notice. Some do, and that’s always cool, but audio is about all sounds coming together for a ‘whole’ audio experience.”
On the Transformation of Entertainment
What company will be the Pixar of the AI era? What talent agency will be the CAA of the AI era? How fast can the entertainment industry evolve to natively leverage AI, and what parts will be disrupted by the industry’s own ambivalence? Or are all of these questions myopic…and should we anticipate a wave of entirely new categories of entertainment?
We are starting to see material adoption of AI tools across many industries, including media and entertainment. No doubt, these tools will transform the processes behind generating content. But what entirely new genres of content might emerge? The platform shift to AI-based workflows might give rise to entirely new types of companies that transform entertainment as we know it – from actor representation, Hollywood economics, consumption devices and experiences, to the actual mediums of entertainment themselves. Let’s explore just a few of the more edgy implications:
Dave told me that he couldn’t have made Borrowing Time without AI—it’s an expensive project that traditional Hollywood studios would never bankroll. But after Dave’s short went viral, major production houses approached him to make it a full-length movie. I think this is an excellent example of how AI is changing the art of filmmaking, and I came out of this interview convinced that we are on the brink of a new creative age.
We dive deep into the world of AI tools for image and video generation, discussing how aspiring filmmakers can use them to validate their ideas, and potentially even secure funding if they get traction. Dave walks me through how he has integrated AI into his movie-making process, and as we talk, we make a short film featuring Nicolas Cage using a haunted roulette ball to resurrect his dead movie career, live on the show.
SAN JOSE, Calif. – [On 2/20/23], Adobe (Nasdaq:ADBE) introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat.
…
Simply open Reader or Acrobat and start working with the new capabilities, including:
AI Assistant: AI Assistant recommends questions based on a PDF’s content and answers questions about what’s in the document – all through an intuitive conversational interface.
Generative summary: Get a quick understanding of the content inside long documents with short overviews in easy-to-read formats.
Intelligent citations: Adobe’s custom attribution engine and proprietary AI generate citations so customers can easily verify the source of AI Assistant’s answers.
Easy navigation:
Formatted output:
Respect for customer data:
Beyond PDF: Customers can use AI Assistant with all kinds of document formats (Word, PowerPoint, meeting transcripts, etc.)
Essential skills to thrive with Sora AI
The realm of video editing isn’t just about cutting and splicing.
A Video Editor should learn a diverse set of skills to earn money, such as:
Prompt Writing
Software Mastery
Problem-solving skills
Collaboration and communication skills
Creative storytelling and visual aesthetics
Invest in those skills that give you a competitive edge.
The text file that runs the internet — from theverge.com by David Pierce For decades, robots.txt governed the behavior of web crawlers. But as unscrupulous AI companies seek out more and more data, the basic social contract of the web is falling apart.
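For readers who have never looked under the hood, robots.txt is just a plain-text list of per-crawler rules that well-behaved bots are expected to honor. Here is a minimal sketch using only Python’s standard library; the rules shown are illustrative, following the now-common pattern of opting out of AI crawlers such as OpenAI’s GPTBot.

```python
# Sketch: how a crawler checks robots.txt before fetching a page.
# Uses only the Python standard library; the rules below are illustrative.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("GPTBot", "SomeSearchBot"):
    allowed = parser.can_fetch(agent, "https://example.com/article.html")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Compliance is entirely voluntary, which is exactly the social contract the article argues is breaking down.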
Blogs are like a screenplay to a mental movie the student has made. It’s a kind of narrative, but in a way that’s more associative, the way film can be.
Grush: What about your recorded online class sessions? Do they present another path to cinematic thinking?
Campbell: Yes! A couple years ago I started describing what I did with online learning as making movies on location. That referred to the way that I really wanted each of our class meetings to be: a kind of experience, not just for students to be here as I’m lecturing, though I may be doing that, but an experience that’s similar to a live television show. Or almost like a live recording session. Of course, we’re making something that is recorded on video, and you can go back and look at it to get the flow of the experience of our time together: the way in which that story exists through time.