Teaching with music can enhance learning in almost any subject area, says Sherena Small, a school social worker at Champaign Unit 4 School District in Illinois.
“It’s just such a good way to enhance what kids are learning,” says Small, who uses hip-hop and other music to teach social-emotional learning skills, including empathy and active listening. Earlier this year, Nearpod recognized Small as an Educator of the Year for her innovative efforts using Nearpod’s Flocabulary tool to incorporate music into class.
Speaking of multimedia, also see:
And here’s another interesting item from Dr. Burns:
“With Vision Pro, you’re no longer limited by a display,” Apple CEO Tim Cook said, introducing the new headset at WWDC 2023. Contrary to earlier mixed-reality reports, the system is far more focused on augmented reality than on virtual reality. The company refers to this new paradigm as “spatial computing.”
“This is the first Apple product you look through and not at.” – Tim Cook
And with those famous words, Apple announced a new era of consumer tech.
Apple’s new headset will run on visionOS – its new operating system – and will work with existing iOS and iPadOS apps. The new OS is created specifically for spatial computing — the blending of digital content into real space.
Vision Pro is controlled through hand gestures, eye movements, and your voice (parts of it assisted by AI). You can use apps, change their size, capture photos and videos, and more.
From DSC: Time will tell what happens with this new operating system and with this type of platform. I’m impressed with the engineering — as Apple wants me to be — but I doubt this will become mainstream for quite some time yet. Also, I wonder what Steve Jobs would think of this…? Would he say that people will be willing to wear this headset (for long? at all?)? What about Jony Ive?
I’m sure the offered experiences will be excellent. But I won’t be buying one, as it’s waaaaaaaaay too expensive.
Ernst & Young dug a little deeper. “Today’s disruptive working landscape requires organisations to largely restructure the way they are doing work,” they noted in a bulletin in March this year. “Time now spent on tasks will be equally divided between people and machines. For these reasons, workforce roles will change and so do the skills needed to perform them.”
The World Economic Forum has pointed to this global skills gap and estimates that, while 85 million jobs will be displaced, 50% of all employees will need reskilling and/or upskilling by 2025. This, it almost goes without saying, will require Learning and Development departments to do the heavy lifting, not only in this initial transformational phase but also in an ongoing capacity.
“And that’s the big problem,” says Hardman. “2025 is only two and a half years away and the three pillars of L&D – knowledge transference, knowledge reinforcement and knowledge assessment – are crumbling. They have been unchanged for decades and are now, faced by revolutionary change, no longer fit for purpose.”
ChatGPT is the shakeup education needs — from eschoolnews.com by Joshua Sine As technology evolves, industries must evolve alongside it, and education is no exception – especially when students heavily and regularly rely on edtech
Key points:
Education must evolve along with technology–students will expect it
Embracing new technologies helps education leverage adaptive technologies that engage student interest
Changed by Our Journey: Engaging Students through Simulive Learning — from er.educause.edu by Lisa Lenze and Megan Costello In this article, an instructor explains how she took an alternative approach to teaching—simulive learning—and discusses the benefits that have extended to her in-person classrooms.
Excerpts:
Mustering courage, Costello devised a novel way to (1) share the course at times other than when it was regularly scheduled and (2) fully engage with her students in the chat channel during the scheduled class meeting time. Her solution, which she calls simulive learning, required her to record her lectures and watch them with her students. (Courageous, indeed!)
Below, Costello and I discuss what simulive learning looks like, how it works, and how Costello has taken her version of remote synchronous teaching forward into current semesters.
Megan Costello: I took a different approach to remote synchronous online learning at the start of the pandemic. Instead of using traditional videoconferencing software to hold class, I prerecorded, edited, and uploaded videos of my lectures to a streaming website. This website allowed me to specify a time and date to broadcast my lectures to my students. Because the lectures were already prepared, I could watch and participate in the chat with my students as we encountered the materials together during the scheduled class time. I drove conversations in chat, asked questions, and got students engaged as we covered materials for the day. The students had my full attention.
Last night, Jensen Huang of NVIDIA gave his very first live keynote in four years.
The most show-stopping moment of the event was when he showed off real-time AI in video games. A human speaks, the NPC responds in real time, and the dialogue is generated with AI on the fly. pic.twitter.com/TDoUM1zSiy
Bill Gates says AI is poised to destroy search engines and Amazon — from futurism.com by Victor Tangermann Who will win the AI [competition]? (DSC: I substituted the word competition here, as that’s what it is. It’s not a war, it’s a part of America’s way of doing business.)
“Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” Gates said during a Goldman Sachs event on AI in San Francisco this week, as quoted by CNBC.
These AI assistants could “read the stuff you don’t have time to read,” he said, allowing users to get to information without having to use a search engine like Google.
The online learning platform edX introduced two new tools on Friday based on OpenAI’s ChatGPT technology: an edX plugin for ChatGPT and a learning assistant embedded in the edX platform, called Xpert.
According to the company, its plugin will enable ChatGPT Plus subscribers to discover educational programs and explore learning content such as videos and quizzes across edX’s library of 4,200 courses.
Bing is now the default search for ChatGPT — from theverge.com by Tom Warren; via superhuman.beehiiv.com The close partnership between Microsoft and OpenAI leads to plug-in interoperability and search defaults.
Excerpt:
OpenAI will start using Bing as the default search experience for ChatGPT. The new search functionality will be rolling out to ChatGPT Plus users today and will be enabled for all free ChatGPT users soon through a plug-in in ChatGPT.
Students with mobility challenges may find it easier to use generative AI tools — such as ChatGPT or Elicit — to help them conduct research if that means they can avoid a trip to the library.
Students who have trouble navigating conversations — such as those along the autism spectrum — could use these tools for “social scripting.” In that scenario, they might ask ChatGPT to give them three ways to start a conversation with classmates about a group project.
Students who have trouble organizing their thoughts might benefit from asking a generative AI tool to suggest an opening paragraph for an essay they’re working on — not to plagiarize, but to help them get over “the terror of the blank page,” says Karen Costa, a faculty-development facilitator who, among other things, focuses on teaching, learning, and living with ADHD. “AI can help build momentum.”
ChatGPT is good at productive repetition. That is a practice most teachers use anyway to reinforce learning. But AI can take that to the next level by allowing students who have trouble processing information to repeatedly generate examples, definitions, questions, and scenarios of concepts they are learning.
It’s not all on you to figure this out and have all the answers. Partner with your students and explore this together.
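The “productive repetition” idea above lends itself to simple scripting. Here is a minimal sketch of how a student (or instructor) could generate varied prompts to paste into ChatGPT, or send through an API, to repeatedly regenerate examples, definitions, questions, or scenarios for a concept. The function name and prompt wording are my own illustrations, not something from the article:

```python
def repetition_prompt(concept: str, kind: str = "examples", count: int = 3) -> str:
    """Build a prompt that asks a chat model to restate a concept in a new form.

    Repeating a concept as examples, definitions, questions, or scenarios
    is the kind of "productive repetition" described above.
    """
    kinds = {
        "examples": f"Give {count} fresh, concrete examples of {concept}.",
        "definitions": f"Define {concept} in {count} different ways, from simple to precise.",
        "questions": f"Write {count} practice questions that test understanding of {concept}.",
        "scenarios": f"Describe {count} real-world scenarios where {concept} applies.",
    }
    if kind not in kinds:
        raise ValueError(f"unknown kind: {kind}")
    return kinds[kind]

# A student who needs another pass at a concept just changes `kind`:
print(repetition_prompt("photosynthesis", "questions"))
```

The same prompts could be wired to any chat-style model; the point is that regenerating variations takes one parameter change rather than a fresh effort each time.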
From DSC: It was interesting to see how people are using AI these days. The article mentioned things from planning gluten-free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers, and educators in general may find Elicit, Scholarcy, and Scite to be useful tools. I put a question into Elicit and it looks interesting. I like their interface, which allows me to quickly re-sort things.
There Is No A.I. — from newyorker.com by Jaron Lanier There are ways of controlling the new technology—but first we have to stop mythologizing it.
Excerpts:
If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.
…
The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.
What if you could create storyboards, change the color of a video, and generate relevant sound effects just by working with an AI generator? It may be on the (generated sunset) horizon sooner rather than later: Adobe hopes to transform these processes through its AI product Firefly. As shared today at NAB Show 2023, Adobe looks to expand Firefly’s abilities in the creative sphere, including font changes and effects, script analysis, color changes, generated music, storyboard edits, and more.
From DSC: Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:
I just clued in how insane text2vid will get soon. As crazy as this sounds, we will be able to generate movies from just minor prompts and the path there is pretty clear.
Recent advances in technology will shake up culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.
A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – will be used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative-economy participants, a deluge of imaginative media that would have been cost-prohibitive, and copyright wars and legislation.
Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spinoffs that I expect we will start seeing.
This is how I viewed a fascinating article about the so-called #AICinema movement. Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland. It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.
From DSC: How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?
Reopened to the public last month after five years of planning and two and a half years of renovations, the Media Museum of Sound and Vision in Hilversum, the Netherlands, is an immersive experience exploring modern media. It has become a museum that continuously adapts to the actions of its visitors in order to reflect the ever-changing face of media culture.
How we consume media is revealed in five zones in the building: Share, Inform, Sell, Tell and Play. The Media Museum includes more than 50 interactives, with hundreds of hours of AV material and objects from history. The experience uses facial recognition and the user’s own smartphone to make it a personalised museum journey for everyone.
Photo from Mike Bink
From DSC: Wow! There is some serious AV work and creativity in the Media Museum of Sound and Vision!
A report by Pushpay, with data from over 1,700 organisations, has found that while 91% of churches currently livestream worship services on social media, only 47% plan to do the same in the upcoming year.
The report, entitled ‘State of Church Tech 2023’, is available to download here.
The reason cited for this shift is organisations’ lack of control on social media platforms to maintain engagement, as users are bombarded with pop-up windows, notifications, status updates, and more.
This is driving a rise in custom video players, website embeds, mobile app streaming, and other platforms that are better suited to maintain engagement.
Meet Adobe Firefly. — from adobe.com Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.
Generative AI made for creators. With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.
No lights. No camera. All action. Realistically and consistently synthesize new videos – either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video), or by using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.