The ChatGPT of music? — from joinsuperhuman.ai by Zain Kahn. ALSO: EY releases new AI platform after $1.4B investment
Here’s what you need to know:
You can feed the app prompts for both music (like classic rock) and sounds (like raindrops on a window).
The platform can generate sounds across any genre and can mix and produce sounds from multiple genres too.
The output can be used for personal entertainment and commercial purposes, like audio content for an ad.
There’s a free version that lets you generate 20 tracks of up to 45 seconds each for non-commercial use; the paid version offers 500 tracks of up to 90 seconds each and can be used commercially.
Stability AI, the world’s leading open generative AI company, today announced the launch of Stable Audio, the company’s first AI product for music and sound generation.
On the topic of AI, also see:
Generative AI and intellectual property — from ben-evans.com by Benedict Evans If you put all the world’s knowledge into an AI model and use it to make something new, who owns that and who gets paid? This is a completely new problem that we’ve been arguing about for 500 years.
Boosting Your Productivity: 5 ChatGPT Prompts That Work Wonders — from wireprompt.substack.com To truly harness the power of ChatGPT, we need prompts that are crystal clear, specific to our needs, and tailored to our unique situations. Here are five ChatGPT prompts that have proven to be productivity powerhouses, no matter your role or goals…
We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week:
1. Prompt examples: A blank page can be intimidating. At the beginning of a new chat, you’ll now see examples to help you get started.
2. Suggested replies: Go deeper with…
The movie trailer for “Genesis,” created with AI, is so convincing it caused a stir on Twitter [on July 27]. That’s how I found out about it. Created by Nicolas Neubert, a senior product designer who works for Elli by Volkswagen in Germany, the “Genesis” trailer promotes a dystopian sci-fi epic reminiscent of the Terminator. There is no movie, of course, only the trailer exists, but this is neither a gag nor a parody. It’s in a class of its own. Eerily made by man, but not.
Trailer: Genesis (Midjourney + Runway)
We gave them everything.
Trusted them with our world.
To become enslaved – become hunted.
We have no choice.
Humanity must rise again to reclaim.
Google just published its 2023 environmental report, and one thing is for certain: The company’s water use is soaring.
The internet giant said it consumed 5.6 billion gallons of water in 2022, enough to irrigate roughly 37 golf courses. Most of that — 5.2 billion gallons — was used for the company’s data centers, a 20% increase over the amount Google reported the year prior.
We think prompt engineering (learning to converse with an AI) is overrated. Yup, we said it. We think the future of chat interfaces will be a combination of preloading context and then allowing AI to guide you to the information you seek.
From DSC: Agreed. I think we’ll see a lot more interface updates and changes to make things easier to use, find, and develop.
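The "preloading context" pattern mentioned above is easy to make concrete: the application, not the user, supplies the background the model needs, then surfaces suggested next steps so the user never faces a blank page. A minimal sketch, assuming a chat-completion-style message format; the function names and suggestion text are hypothetical illustrations, not any particular product's API:

```python
# Sketch of "preloading context": the app seeds the conversation with
# context, then guides the user with suggested follow-ups.
# The message structure mirrors common chat-completion APIs; the
# helper names and strings here are hypothetical.

def build_chat_request(user_question: str, preloaded_context: str) -> list[dict]:
    """Assemble a chat request with app-supplied context prepended."""
    return [
        {"role": "system", "content": preloaded_context},
        {"role": "user", "content": user_question},
    ]

def suggest_replies(topic: str) -> list[str]:
    """Offer guided next steps instead of requiring prompt engineering."""
    return [
        f"Give me a beginner-friendly overview of {topic}.",
        f"What are common misconceptions about {topic}?",
        f"Quiz me on {topic}.",
    ]

messages = build_chat_request(
    "How do I get started?",
    "You are a tutor for an intro statistics course. The student has "
    "completed unit 1 (descriptive statistics).",
)
```

The point of the design is that the user's first message can be as vague as "How do I get started?" — the preloaded context and the suggested replies do the work that prompt engineering would otherwise demand.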
Artificial Intelligence continues to dominate the news. In the past month, we’ve seen a number of major updates to language models: Claude 2, with its 100,000 token context limit; LLaMA 2, with (relatively) liberal restrictions on use; and Stable Diffusion XL, a significantly more capable version of Stable Diffusion. Does Claude 2’s huge context really change what the model can do? And what role will open access and open source language models have as commercial applications develop?
Google Lab Sessions are collaborations between “visionaries from all realms of human endeavor” and the company’s latest AI technology. [On 8/2/23], Google released TextFX as an “experiment to demonstrate how generative language technologies can empower the creativity and workflows of artists and creators” with Lupe Fiasco.
Google’s TextFX includes 10 tools and is powered by the PaLM 2 large language model via the PALM API. Meant to aid in the creative process of rappers, writers, and other wordsmiths, it is part of Google Labs.
Post-AI Assessment Design — from drphilippahardman.substack.com by Dr. Philippa Hardman A simple, three-step guide on how to design assessments in a post-AI world
Excerpt:
Step 1: Write Inquiry-Based Objectives
Inquiry-based objectives focus not just on the acquisition of knowledge but also on the development of skills and behaviours, like critical thinking, problem-solving, collaboration and research skills.
They do this by requiring learners not just to recall or “describe back” concepts that are delivered via text, lecture or video. Instead, inquiry-based objectives require learners to construct their own understanding through the process of investigation, analysis and questioning.
Just for a minute, consider how education would change if the following were true –
AIs “hallucinated” less than humans
AIs could write in our own voices
AIs could accurately do math
AIs understood the unique academic (and eventually developmental) needs of each student and adapt instruction to that student
AIs could teach anything any student wanted or needed to know, any time of day or night
AIs could do this at a fraction of the cost of a human teacher or professor
Fall 2026 is three years away. Do you have a three-year plan? Perhaps you should scrap it and write a new one (or at least realize that your current one cannot survive). If you run an academic institution in 2026 the same way you ran it in 2022, you might as well run it like you would have in 1920. If you run an academic institution in 2030 (or any year when AI surpasses human intelligence) the same way you ran it in 2022, you might as well run it like you would have in 1820. AIs will become more intelligent than us, perhaps in 10-20 years (LeCun), though there could be unanticipated breakthroughs that shorten the timeframe to a few years or less (Bengio); it’s just a question of when, not “if.”
On one creative use of AI — from aiandacademia.substack.com by Bryan Alexander A new practice with pedagogical possibilities
Excerpt:
Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created by human prompts in Midjourney. The music is, I think, human created. And the idea came from a discussion between a human and an AI?
…
How might this play out in a college or university class?
Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:
I continue to try to imagine ways generative AI can impact teaching and learning, including learning materials like textbooks. Earlier this week I started wondering – what if, in the future, educators didn’t write textbooks at all? What if, instead, we only wrote structured collections of highly crafted prompts? Instead of reading a static textbook in a linear fashion, the learner would use the prompts to interact with a large language model. These prompts could help learners ask for things like:
overviews and in-depth explanations of specific topics in a specific sequence,
examples that the learner finds personally relevant and interesting,
interactive practice – including open-ended exercises – with immediate, corrective feedback,
the structure of the relationships between ideas and concepts,
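The "textbook of prompts" idea above can be sketched as a structured collection of templates that each learner fills in and sends to a model. Everything below is a hypothetical illustration of the concept, not an existing product; the chapter keys and template wording are invented for the example:

```python
# A "textbook" as an ordered collection of crafted prompt templates.
# Each entry maps a learning goal to a prompt; {topic} and {interest}
# are filled in per learner. Purely illustrative.

PROMPT_TEXTBOOK = {
    "chapter_1_overview": (
        "Give me a plain-language overview of {topic}, then a more "
        "in-depth explanation, in that order."
    ),
    "chapter_1_examples": (
        "Explain {topic} using examples drawn from {interest}."
    ),
    "chapter_1_practice": (
        "Give me an open-ended exercise on {topic}. After I answer, "
        "provide immediate, corrective feedback."
    ),
    "chapter_1_concept_map": (
        "Describe how the key ideas in {topic} relate to one another."
    ),
}

def render_prompt(key: str, **fields: str) -> str:
    """Fill a template with this learner's topic and interests."""
    return PROMPT_TEXTBOOK[key].format(**fields)

prompt = render_prompt(
    "chapter_1_examples", topic="standard deviation", interest="basketball"
)
```

Note how the four templates line up with the four bullets above: sequenced explanation, personally relevant examples, interactive practice with feedback, and the structure of relationships between concepts.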
Designed for K12 and Higher-Ed Educators & Administrators, this conference aims to provide a platform for educators, administrators, AI experts, students, parents, and EdTech leaders to discuss the impact of AI on education, address current challenges and potentials, share their perspectives and experiences, and explore innovative solutions. A special emphasis will be placed on including students’ voices in the conversation, highlighting their unique experiences and insights as the primary beneficiaries of these educational transformations.
The use of generative AI in K-12 settings is complex and still in its infancy. We need to consider how these tools can enhance student creativity, improve writing skills, and be transparent with students about how generative AI works so they can better understand its limitations. As with any new tech, our students will be exposed to it, and it is our task as educators to help them navigate this new territory as well-informed, curious explorers.
The education ministry has emphasized the need for students to understand artificial intelligence in new guidelines released Tuesday, setting out how generative AI can be integrated into schools and the precautions needed to address associated risks.
Students should comprehend the characteristics of AI, including its advantages and disadvantages, with the latter including personal information leakages and copyright infringement, before they use it, according to the guidelines. They explicitly state that passing off reports, essays or any other works produced by AI as one’s own is inappropriate.
Thanks to the rapid development of artificial intelligence tools like Dall-E and ChatGPT, my brother-in-law has been wrestling with low-level anxiety: Is it a good idea to steer his son down this path when AI threatens to devalue the work of creatives? Will there be a job for someone with that skill set in 10 years? He’s unsure. But instead of burying his head in the sand, he’s doing what any tech-savvy parent would do: He’s teaching his son how to use AI.
In recent months the family has picked up subscriptions to AI services. Now, in addition to drawing and sculpting and making movies and video games, my nephew is creating the monsters of his dreams with Midjourney, a generative AI tool that uses language prompts to produce images.
To bridge this knowledge gap, I decided to make a quick little dictionary of AI terms specifically tailored for educators worldwide. Initially created for my own benefit, I’ve reworked my own AI Dictionary for Educators and expanded it to help my fellow teachers embrace the advancements AI brings to education.
Chatbots, data scientists, software engineers. As clients demand more for less, law firms are hiring growing numbers of staff who’ve studied technology, not tort law, to try to stand out from their rivals.
Law firms are advertising for experts in artificial intelligence “more than ever before,” says Chris Tart-Roberts, head of the legal technology practice at Macfarlanes, describing a trend he says began about six months ago.
Law firm leaders and consultants are unsure of how AI use will ultimately impact the legal workforce.
Consultants are advising law firms and attorneys alike to adapt to the use of generative AI, viewing this as an opportunity for attorneys to learn new skills and law firms to take a look at their business models.
Split between foreseeing job cuts and opportunities to introduce new skills and additional efficiencies into the office, firm leaders and consultants remain uncertain about the impact of artificial intelligence on the legal workforce.
However, one thing is certain: law firms and attorneys need to adapt and learn how to integrate this new technology in their business models, according to consultants.
Are you overwhelmed by countless cases, complex legal concepts, and endless readings? Law School AI is here to help. Our cutting-edge AI chatbot is designed to provide law students with an accessible, efficient, and engaging way to learn the law. Our chatbot simplifies complex legal topics, delivers personalized study guidance, and answers your questions in real-time – making your law school journey a whole lot easier.
Job title of the future: metaverse lawyer — from technologyreview.com by Amanda Smith Madaline Zannes’s virtual offices come with breakout rooms, an art gallery… and a bar.
Excerpt:
Lot #651 on Somnium Space belongs to Zannes Law, a Toronto-based law firm. In this seven-level metaverse office, principal lawyer Madaline Zannes conducts private consultations with clients, meets people wandering in with legal questions, hosts conferences, and gives guest lectures. Zannes says that her metaverse office allows for a more immersive, imaginative client experience. She hired a custom metaverse builder to create the space from scratch—with breakout rooms, presentation stages, offices to rent, an art gallery, and a rooftop bar.
Greg spoke with an AI guest named Justis for this episode. Justis, powered by OpenAI’s GPT-4, was able to have a natural conversation with Greg and provide insightful perspectives on the use of generative AI in the legal industry, specifically in law firms.
In the first part of their discussion, Justis gave an overview of the legal industry’s interest in and uncertainty around adopting generative AI. While many law firm leaders recognize its potential, some are unsure of how it fits into legal work or worry about risks. Justis pointed to examples of firms exploring AI and said letting lawyers experiment with the tools could help identify use cases.
Putting Humans First: Solving Real-Life Problems With Legal Innovation— from abovethelaw.com by Olga Mack Placing the end-user at the heart of the process allows innovators to identify pain points and create solutions that directly address the unique needs and challenges individuals and businesses face.
A pilot project designed to test the potential of artificial intelligence tools at McCarthy Tétrault LLP showed that certain types of applications for the legal profession seemed to work better than others, panellists told attendees to the recent Canadian Lawyer Legal Tech Summit.
“I would say that the results were mixed,” David Cohen, senior director of client service delivery for the firm. During the panel, moderated by University of Calgary assistant professor Gideon Christian, Cohen spoke about a pilot of about 40 lawyers from different practices at the firm who used an AI platform with only public data.
“The group [testing the platform] said it needs to get better before we start using this for research,” he said. However, he said, when it came to tasks like generating documents, reviewing 100-page cases “and summarizing and analyzing them,” the AI platforms did much better.
To help with this, our Client Success Team have summarised the eight key legal technology trends in the market, as well as the themes discussed at recent legal technology events and conferences including the British Legal Technology Forum and iManage ConnectLive Virtual 2023, both of which we were proud to sponsor.
On a somewhat related note, also see:
Designing the Law Office of the Future — from workdesign.com by Deborah Nemeth Deborah Nemeth of SmithGroup shares how inspiration from higher education and hospitality can help inform the next evolution of the law office.
My hypothesis and research suggest that as bar associations and the ABA begin to recognize the on-going systemic issues of high-cost legal education, growing legal deserts (where no lawyer serves a given population), on-going and pervasive access to justice issues, and a public that is already weary of the legal system – alternative options that are already in play might become more supported.
What might that look like?
The combination of AI-assisted education with traditional legal apprenticeships has the potential to create a rich, flexible, and engaging learning environment. Here are three scenarios that might illustrate what such a combination could look like:
Scenario One – Personalized Curriculum Development
Scenario Two – On-Demand Tutoring and Mentoring
Scenario Three – AI-assisted Peer Networks and Collaborative Learning:
We know that there are challenges – a threat to human jobs, the potential implications for cyber security and data theft, or perhaps even an existential threat to humanity as a whole. But we certainly don’t yet have a full understanding of all of the implications. In fact, a World Economic Forum report recently stated that organizations “may currently underappreciate AI-related risks,” with just four percent of leaders considering the risk level to be “significant.”
A survey carried out by analysts Baker McKenzie concluded that many C-level leaders are over-confident in their assessments of organizational preparedness in relation to AI. In particular, it exposed concerns about the potential implications of biased data when used to make HR decisions.
AI & lawyer training: How law firms can embrace hybrid learning & development — thomsonreuters.com A big part of law firms’ successful adaptation to the increased use of ChatGPT and other forms of generative AI may depend upon how firmly they embrace online learning & development tools designed for hybrid work environments
Excerpt:
As law firms move forward in using advanced artificial intelligence such as ChatGPT and other forms of generative AI, their success may hinge upon how they approach lawyer training and development and what tools they enlist for the process.
One of the tools that some law firms use to deliver a new, multi-modal learning environment is an online, video-based learning platform, Hotshot, that delivers more than 250 on-demand courses on corporate, litigation, and business skills.
Ian Nelson, co-founder of Hotshot, says he has seen a dramatic change in how law firms are approaching learning & development (L&D) in the decade or so that Hotshot has been active. He believes the biggest change is that 10 years ago, firms hadn’t yet embraced the need to focus on training and development.
From DSC: Heads up law schools. Are you seeing/hearing this!?
Are we moving more towards a lifelong learning model within law schools?
If not, shouldn’t we be doing that?
Are LLM programs expanding quickly enough? Is more needed?
Bard, Google’s answer to OpenAI’s ChatGPT, is getting new generative AI capabilities courtesy of Adobe.
Adobe announced today that Firefly, its recently introduced collection of AI models for generating media content, is coming to Bard alongside Adobe’s free graphic design tool, Adobe Express. Firefly — currently in public beta — will become the “premier generative AI partner” for Bard, Adobe says, powering text-to-image capabilities.
Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.
In particular, we’re betting on four trends for AI and L&D.
Rapid content production
Personalized content
Detailed, continuous feedback
Learner-driven exploration
In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.
…
Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.
Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.
With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.
NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.
Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.
The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.
Also relevant to the item from Nvidia (above), see:
No better way for Judges to learn about both how AI could improve courts and the risks of AI (e.g., deepfakes) than to experiment with it.
Check out the AI avatars that @Judgeschlegel created: https://t.co/UqbJ2PHA09 #AI4Law #Law4AI
Most reported feeling the justice system was “unfair,” and many described a sense of “the odds being stacked against them.”
Advocates say the rising number of lawyer-free litigants is problematic. The legal system is meant to be adversarial — with strong lawyers on each side — but the high rate of self-representation creates lopsided justice, pitting an untrained individual against a professional.
AI will likely make lawyers’ jobs easier (or, at least, more interesting) for some tasks; however, the effects it may have on the legal profession could be the real legacy of the technology. Schafer pointed to its potential to improve access to justice for people who want legal representation but can’t get it for whatever reason.
A New Era for Education — from linkedin.com by Amit Sevak, CEO of ETS and Timothy Knowles, President of the Carnegie Foundation for the Advancement of Teaching
Excerpt (emphasis DSC):
It’s not every day you get to announce a revolution in your sector. But today, we’re doing exactly that. Together, we are setting out to overturn 117 years of educational tradition. … The fundamental assumption [of the Carnegie Unit] is that time spent in a classroom equals learning. This formula has the virtue of simplicity. Unfortunately, a century of research tells us that it’s woefully inadequate.
From DSC: It’s more than interesting to think that the Carnegie Unit has outlived its usefulness and is breaking apart. In fact, the thought is very profound.
If that turns out to be the case, the ramifications will be enormous and we will have the opportunity to radically reinvent/rethink/redesign what our lifelong learning ecosystems will look like and provide.
So I appreciate what Amit and Timothy are saying here and I appreciate their relaying what the new paradigm might look like. It goes with the idea of using design thinking to rethink how we build/reinvent our learning ecosystems. They assert:
It’s time to change the paradigm. That’s why ETS and the Carnegie Foundation have come together to design a new future of assessment.
Whereas the Carnegie Unit measures seat time, the new paradigm will measure skills — with a focus on the ones we know are most important for success in career and in life.
Whereas the Carnegie Unit never leaves the classroom, the new paradigm will capture learning wherever it takes place — whether that is in after-school activities, during a work-experience placement, in an internship, on an apprenticeship, and so on.
Whereas the Carnegie Unit offers only one data point — pass or fail — the new paradigm will generate insights throughout the learning process, the better to guide students, families, educators, and policymakers.
I could see this type of information being funneled into peoples’ cloud-based learner profiles — which we as individuals will own and determine who else can access them. I diagrammed this back in January of 2017 using blockchain as the underlying technology. That may or may not turn out to be the case. But the concept will still hold I think — regardless of the underlying technology(ies).
For example, we are seeing a lot more articles regarding things like Comprehensive Learner Records (CLR) or Learning and Employment Records (LER; example here), and similar items.
Speaking of reinventing our learning ecosystems, also see:
Every year, I write a year-end wrap-up of the most significant developments in legal technology.
At the end of the past decade, I decided to look back on the most significant developments of the 2010s as a whole. It may well have been the most tumultuous decade ever in changing how legal services are delivered.
Here, I revisit those changes — and add a few post-2020 updates.
From DSC: Before we get to Scott Belsky’s article, here’s an interesting/related item from Tobi Lutke:
I just clued in to how insane text2vid will get soon. As crazy as this sounds, we will be able to generate movies from just minor prompts and the path there is pretty clear.
Recent advances in technology will shake the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.
A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – will be used to train models. Expect sequels made without actor participation, a new era of AI-outfitted creative economy participants, a deluge of imaginative media that would have been cost prohibitive, and copyright wars and legislation.
Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spinoffs that I expect we will start seeing.
This is how I viewed a fascinating article about the so-called #AICinema movement. Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland. It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.
From DSC: How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?
As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”
Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.
Excerpt (emphasis DSC):
We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.
New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.
This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.
Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.
We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality: https://t.co/ySek2oevod pic.twitter.com/S61MTpddOV
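For context on what the plugin protocol involved: a plugin was described by a small manifest pointing at an OpenAPI spec for the tool's endpoints. Here is a rough sketch of such a manifest as a Python dict; the field names follow OpenAI's published `ai-plugin.json` example at the time, but the plugin name, URLs, and descriptions are hypothetical placeholders, so verify against current documentation before relying on any of this:

```python
# Rough sketch of a ChatGPT plugin manifest (ai-plugin.json) as a
# Python dict. Field names follow OpenAI's published example at the
# time; the name, URLs, and descriptions are hypothetical placeholders.

plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Case Summarizer",
    "name_for_model": "case_summarizer",
    "description_for_human": "Summarize long legal documents.",
    "description_for_model": (
        "Use this tool when the user asks to summarize or analyze "
        "a legal document."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # hypothetical
    },
    "logo_url": "https://example.com/logo.png",      # hypothetical
    "contact_email": "support@example.com",          # hypothetical
    "legal_info_url": "https://example.com/legal",   # hypothetical
}

# The model reads description_for_model to decide when to call the
# tool; the OpenAPI spec at api.url tells it how to call it.
required = {"schema_version", "name_for_model", "api"}
assert required <= plugin_manifest.keys()
```

The interesting design choice is that the model itself decides when to invoke the tool, based solely on the natural-language `description_for_model` — which is why the announcement stresses safety as a core design principle.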
LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”
Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.
“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
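Marcus's "auto-complete on steroids" description can be made concrete with a toy model: count which word follows which in a corpus, then repeatedly emit the most likely next word. Real LLMs replace the count table with a neural network over subword tokens and train on vastly more data, but the predict-the-next-token loop is the same basic shape. A deliberately tiny sketch:

```python
# Toy next-word predictor: "auto-complete on steroids," minus the
# steroids. Builds bigram counts from a tiny corpus, then greedily
# extends a prompt with the most frequent continuation. Note there is
# no "understanding" anywhere — only counting and lookup.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 3) -> str:
    """Greedily append the most plausible next word, n_words times."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the cat"))  # -> "the cat sat on the"
```

The output is fluent-looking precisely because it echoes the statistics of its training text — which is also why, as Marcus warns, fluency is so easy to mistake for comprehension.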
10 gifts we unboxed at Canva Create — from canva.com Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.