From DSC:
Check out the items below. As with most technologies, there are likely going to be plusses & minuses regarding the use of AI in digital video, communications, arts, and music.
DC: What?!? Wow. I should have seen this coming. I can see positives & negatives here. Virtual meetings could become a bit more creative/fun. But apps could be a bit scarier in some instances, such as with #telelegal.
If the U.S. economy contracts over the next year or two, as a majority of experts anticipate, there will be an enormous need for education and training. Workers will want to reskill and retrain for a reshaped world of work. Colleges and universities will have a critical role to play in getting Americans back to work and on a path toward more stable careers.
The 39 million Americans with some college but no credential will be the key to recovery, and colleges and universities must redouble their efforts to get these learners back in school and on a path toward new careers.
From DSC: If the above holds true, my question is this: Has higher ed kept up, curriculum- and content-wise?
In 2023, we are going to see a huge increase in content creation generated by AI avatars. The use cases are infinite – from demos and tutorials to billboards and even ads.
Here are 6 companies that are currently defining the future of content creation:
Real Ways Professionals Can Use ChatGPT to Improve Job Performance
Let’s dive into some real examples of how professionals across sales, marketing, product management, project management, recruiting, and teaching can take advantage of this new tool and leverage it for even more impact in their careers.
Teachers and ChatGPT
Help with grading and feedback on student work.
Example prompt: “Tell me every grammar rule that’s been violated in this student’s essay: [paste in essay]”
Create personalized learning materials.
Example prompt: “Help me explain photosynthesis to a 10th grade student in a way similar to sports.”
Generate lesson plans and activities.
Example prompt: “Create an activity for 50 students that revolves around how to learn the different colors of the rainbow.” or “Generate a lesson plan for a high school English class on the theme of identity and self-discovery, suitable for a 45-minute class period.”
Write fake essays several reading levels below your class, then print them out, and have your students review and edit the AI’s work to make it better.
Example prompt: “Generate a 5th grade level short essay about Maya Angelou and her work.”
Providing one-on-one support to students. Example prompt: “How can I best empower an introverted student in my classroom during reading time?”
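Along these lines, a teacher comfortable with a little scripting could batch prompts like those above through OpenAI's API. The sketch below is my own illustration, not something from the article; `build_grading_prompt` and `grade_essay` are hypothetical helper names, and actually running `grade_essay` requires the `openai` package and an API key.

```python
def build_grading_prompt(essay_text: str) -> list:
    """Assemble the chat messages for a grammar-feedback request."""
    return [
        {"role": "system",
         "content": "You are a helpful teaching assistant who reviews student writing."},
        {"role": "user",
         "content": "Tell me every grammar rule that's been violated "
                    f"in this student's essay: {essay_text}"},
    ]

def grade_essay(essay_text: str) -> str:
    # Requires the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=build_grading_prompt(essay_text),
    )
    return response.choices[0].message.content
```

The same pattern would work for any of the other example prompts: swap out the user message and loop over a folder of student files.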
From DSC: I haven’t tried these prompts. Rather, I post this because I’m excited about the potential of Artificial Intelligence (AI) to help people teach and learn.
Therefore, when adopting mobile AR to improve job performance, L&D professionals need to shift their mindset from offering training with AR alone to offering performance support with AR in the middle of the workflow.
The learning director from a supply chain industry pointed out that “70 percent of the information needed to build performance support systems already exists. The problem is it is all over the place and is available on different systems.”
It is the learning and development professional’s job to design a solution with the capability of the technology and present it in a way that most benefits the end users.
All participants revealed that mobile AR adoption in L&D is still new, but growing rapidly. L&D professionals face many opportunities and challenges. Understanding the benefits, challenges and opportunities of mobile AR used in the workplace is imperative.
A brief insert from DSC: Augmented Reality (AR) is about to hit the mainstream in the next 1-3 years. It will connect the physical world with the digital world in powerful, helpful ways (and likely in negative ways as well). I think it will be far bigger and more commonly used than Virtual Reality (VR). (By the way, I’m also including Mixed Reality (MR) within the greater AR domain.) With Artificial Intelligence (AI) making strides in object recognition, AR could be huge.
Learning & Development groups should ask for funding soon — or develop proposals for future funding as the new hardware and software products mature — in order to upskill at least some members of their groups in the near future.
As within Teaching & Learning Centers within higher education, L&D groups need to practice what they preach — and be sure to train their own people as well.
Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?
And what does it mean for professors if the answer to those questions is “yes”?
…
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.
“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
1/Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.
And on the legal side of things:
In the legal education context, I’ve been playing around with generating fact patterns and short documents to use in exercises.
This month’s news has been overshadowed by the implosion of SBF’s FTX and the possible implosion of Elon Musk’s Twitter. All the noise doesn’t mean that important things aren’t happening. Many companies, organizations, and individuals are wrestling with the copyright implications of generative AI. Google is playing a long game: they believe that the goal isn’t to imitate art works, but to build better user interfaces for humans to collaborate with AI so they can create something new. Facebook’s AI for playing Diplomacy is an exciting new development. Diplomacy requires players to negotiate with other players, assess their mental state, and decide whether or not to honor their commitments. None of these are easy tasks for an AI. And IBM now has a 433-qubit quantum chip – an important step toward making a useful quantum processor.
From DSC: I was watching a sermon the other day, and I’m always amazed when the pastor doesn’t need to read their notes (or hardly ever refers to them). And they can still do this in a much longer sermon too. Not me, man.
It got me wondering about the idea of having a teleprompter on our future Augmented Reality (AR) glasses and/or on our Virtual Reality (VR) headsets. Or perhaps such functionality will be provided on our mobile devices as well (i.e., our smartphones, tablets, laptops, other) via cloud-based applications.
One could see one’s presentation, sermon, main points for the meeting, what charges are being brought against the defendant, etc. and the system would know to scroll down as you said the words (via Natural Language Processing (NLP)). If you went off script, the system would stop scrolling and you might need to scroll down manually or just begin where you left off.
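The scrolling logic itself is simple enough to sketch. Below is my own illustration of the idea, assuming a speech recognizer already yields a stream of words; `TeleprompterTracker` is a hypothetical name, not an existing product. The cursor advances as spoken words match the script, and stays put when the speaker goes off script.

```python
def normalize(word: str) -> str:
    """Lowercase a word and strip punctuation for fuzzy matching."""
    return "".join(ch for ch in word.lower() if ch.isalnum())

class TeleprompterTracker:
    def __init__(self, script: str, window: int = 8):
        self.script_words = [normalize(w) for w in script.split()]
        self.cursor = 0          # index of the next script word to match
        self.window = window     # how far ahead to search (tolerates small skips)

    def hear(self, spoken: str) -> int:
        """Feed recognized speech; return the updated cursor position."""
        for word in spoken.split():
            w = normalize(word)
            # Look for the word within a small window ahead of the cursor.
            ahead = self.script_words[self.cursor:self.cursor + self.window]
            if w in ahead:
                self.cursor += ahead.index(w) + 1
            # Otherwise the speaker is off script -- do not scroll.
        return self.cursor
```

A real system would map the cursor position to a scroll offset in the display and would need to handle recognition errors more gracefully, but the core "follow the speaker" behavior is just this kind of windowed matching.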
For that matter, I suppose a faculty member could turn on and off a feed for an AI-based stream of content on where a topic is in the textbook. Or a CEO or University President could get prompted to refer to a particular section of the Strategic Plan. Hmmm…I don’t know…it might be too much cognitive load/overload…I’d have to try it out.
And/or perhaps this is a feature in our future videoconferencing applications.
But I just wanted to throw these ideas out there in case someone wanted to run with one or more of them.
Along these lines, see:
Tell me you’re not going to want a pair of #AR glasses?
One of the best decisions I’ve ever made as a prof is to start building my classes to start with 1 week of onboarding followed by just 12 weeks of content. Last 2 weeks are just catchup and reassessment. Course is basically over at Thanksgiving.
How to Communicate with Brevity — from qaspire.com by Tanmay Vora; with thanks to Roberto Ferraro for this resource
We live in a world of information overload. In such a world, communicating with brevity is a gift to others.
The Job: Online Certifications #85 — from getrevue.co by Paul Fain (Dec. 1, 2022)
Growing interest in online training for medical certifications and a private university that’s offering credit for MedCerts and other microcredentials.
Excerpt:
‘Train and Hire’ in Healthcare
The nation’s healthcare system continues to strain amid a severe staffing crisis. And the mounting desperation is prodding some employers to get more creative about how they hire, train, and retain healthcare workers.
MedCerts has seen growing demand for its online certification training, with strong interest in the 28-week medical assistant and 12-week phlebotomy technician programs.
The company has enrolled 55K students and roughly doubled its offerings during the last two years. Its fastest-growing segment is the train-and-hire model, where employers cover the full tuition and training costs for students.
“We are now helping several hundred people every month move from education to high-demand careers, and our pace and scale are still growing,” says Rafael Castaneda, MedCerts’ vice president of workforce development.
Stride Inc., a large online K-12 education provider, acquired MedCerts in 2020 for roughly $80M. The company’s 50+ self-paced career training programs in healthcare, IT, and professional development typically cost $4K in tuition and other fees. Most can be completed in six months, and the company offers on-demand support to all students for a year regardless of their program’s length.
Implementing UDL with a Focus on Accessibility
UDL is a proven methodology that benefits all students, but when instructors embrace universal design, they need to consider how their decisions will affect students with disabilities.
Some key considerations to keep in mind:
Instructional materials should not require a certain type of sensory perception.
A presentation that includes images should have accurate alternative text (also called alt text) for those images.
Transcripts and captions should be provided for all audio content.
Color alone should not be used to convey information, since some students may not perceive color (or have different cultural understandings of colors).
Student presentations should also follow accessibility guidelines. This increases the student’s workload, but it’s an excellent opportunity to teach the importance of accessibility.
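For instructors who want to spot-check some of these guidelines automatically, here is a minimal sketch using only Python’s standard library. `AltTextChecker` and `find_images_missing_alt` are my own hypothetical names, and a real audit would also accept intentionally empty alt text on purely decorative images.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks alt text."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with missing or empty alt

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = (attr_map.get("alt") or "").strip()
            if not alt:  # note: decorative images may legitimately use alt=""
                self.missing.append(attr_map.get("src", "(no src)"))

def find_images_missing_alt(html: str) -> list:
    """Return the src of each image in the HTML that has no usable alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing
```

Running this over course pages before a term starts would flag images that need alternative text, though a human still has to judge whether the alt text that is present actually describes the image.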
OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.
The Minecraft AI learned to perform complicated sequences of keyboard and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.
The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained to perform tasks by watching humans do them.
…
The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically.
“Most language learning software can help with the beginning part of learning basic vocabulary and grammar, but gaining any degree of fluency requires speaking out loud in an interactive environment,” Zwick told TechCrunch in an email interview. “To date, the only way people can get that sort of practice is through human tutors, which can also be expensive, difficult and intimidating.”
Speak’s solution is a collection of interactive speaking experiences that allow learners to practice conversing in English. Through the platform, users can hold open-ended conversations with an “AI tutor” on a range of topics while receiving feedback on their pronunciation, grammar and vocabulary.
It’s one of the top education apps in Korea on the iOS App Store, with over 15 million lessons started annually, 100,000 active subscribers and “double-digit million” annual recurring revenue.
If you last checked in on AI image makers a month ago & thought “that is a fun toy, but is far from useful…” Well, in just the last week or so two of the major AI systems updated.
You can now generate a solid image in one try. For example, “otter on a plane using wifi” 1st try: pic.twitter.com/DhiYeVMEEV
So, is this a cool development that will become a fun tool for many of us to play around with in the future? Sure. Will people use this in their work? Possibly. Will it disrupt artists across the board? Unlikely. There might be a few places where really generic artwork is the norm and the people that were paid very little to crank them out will be paid very little to input prompts. Look, Photoshop and asset libraries made creating company logos very, very easy a long time ago. But people still don’t want to take the 30 minutes it takes to put one together, because thinking through all the options is not their thing. You still have to think through those options to enter an AI prompt. And people just want to leave that part to the artists. The same thing was true about the printing press. Hundreds of years of innovation have taught us that the hard part of the creation of art is the human coming up with the ideas, not the tools that create the art.
A quick comment from DSC: Possibly, at least in some cases. But I’ve seen enough home-grown, poorly-designed graphics and logos to make me wonder if that will be the case.
How to Teach With Deep Fake Technology — from techlearning.com by Erik Ofgang
Despite the scary headlines, deep fake technology can be a powerful teaching tool
Excerpt:
The very concept of teaching with deep fake technology may be unsettling to some. After all, deep fake technology, which utilizes AI and machine learning and can alter videos and animate photographs in a manner that appears realistic, has frequently been covered in a negative light. The technology can be used to violate privacy and create fake videos of real people.
However, while these potential abuses of the technology are real and concerning, that doesn’t mean we should turn a blind eye to the technology’s potential when it is used responsibly, says Jaime Donally, a well-known immersive learning expert.
From DSC: I’m still not sure about this one…but I’ll try to be open to the possibilities here.
Recently, we spoke with three more participants of the AI Explorations program to learn about its ongoing impact in K-12 classrooms. Here, they share how the program is helping their districts implement AI curriculum with an eye toward equity in the classroom.
A hitherto stealth legal AI startup emerged from the shadows today with news via TechCrunch that it has raised $5 million in funding led by the startup fund of OpenAI, the company that developed advanced neural network AI systems such as GPT-3 and DALL-E 2.
The startup, called Harvey, will build on the GPT-3 technology to enable lawyers to create legal documents or perform legal research by providing simple instructions using natural language.
The company was founded by Winston Weinberg, formerly an associate at law firm O’Melveny & Myers, and Gabriel Pereyra, formerly a research scientist at DeepMind and most recently a machine learning engineer at Meta AI.
A class-action lawsuit filed in a federal court in California this month takes aim at GitHub Copilot, a powerful tool that automatically writes working code when a programmer starts typing. The coder behind the suit argues that GitHub is infringing copyright because it does not provide attribution when Copilot reproduces open-source code covered by a license requiring it.
…
Programmers have, of course, always studied, learned from, and copied each other’s code. But not everyone is sure it is fair for AI to do the same, especially if AI can then churn out tons of valuable code itself, without respecting the source material’s license requirements. “As a technologist, I’m a huge fan of AI,” Butterick says. “I’m looking forward to all the possibilities of these tools. But they have to be fair to everybody.”
Whatever the outcome of the Copilot case, Villa says it could shape the destiny of other areas of generative AI. If the outcome of the Copilot case hinges on how similar AI-generated code is to its training material, there could be implications for systems that reproduce images or music that matches the style of material in their training data.
Also related to AI and art/creativity from Wired.com, see:
Picture Limitless Creativity at Your Fingertips — by Kevin Kelly
Artificial intelligence can now make better art than most humans. Soon, these engines of wow will transform how we design just about everything.
Who Will Own the Art of the Future? — by Jessica Rizzo
OpenAI has announced that it’s granting Dall-E users the right to commercialize their art. For now.