— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
On giving AI eyes and ears — from oneusefulthing.org by Ethan Mollick
AI can listen and see, with bigger implications than we might realize.
Excerpt:
But even this is just the beginning, and new modes of using AI are appearing, which further increases their capabilities. I want to show you some examples of this emerging world, which I think will soon introduce a new wave of AI use cases, and accompanying disruption.
We need to recognize that these capabilities will continue to grow, and AI will be able to play a more active role in the real world by observing and listening. The implications are likely to be profound, and we should start thinking through both the huge benefits and major concerns today.
Even though generative AI is a new thing, it doesn’t change why students cheat. They’ve always cheated for the same reason: They don’t find the work meaningful, and they don’t think they can achieve it to their satisfaction. So we need to design assessments that students find meaning in.
Tricia Bertram Gallant
Caught off guard by AI — from chronicle.com by Beth McMurtrie and Beckie Supiano
Professors scrambled to react to ChatGPT this spring — and started planning for the fall
Excerpt:
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
…OpenAI built tool use right into the GPT API with an update called function calling. It’s a little like a child’s ability to ask their parents to help them with a task that they know they can’t do on their own. Except in this case, instead of parents, GPT can call out to external code, databases, or other APIs when it needs to.
Each function in function calling represents a tool that a GPT model can use when necessary, and GPT gets to decide which ones it wants to use and when. This instantly upgrades GPT capabilities—not because it can now do every task perfectly—but because it now knows how to ask for what it wants and get it.
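To make the mechanics concrete, here is a minimal sketch of a function-calling round trip using OpenAI's Python library as it worked in mid-2023; the get_current_weather tool and its schema are illustrative placeholders, not something taken from the excerpt above:

```python
import json
import openai  # assumes the openai Python package (0.27+) with OPENAI_API_KEY set

# An illustrative "tool": something GPT cannot do on its own.
def get_current_weather(city: str) -> str:
    # A real app would call a weather API here; this stub just returns fixed data.
    return json.dumps({"city": city, "forecast": "sunny", "temp_f": 72})

# Describe the tool so the model knows when, and how, to ask for it.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in Grand Rapids?"}]

# First call: the model decides whether it needs the tool.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model "asked its parents for help": run the function with the arguments it chose.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)

    # Second call: hand the result back so the model can answer in plain language.
    messages += [message, {"role": "function", "name": "get_current_weather", "content": result}]
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```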
How ChatGPT can help disrupt assessment overload — from timeshighereducation.com by David Carless
Advances in AI are not necessarily the enemy – in fact, they should prompt long overdue consideration of assessment types and frequency, says David Carless
Excerpt:
Reducing the assessment burden could support trust in students as individuals wanting to produce worthwhile, original work. Indeed, students can be co-opted as partners in designing their own assessment tasks, so they can produce something meaningful to them.
A strategic reduction in quantity of assessment would also facilitate a refocusing of assessment priorities on deep understanding more than just performance and carries potential to enhance feedback processes.
If we were to tackle assessment overload in these ways, it opens up various possibilities. Most significantly there is potential to revitalise feedback so that it becomes a core part of a learning cycle rather than an adjunct at its end. End-of-semester, product-oriented feedback, which comes after grades have already been awarded, fails to encourage the iterative loops and spirals typical of productive learning.
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
These claims conjure up the rosiest of images: human resource departments and their robot buddies solving discrimination in workplace hiring. It seems plausible, in theory, that AI could root out unconscious bias, but a growing body of research shows the opposite may be more likely.
…
Companies’ use of AI didn’t come out of nowhere: For example, automated applicant tracking systems have been used in hiring for decades. That means if you’ve applied for a job, your resume and cover letter were likely scanned by an automated system. You probably heard from a chatbot at some point in the process. Your interview might have been automatically scheduled and later even assessed by AI.
From DSC:
Here was my reflection on this:
DC: Along these lines, I wonder if Applicant Tracking Systems cause us to become like typecast actors and actresses — only thought of for certain roles. Pigeonholed.
— Daniel Christian (he/him/his) (@dchristian5) June 23, 2023
In June, ResumeBuilder.com surveyed more than 1,000 employees who are involved in hiring processes at their workplaces to find out about their companies’ use of AI interviews.
The results:
43% of companies already have or plan to adopt AI interviews by 2024
Two-thirds of this group believe AI interviews will increase hiring efficiency
15% say that AI will be used to make decisions on candidates without any human input
More than half believe AI will eventually replace human hiring managers
Watch OpenAI CEO Sam Altman on the Future of AI — from bloomberg.com
Sam Altman, CEO & Co-Founder, OpenAI discusses the explosive rise of OpenAI and its products and what an AI-laced future can look like with Bloomberg’s Emily Chang at the Bloomberg Technology Summit.
The implementation of generative AI within these products will dramatically improve educators’ ability to deliver personalized learning to students at scale by enabling the application of personalized assessments and learning pathways based on individual student needs and learning goals. K-12 educators will also benefit from access to OpenAI technology…
Let’s look at some ideas of how law schools could use AI tools like Khanmigo or ChatGPT to support lectures, assignments, and discussions, or use plagiarism detection software to maintain academic integrity.
In particular, we’re betting on four trends for AI and L&D.
Rapid content production
Personalized content
Detailed, continuous feedback
Learner-driven exploration
In a world where only 7 percent of the global population has a college degree, and as many as three quarters of workers don’t feel equipped to learn the digital skills their employers will need in the future, this is the conversation people need to have.
…
Taken together, these trends will change the cost structure of education and give learning practitioners new superpowers. Learners of all backgrounds will be able to access quality content on any topic and receive the ongoing support they need to master new skills. Even small L&D teams will be able to create programs that have both deep and broad impact across their organizations.
Generative AI is set to play a pivotal role in the transformation of educational technologies and assisted learning. Its ability to personalize learning experiences, power intelligent tutoring systems, generate engaging content, facilitate collaboration, and assist in assessment and grading will significantly benefit both students and educators.
With today’s advancements in generative AI, that vision of personalized learning may not be far off from reality. We spoke with Dr. Kim Round, associate dean of the Western Governors University School of Education, about the potential of technologies like ChatGPT for learning, the need for AI literacy skills, why learning experience designers have a leg up on AI prompt engineering, and more. And get ready for more Star Trek references, because the parallels between AI and Sci Fi are futile to resist.
NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.
Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.
The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.
Also relevant to the item from Nvidia (above), see:
This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which ultimately may not look much different from that of their AI-assisted colleagues.
“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”
Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.
WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.
The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.
Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
***
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
What Is Agent Assist? — from blogs.nvidia.com
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across retail, telecom and other industries conduct conversations with customers.
Excerpt:
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.
It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
From DSC: Is this type of thing going to provide a learning assistant/agent as well?
AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.
One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.
In the most recent update, Adobe is now using AI to Denoise, Enhance, and create Super Resolution, or 2x the file size of the original photo. Click here to read Adobe’s post, and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.
Microsoft has launched a GPT-4 enhanced Edge browser.
By integrating OpenAI’s GPT-4 technology with Microsoft Edge, you can now use ChatGPT as a copilot in your Bing browser. This delivers superior search results, generates content, and can even transform your copywriting skills (read on to find out how).
Benefits mentioned include: Better Search, Complete Answers, and Creative Spark.
The new interactive chat feature means you can get the complete answer you are looking for by refining your search by asking for more details, clarity, and ideas.
From DSC: I have to say that since the late ’90s, I haven’t been a big fan of web browsers from Microsoft. (I don’t like how Microsoft unfairly buried Netscape Navigator and the folks who had out-innovated them during that time.) As such, I don’t use Edge, so I can’t fully comment on the above article.
But I do have to say that this is the type of thing that may make me reevaluate my stance regarding Microsoft’s browsers. Integrating GPT-4 into their search/chat functionalities seems like it would be a very solid, strategic move — at least as of late April 2023.
Speaking of new items coming from Microsoft, also see:
[On 4/27/23], Microsoft Designer, Microsoft’s AI-powered design tool, launched in public preview with an expanded set of features.
Announced in October, Designer is a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics and more to share on social media and other channels. It leverages user-created content and DALL-E 2, OpenAI’s text-to-image AI, to ideate designs, with drop-downs and text boxes for further customization and personalization.
…
Designer will remain free during the preview period, Microsoft says — it’s available via the Designer website and in Microsoft’s Edge browser through the sidebar. Once the Designer app is generally available, it’ll be included in Microsoft 365 Personal and Family subscriptions and have “some” functionality free to use for non-subscribers, though Microsoft didn’t elaborate.
Canva’s New AI Wonder Tools — from wondertools.substack.com by Jeremy Caplan
A magic eraser, a branding kit, AI presentations, and more new features
Excerpt:
Canva launched a bunch of new features at a live event viewed by 1.5 million people globally. The Australian company is no longer an upstart. 125 million people use it monthly, including 13 million paid subscribers yielding $1.4 billion in revenue. Canva’s increasingly competing with Adobe to help people create eye-catching visuals. Here are its most useful new tricks.
As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”
Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.
Excerpt (emphasis DSC):
We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.
New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.
This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.
Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.
We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality: https://t.co/ySek2oevod pic.twitter.com/S61MTpddOV
LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”
Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.
“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
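To illustrate the "auto-complete on steroids" idea, here is a toy sketch (my own illustration, not something from Marcus or the article) that predicts the next word purely from counts over a tiny corpus. Real LLMs use neural networks trained on vastly more text, but the basic move of picking a plausible next word from context is the same in spirit:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat . the cat ate the fish .".split()
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def most_plausible_next(word: str) -> str:
    # Return the word that most often followed `word` in the training text.
    return next_words[word].most_common(1)[0][0]

print(most_plausible_next("the"))  # -> "cat": chosen purely from statistics, with no understanding
```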
10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.
Meet Adobe Firefly. — from adobe.com
Experiment, imagine, and make an infinite range of creations with Firefly, a family of creative generative AI models coming to Adobe products.
Generative AI made for creators. With the beta version of the first Firefly model, you can use everyday language to generate extraordinary new content. Looking forward, Firefly has the potential to do much, much more.
No lights. No camera. All action. Realistically and consistently synthesize new videos. Either by applying the composition and style of an image or text prompt to the structure of a source video (Video to Video). Or, using nothing but words (Text to Video). It’s like filming something new, without filming anything at all.
What Can A.I. Art Teach Us About the Real Thing? — from newyorker.com by Adam Gopnik; with thanks to Mrs. Julie Bender for this resource
The range and ease of pictorial invention offered by A.I. image generation are startling.
Excerpts:
The DALL-E 2 system, by setting images free from neat, argumentative intentions, reducing them to responses to “prompts,” reminds us that pictures exist in a different world of meaning from prose.
…
And the power of images lies less in their arguments than in their ambiguities. That’s why the images that DALL-E 2 makes are far more interesting than the texts that A.I. chatbots make. To be persuasive, a text demands a point; in contrast, looking at pictures, we can be fascinated by atmospheres and uncertainties.
…
One of the things that thinking machines have traditionally done is sharpen our thoughts about our own thinking.
And, so, “A Havanese at six pm on an East Coast beach in the style of a Winslow Homer watercolor”:
Art work by DALL-E 2 / Courtesy OpenAI
It is, as simple appreciation used to say, almost like being there, almost like her being there. Our means in art are mixed, but our motives are nearly always memorial. We want to keep time from passing and our loves alive. The mechanical collision of kinds first startles our eyes and then softens our hearts. It’s the secret system of art.
To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.
Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.
Excerpt:
ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.
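For developers, a transcription call is only a few lines. Here is a minimal sketch using OpenAI's Python library as it existed at launch; the file name lecture.mp3 is just an illustrative placeholder:

```python
import openai  # assumes the openai Python package (0.27+) with OPENAI_API_KEY set

# Transcribe a local audio file (supported formats include M4A, MP3, MP4, MPEG, MPGA, WAV, WEBM).
with open("lecture.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe(model="whisper-1", file=audio_file)

print(transcript["text"])

# Translation from other languages into English works the same way via openai.Audio.translate.
```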
Love this! Assigning collaborative annotation can help students engage with texts and the writing process and help prevent ChatGPT misuse. We can also invite students to critically annotate ChatGPT outputs. #ChatGPTedu #AItextedu #AIED #socialannotation https://t.co/wh2OkrAvL7
— Anna Mills, amills@mastodon.oeru.org, she/her (@EnglishOER) February 28, 2023
Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.
There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.
So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.
Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.
“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
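As a rough back-of-the-envelope check on those figures (my own arithmetic, not Nvidia's), compounding over a decade implies roughly these year-over-year factors:

```python
# 100x in 10 years vs. 1,000,000x in 10 years, expressed as annual multipliers.
moores_law_per_year = 100 ** (1 / 10)          # ~1.58x per year
nvidia_claim_per_year = 1_000_000 ** (1 / 10)  # ~3.98x per year
print(round(moores_law_per_year, 2), round(nvidia_claim_per_year, 2))  # 1.58 3.98
```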
From DSC: NVIDIA is the inventor of the Graphics Processing Unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, notebooks, PCs, and more. They are a dominant supplier of artificial intelligence hardware and software.
SAN DIEGO (AP) — California voters on Tuesday approved a ballot measure backed by a celebrity lineup that included Barbra Streisand and Los Angeles-born rappers will.i.am and Dr. Dre that could pump as much as $1 billion a year from the state’s general fund into arts education.
Supporters said it would benefit public school programs that go beyond the traditional art, theater, dance and music classes to include graphic design, computer coding, animation, music composition and script writing.
Also from Goldie Blumenstyk:
@CeciliaOrphan and I have a new article out in @ChangeHigher on expanding our understanding of "value" in higher education to better capture the full contributions of regional public universities as broadly accessible anchor institutions. https://t.co/iY5eIP5uzb
Over the past few years, creative expression has become a widespread desire. From new social media platforms to our efforts to stand out in school, or at work through our ideas, creativity has become a vital skill for everyone. Our teams at Adobe are on a mission to outfit everyone to create.
The latest version of Adobe Creative Cloud, released [on 10/18/22] at Adobe MAX, includes innovations that support creativity for all.