The Verge | What’s Next With AI | February 2024 | Consumer Survey
DC: Just because we can doesn’t mean we should.
It brings to mind #AI and #robotics and the #military — hmmmm…. https://t.co/1J4XKiHRUl
— Daniel Christian (he/him/his) (@dchristian5) April 25, 2024
Microsoft AI creates talking deepfakes from single photo — from inavateonthenet.net
The Great Hall – where now with AI? It is not ‘Human Connection V Innovative Technology’ but ‘Human Connection + Innovative Technology’ — from donaldclarkplanb.blogspot.com by Donald Clark
The theme of the day was Human Connection V Innovative Technology. I see this a lot at conferences, setting up the human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been, a dialectic: human connection (social) PLUS the machine. Everyone has a smartphone, and most use it for work, comms, and social media. The binary between human and tech has long disappeared.
Techno-Social Engineering: Why the Future May Not Be Human, TikTok’s Powerful ForYou Algorithm, & More — by Misha Da Vinci
Things to consider as you dive into this edition:
- As we increasingly depend on technology, how is it changing us?
- In the interaction between humans and technology, who is adapting to whom?
- Is the technology being built for humans, or are we being changed to fit into tech systems?
- As time passes, will we become more like robots or the AI models we use?
- Over the next 30 years, as we increasingly interact with technology, who or what will we become?
It’s been an insane week for AI (part 2)
Here are the 14 most impressive reveals from this week:
1/ China just released OpenAI’s Sora rival “Vidu” which can create realistic clips in seconds. pic.twitter.com/MnTv9Wxpef
— Barsee ? (@heyBarsee) April 27, 2024
Description:
I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.
From DSC:
This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.
Smart(er) Glasses: Introducing New Ray-Ban | Meta Styles + Expanding Access to Meta AI with Vision — from meta.com
- Share Your View on a Video Call
- Meta AI Makes Your Smart Glasses Smarter
- All In On AI-Powered Hardware
New Ray-Ban | Meta Smart Glasses Styles and Meta AI Updates — from about.fb.com
Takeaways
- We’re expanding the Ray-Ban Meta smart glasses collection with new styles.
- We’re adding video calling with WhatsApp and Messenger to share your view on a video call.
- We’re rolling out Meta AI with Vision, so you can ask your glasses about what you’re seeing and get helpful information — completely hands-free.
Instructors as Innovators: a Future-focused Approach to New AI Learning Opportunities, With Prompts — from papers.ssrn.com by Ethan R. Mollick and Lilach Mollick
Abstract
This paper explores how instructors can leverage generative AI to create personalized learning experiences for students that transform teaching and learning. We present a range of AI-based exercises that enable novel forms of practice and application including simulations, mentoring, coaching, and co-creation. For each type of exercise, we provide prompts that instructors can customize, along with guidance on classroom implementation, assessment, and risks to consider. We also provide blueprints, prompts that help instructors create their own original prompts. Instructors can leverage their content and pedagogical expertise to design these experiences, putting them in the role of builders and innovators. We argue that this instructor-driven approach has the potential to democratize the development of educational technology by enabling individual instructors to create AI exercises and tools tailored to their students’ needs. While the exercises in this paper are a starting point, not definitive solutions, they demonstrate AI’s potential to expand what is possible in teaching and learning.
Are we ready to navigate the complex ethics of advanced AI assistants? — from futureofbeinghuman.com by Andrew Maynard
An important new paper lays out the importance and complexities of ensuring increasingly advanced AI-based assistants are developed and used responsibly
Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.
It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.
The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities that include Edinburgh University, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.
Key questions for the ethical and societal analysis of advanced AI assistants include:
- What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
- What capabilities would an advanced AI assistant have? How capable could these assistants be?
- What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
- Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
- What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
- What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
- What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
- How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
- Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
- …
The AI Tools in Education Database — from aitoolsdirectory.notion.site; via George Siemens
Since AI in education has been moving at the speed of light, we built this AI Tools in Education database to keep track of the most recent AI tools in education and the changes that are happening every day. This database is intended to be a community resource for educators, researchers, students, and other edtech specialists looking to stay up to date. This is a living document, so be sure to come back for regular updates.
Another Workshop for Faculty and Staff — from aiedusimplified.substack.com by Lance Eaton
A recent workshop with some adjustments.
The day started out with a short talk about AI (slides). Some of it is my usual schtick where I do a bit of Q&A with folks around myths and misunderstandings of generative AI in order to establish some common ground. These are often useful both in setting the tone and giving folks a sense of how I come to explore generative AI: with a mixture of humor, concern, curiosity, and of course, cat pics.
From there, we launched into a series of mini-workshops where folks had time to first play around with some previously created prompts around teaching and learning before moving onto prompts for administrative work. The prompts and other support materials are in this Workshop Resource Document. The goal was to just get them into using one or more AI tools with some useful prompts so they can learn more about its capabilities.
The Edtech Insiders Rundown of ASU+GSV 2024 — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
And more on Edtech Insiders+, upcoming events, Gauth, AI Reading Tutors, The Artificial Intelligence Interdisciplinary Institute, and TeachAI Policy Resources
Alex Sarlin
4. Everyone is Edtech Now
This year, in addition to investors, entrepreneurs, educators, school leaders, university admins, non-profits, publishers, and operators from countless edtech startups and incumbents, there were some serious big tech companies in attendance, like Meta, Google, OpenAI, Microsoft, Amazon, TikTok, and Canva, along with a horde of management consultancies, workforce organizations, mental health orgs, and filmmakers.
Edtech continues to expand as an industry category and everyone is getting involved.
Ep 18 | Rethinking Education, Lessons to Unlearn, Become a Generalist, & More — Ana Lorena Fábrega — from mishadavinci.substack.com by Misha da Vinci
It was such a delight to chat with Ana. She’s brilliant and passionate, a talented educator, and an advocate for better ways of learning for children and adults. We cover ways to transform schools so that students get real-world skills, learn resilience and how to embrace challenges, and are prepared for an unpredictable future. And we go hard on why we must keep learning no matter our age, become generalists, and leverage technology in order to adapt to the fast-changing world.
Misha also featured an item re: the future of schooling and it contained this graphic:
Texas is replacing thousands of human exam graders with AI — from theverge.com by Jess Weatherbed
The Texas Tribune reports an “automated scoring engine” that utilizes natural language processing — the technology that enables chatbots like OpenAI’s ChatGPT to understand and communicate with users — is being rolled out by the Texas Education Agency (TEA) to grade open-ended questions on the State of Texas Assessments of Academic Readiness (STAAR) exams. The agency is expecting the system to save $15–20 million per year by reducing the need for temporary human scorers, with plans to hire under 2,000 graders this year compared to the 6,000 required in 2023.
Debating About AI: An Easy Path to AI Awareness and Basic Literacy — from stefanbauschard.substack.com by Stefan Bauschard
If you are an organization committed to AI literacy, consider sponsoring some debate topics and/or debates next year and expose thousands of students to AI literacy.
Resolved: Teachers should integrate generative AI in their teaching and learning.
The topic is simple but raises an issue that students can connect with.
While helping my students prepare and judging debates, I saw students demonstrate an understanding of many key issues and controversies.
These included—
*AI writing assessment/grading
*Bias
*Bullying
*Cognitive load
*Costs of AI systems
*Declining test scores
*Deep fakes
*Differentiation
*Energy consumption
*Hallucinations
*Human-to-human connection
*Inequality and inequity in access
*Neurodiversity
*Personalized learning
*Privacy
*Regulation (lack thereof)
*The future of work and unemployment
*Saving teachers time
*Soft skills
*Standardized testing
*Student engagement
*Teacher awareness and AI training; training resource trade-offs
*Teacher crowd-out
*Transparency and explainability
*Writing detectors (students had an exaggerated sense of the workability of these tools).
AI Cheatsheet Collection — from enchanting-trader-463.notion.site; via George Siemens
Here are the 30 best AI Cheat Sheets/Guides we collected from the internet
From The Rundown AI
The Rundown: Adobe just announced a new upgrade to its Firefly image generation model, bringing improvements in image quality, stylization capabilities, speed, and details – along with new AI integrations.
The details:
- Firefly Image 3 promises new photorealistic quality, improved text rendering, better prompt understanding, and enhanced illustration capabilities.
- New Structure and Style Reference tools allow users more precise control over generations.
- Photoshop updates include an improved Generative Fill, Generate Image, Generate Similar, Generate Background, and Enhance Detail.
- Adobe emphasized training the model on licensed content, with Firefly images automatically getting an AI metadata tag.
Why it matters…
Beyond the Hype: Taking a 50 Year Lens to the Impact of AI on Learning — from nafez.substack.com by Nafez Dakkak and Chris Dede
How do we make sure LLMs are not “digital duct tape”?
[Per Chris Dede] We often think of the product of teaching as the outcome (e.g. an essay, a drawing, etc.). The essence of education, in my view, lies not in the products or outcomes of learning but in the journey itself. The artifact is just a symbol that you’ve taken the journey.
The process of learning — the exploration, challenges, and personal growth that occur along the way — is where the real value lies. For instance, the act of writing an essay is valuable not merely for the final product but for the intellectual journey it represents. It forces you to improve and organize your thinking on a subject.
This distinction becomes important with the rise of generative AI, because it uniquely allows us to produce these artifacts without taking the journey.
As I’ve argued previously, I am worried that all this hype around LLMs renders them a “type of digital duct-tape to hold together an obsolete industrial-era educational system”.
Speaking of AI in our learning ecosystems, also see:
On Building an AI Policy for Teaching & Learning — from aiedusimplified.substack.com by Lance Eaton
How students drove the development of a policy for students and faculty
Well, last month, the policy was finally approved by our Faculty Curriculum Committee and we can now share the final version: AI Usage Policy. College Unbound also created (all-human, no AI used) a press release with the policy and some of the details.
To ensure you see this:
- Usage Guidelines for AI Generative Tools at College Unbound
These guidelines were created and reviewed by College Unbound students in Spring 2023 with the support of Lance Eaton, Director of Faculty Development & Innovation. The students include S. Fast, K. Linder-Bey, Veronica Machado, Erica Maddox, Suleima L., Lora Roy.
ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds — from psypost.org by Eric W. Dolan
A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work. The study, published in the Canadian Psychological Association’s Mind Pad, found that “false citation rates” across various psychology subfields ranged from 6% to 60%. Surprisingly, these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.
…
MacDonald found that a total of 32.3% of the 300 citations generated by ChatGPT were hallucinated. Despite being fabricated, these hallucinated citations were constructed with elements that appeared legitimate — such as real authors who are recognized in their respective fields, properly formatted DOIs, and references to legitimate peer-reviewed journals.
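One practical takeaway from the study is that a well-formatted DOI is no guarantee of a real paper. A minimal sketch of how a reader might screen an AI-generated reference list: first extract DOI-like strings with a regular expression, then resolve each one against a registry such as Crossref’s REST API (`https://api.crossref.org/works/<doi>`) to see whether it points to an actual record. Only the offline extraction step is shown below; the citation text and DOI are invented for illustration, and this sketch is not part of the study’s method.

```python
import re

# DOIs start with "10.", a 4-9 digit registrant code, "/", and a suffix.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+')

def extract_dois(text: str) -> list[str]:
    """Return DOI-like strings found in a block of citation text."""
    return DOI_PATTERN.findall(text)

# A fabricated citation of the kind the study describes: plausible names,
# a real-looking journal, and a properly formatted (but made-up) DOI.
citation = ("Smith, J. (2021). Memory and cognition revisited. "
            "Journal of Imaginary Psychology, 12(3), 45-67. "
            "https://doi.org/10.1234/jip.2021.045")

print(extract_dois(citation))  # syntactically valid — real or not
```

Passing the extracted string to a resolver is the step that actually catches a hallucination: a fabricated DOI will simply return a "not found" response, which is exactly the check the formatting alone cannot provide.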
Forbes 2024 AI 50 List: Top Artificial Intelligence Startups — from forbes.com by Kenrick Cai
The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged for their business promise and technical usage of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.
Adobe Previews Breakthrough AI Innovations to Advance Professional Video Workflows Within Adobe Premiere Pro — from news.adobe.com
- New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene
- Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate
- Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day
- AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive
Also relevant see:
- Adobe Announces New AI Tools For Premiere Pro — from forbes.com by Mark Sparrow
- Adobe adds more AI and Udio upends AI music — from heatherbcooper.substack.com by Heather Cooper
Is Sora coming to Premiere Pro?
The pace of AI and Robotics has been incredible.
So, I share the most important research and developments every week.
Here’s everything that happened and how to make sense out of it:
— Brett Adcock (@adcock_brett) April 14, 2024
AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com
Kent State University’s AI Resources and Teaching page offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to essentially a dead end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC:
This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
Addressing equity and ethics in artificial intelligence — from apa.org by Zara Abrams
Algorithms and humans both contribute to bias in AI, but AI may also hold the power to correct or reverse inequities among humans
“The conversation about AI bias is broadening,” said psychologist Tara Behrend, PhD, a professor at Michigan State University’s School of Human Resources and Labor Relations who studies human-technology interaction and spoke at CES about AI and privacy. “Agencies and various academic stakeholders are really taking the role of psychology seriously.”
NY State Bar Association Joins Florida and California on AI Ethics Guidance – Suggests Some Surprising Implications — from natlawreview.com by James G. Gatto
The NY State Bar Association (NYSBA) Task Force on Artificial Intelligence has issued a nearly 80-page report (Report) and recommendations on the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession. This detailed Report also reviews AI-based software, generative AI technology and other machine learning tools that may enhance the profession, but which also pose risks for individual attorneys’ understanding of new, unfamiliar technology, as well as courts’ concerns about the integrity of the judicial process. It also makes recommendations for NYSBA adoption, including proposed guidelines for responsible AI use. This Report is perhaps the most comprehensive report to date by a state bar association. It is likely this Report will stimulate much discussion.
For those of you who want the “Cliff Notes” version of this report, here is a table that summarizes by topic the various rules mentioned and a concise summary of the associated guidance.
The Report includes four primary recommendations:
AI for the physical world — from superhuman.ai by Zain Kahn
Excerpt: (emphasis DSC)
A new company called Archetype is trying to tackle that problem: It wants to make AI useful for more than just interacting with and understanding the digital realm. The startup just unveiled Newton — “the first foundation model that understands the physical world.”
What’s it for?
A warehouse or factory might have 100 different sensors that have to be analyzed separately to figure out whether the entire system is working as intended. Newton can understand and interpret all of the sensors at the same time, giving a better overview of how everything’s working together. Another benefit: You can ask Newton questions in plain English without needing much technical expertise.
How does it work?
- Newton collects data from radar, motion sensors, and chemical and environmental trackers
- It uses an LLM to combine each of those data streams into a cohesive package
- It translates that data into text, visualizations, or code so it’s easy to understand
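The three steps above amount to a "sensors in, plain language out" pipeline. A hypothetical sketch (not Archetype’s actual API — the function, sensor names, and readings are all invented for illustration) of how heterogeneous sensor streams might be flattened into a single text prompt an LLM can reason over:

```python
def build_sensor_prompt(readings: dict[str, dict[str, float]], question: str) -> str:
    """Flatten readings from several sensor types into one plain-text prompt."""
    lines = ["Current sensor readings:"]
    for sensor, values in readings.items():
        stats = ", ".join(f"{k}={v}" for k, v in values.items())
        lines.append(f"- {sensor}: {stats}")
    lines.append(f"\nQuestion: {question}")
    return "\n".join(lines)

# Invented example: three unrelated sensor feeds from one facility.
readings = {
    "radar": {"objects_detected": 3, "nearest_m": 12.5},
    "motion": {"events_per_min": 42},
    "air_quality": {"co2_ppm": 640, "voc_ppb": 115},
}
prompt = build_sensor_prompt(readings, "Is the packing line operating normally?")
print(prompt)
```

The resulting prompt would then be sent to a language model, whose answer is the plain-English system overview the article describes — no per-sensor analysis required of the person asking.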
Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data — from venturebeat.com by Michael Nuñez; via Tom Barrett’s Promptcraft e-newsletter
Apple has entered into a significant agreement with stock photography provider Shutterstock to license millions of images for training its artificial intelligence models. According to a Reuters report, the deal is estimated to be worth between $25 million and $50 million, placing Apple among several tech giants racing to secure vast troves of data to power their AI systems.
AWS, Educause partner on generative AI readiness tool — from edscoop.com by Skylar Rispens
Amazon Web Services and the nonprofit Educause announced a new tool designed to help higher education institutions gauge their readiness to adopt generative artificial intelligence.
Amazon Web Services and the nonprofit Educause on Monday announced they’ve teamed up to develop a tool that assesses how ready higher education institutions are to adopt generative artificial intelligence.
Through a series of curated questions about institutional strategy, governance, capacity and expertise, AWS and Educause claim their assessment can point to ways that operations can be improved before generative AI is adopted to support students and staff.
“Generative AI will transform how educators engage students inside and outside the classroom, with personalized education and accessible experiences that provide increased student support and drive better learning outcomes,” Kim Majerus, vice president of global education and U.S. state and local government at AWS, said in a press release. “This assessment is a practical tool to help colleges and universities prepare their institutions to maximize this technology and support students throughout their higher ed journey.”
Speaking of AI and our learning ecosystems, also see:
Gen Z Wants AI Skills And Businesses Want Workers Who Can Apply AI: Higher Education Can Help — from forbes.com by Bruce Dahlgren
At a moment when the value of higher education has come under increasing scrutiny, institutions around the world can be exactly what learners and employers both need. To meet the needs of a rapidly changing job market and equip learners with the technical and ethical direction needed to thrive, institutions should familiarize students with the use of AI and nurture the innately human skills needed to apply it ethically. Failing to do so can create enormous risk for higher education, business and society.
What is AI literacy?
To effectively utilize generative AI, learners will need to grasp the appropriate use cases for these tools, understand when their use presents significant downside risk, and learn to recognize abuse to separate fact from fiction. AI literacy is a deeply human capacity. The critical thinking and communication skills required are muscles that need repeated training to be developed and maintained.
The University Student’s Guide To Ethical AI Use — from studocu.com; with thanks to Jervise Penton at 6XD Media Group for this resource
This comprehensive guide offers:
- Up-to-date statistics on the current state of AI in universities, how institutions and students are currently using artificial intelligence
- An overview of popular AI tools used in universities and their limitations as study tools
- Tips on how to ethically use AI and how to maximize its capabilities for students
- Existing punishments and penalties for cheating using AI
- A checklist of questions to ask yourself, before, during, and after an assignment to ensure ethical use
Some of the key facts you might find interesting are:
- The total value of AI in education is estimated to reach $53.68 billion by the end of 2032.
- 68% of students say using AI has impacted their academic performance positively.
- Educators using AI tools say the technology helps speed up their grading process by as much as 75%.