So, what should you do? You really need to start trying out these AI tools. They’re getting cheaper and better, and they can genuinely help save time or make work easier—ignoring them is like ignoring smartphones ten years ago.
Just keep two big things in mind:
Making the next super-smart AI costs a crazy amount of money and uses tons of power (seriously, they’re buying nuclear plants and pushing coal again!).
Companies are still figuring out how to make AI perfectly safe and fair, because it still makes mistakes.
So, use the tools, find what helps you, but don’t trust them completely.
We’re building this plane mid-flight, and Stanford’s report card is just another confirmation that we desperately need better safety checks before we hit major turbulence.
I had the privilege of moderating a discussion between Josh Eyler and Robert Cummings about the future of AI in education at the University of Mississippi’s recent AI Winter Institute for Teachers. I work alongside both in faculty development here at the University of Mississippi. Josh’s position on AI sparked a great deal of debate on social media:
…
To make my position clear about the current AI in education discourse I want to highlight several things under an umbrella of “it’s very complicated.”
Most importantly, we all deserve some grace here. Dealing with generative AI in education isn’t something any of us asked for. It isn’t normal. It isn’t fixable by purchasing a tool or telling faculty to simply ‘prefer not to’ use AI. It is and will remain unavoidable for virtually every discipline taught at our institutions.
If one good thing happens because of generative AI let it be that it helps us clearly see how truly complicated our existing relationships with machines are now. As painful as this moment is, it might be what we need to help prepare us for a future where machines that mimic reasoning and human emotion refuse to be ignored.
“AI tutoring shows stunning results.” See the article below.
The learning gains were striking—about 0.3 standard deviations. To put this into perspective, this is equivalent to nearly two years of typical learning in just six weeks. When we compared these results to a database of education interventions studied through randomized controlled trials in the developing world, our program outperformed 80% of them, including some of the most cost-effective strategies like structured pedagogy and teaching at the right level. This achievement is particularly remarkable given the short duration of the program and the likelihood that our evaluation design underestimated the true impact.
… Our evaluation demonstrates the transformative potential of generative AI in classrooms, especially in developing contexts. To our knowledge, this is the first study to assess the impact of generative AI as a virtual tutor in such settings, building on promising evidence from other contexts and formats; for example, on AI in coding classes, AI and learning in one school in Turkey, teaching math with AI (an example through WhatsApp in Ghana), and AI as a homework tutor.
Why it matters: This represents one of the first rigorous studies showing major real-world impacts in a developing nation. The key appears to be using AI as a complement to teachers rather than a replacement — and results suggest that AI tutoring could help address the global learning crisis, particularly in regions with teacher shortages.
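The "0.3 standard deviations" quoted above is a standardized effect size (commonly reported as Cohen's d): the difference between the treatment and control group means divided by their pooled standard deviation. A minimal sketch with made-up scores (illustrative numbers, not the study's data) shows how such a figure is computed:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled sample SD."""
    n_t, n_c = len(treatment), len(control)
    var_t, var_c = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores (0-100) -- illustrative, NOT the study's data.
control   = [45, 55, 60, 40, 50, 65, 35, 50]   # mean 50, SD 10
treatment = [48, 58, 63, 43, 53, 68, 38, 53]   # each score shifted up by 3
print(round(cohens_d(treatment, control), 2))  # -> 0.3
```

An effect of 0.3 SD means the average treated student outscored roughly 62% of the control group, which is why researchers translate it into "years of typical learning" for a lay audience.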
Other items re: AI in our learning ecosystems:
Will AI revolutionise marking? — from timeshighereducation.com by Rohim Mohammed
Artificial intelligence has the potential to improve speed, consistency and detail in feedback for educators grading students’ assignments, writes Rohim Mohammed. Here he lists the pros and cons based on his experience.
Personal AI — from michelleweise.substack.com by Dr. Michelle Weise
“Personalized” Doesn’t Have To Be a Buzzword
Today, however, is a different kind of moment. GenAI is now rapidly evolving to the point where we may be able to imagine a new way forward. We can begin to imagine solutions truly tailored for each of us as individuals: our own personal AI (pAI). pAI could unify various silos of information to construct far richer, more holistic, and more dynamic views of ourselves as lifelong learners. A pAI could become our own personal career navigator, skills coach, and storytelling agent. Three particular areas emerge when we think about tapping into the richness of our own data:
In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.
Key moments:
00:05 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 Rapid AI development raises questions about controlling its global application.
33:31 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.
The team has summarized their primary contributions as follows.
The team offers the first instance of a simple, scalable oversight technique that helps humans detect problems in real-world RLHF data far more thoroughly.
Within the ChatGPT and CriticGPT training pools, the team found that critiques produced by CriticGPT catch more inserted bugs and are preferred over those written by human contractors.
This research indicates that teams pairing critic models with human contractors generate more thorough critiques than human contractors working alone. Compared with reviews generated exclusively by models, this partnership also lowers the incidence of hallucinations.
The study introduces Force Sampling Beam Search (FSBS), an inference-time sampling and scoring technique that balances the trade-off between minimizing bogus concerns and discovering genuine faults in LLM-generated critiques.
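As a rough illustration of the idea behind an inference-time sample-and-score scheme like FSBS: draw several candidate critiques, then pick the one maximizing a reward-model score plus a tunable bonus for comprehensiveness. This is a simplified sketch, not OpenAI's implementation; the scoring function, field names, and trade-off parameter are all hypothetical stand-ins.

```python
import random

def sample_critiques(prompt, n, seed=0):
    """Stand-in for drawing n candidate critiques from a critic model."""
    rng = random.Random(seed)
    return [{"text": f"critique-{i}",
             "rm_score": rng.random(),         # mock reward-model quality score
             "num_issues": rng.randint(1, 5)}  # how many problems the critique flags
            for i in range(n)]

def select_critique(candidates, length_bonus=0.1):
    """Pick the candidate maximizing rm_score + length_bonus * num_issues.
    A larger length_bonus favors comprehensive critiques (more real bugs found)
    at the cost of more spurious nitpicks; a smaller one favors precision."""
    return max(candidates, key=lambda c: c["rm_score"] + length_bonus * c["num_issues"])

best = select_critique(sample_critiques("review this diff", n=8))
print(best["text"])
```

The single `length_bonus` knob is the point: it makes the precision-versus-recall trade-off in critiques an explicit, tunable inference-time choice rather than a fixed property of the trained model.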
a16z-backed Character.AI said today that it is now allowing users to talk to AI characters over calls. The feature currently supports multiple languages, including English, Spanish, Portuguese, Russian, Korean, Japanese and Chinese.
The startup tested the calling feature ahead of today’s public launch. During that time, it said that more than 3 million users had made over 20 million calls. The company also noted that calls with AI characters can be useful for practicing language skills, giving mock interviews, or adding them to the gameplay of role-playing games.
Google Translate can come in handy when you’re traveling or communicating with someone who speaks another language, and thanks to a new update, you can now connect with some 614 million more people. Google is adding 110 new languages to its Translate tool using its AI PaLM 2 large language model (LLM), which brings the total of supported languages to nearly 250. This follows the 24 languages added in 2022, including Indigenous languages of the Americas as well as those spoken across Africa and central Asia.
Gen-3 Alpha Text to Video is now available to everyone.
A new frontier for high-fidelity, fast and controllable video generation.
As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme. …
2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch. As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.
With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.
Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.
Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.
The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?
…
The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.
In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.
I have a column about AI and law in @thetimes today. Here are some brief extracts:
“The main social benefit of legal AI will not be in making lawyers more efficient but in empowering people who are not lawyers to handle their own legal affairs” …
Here, I’ll explore some ways that engineers and lawyers see the world differently based on their strengths and experiences, and I’ll explain how they can better communicate to build better software products, especially in AI, for attorneys. Ideally, this will lead to happier lawyers and more satisfied clients.
A groundbreaking legal tech startup, Zuputo, is set to reshape the legal landscape across Africa by making legal services more accessible, affordable, and user-friendly.
Founded by Jessie Abugre and Nana Adwoa Amponsah-Mensah, this women-led venture has become synonymous with simplicity and efficiency in legal solutions.
Under LeBlanc’s direction, SNHU has transformed from a small regional university to an internationally known leader in higher education, having grown from 2,500 students to more than 225,000 learners, making SNHU the largest nonprofit provider of higher education in the country. With his vision to make higher education more accessible, more than 200,000 students have earned their degrees during LeBlanc’s tenure at SNHU. The university also ranks among the most innovative universities in the country and as a top employer nationwide.
When the concept of student evaluations was first developed in the 1920s, by the psychologists Herman H. Remmers, at Purdue University, and Edwin R. Guthrie, at the University of Washington, administrators were never meant to have access to them. Remmers and Guthrie saw evaluations as modest tools for pedagogical improvement, not criteria of administrative judgment. In the 1950s, Guthrie warned about the misuse of evaluations. But no one listened. Instead, as Stroebe writes, they “soon became valued sources of information for university administrators, who used them as a basis for decisions about merit increases and promotion.” Is it too late to return to Remmers and Guthrie’s original conception?
6. ChatGPT’s hype will fade, as a new generation of tailor-made bots rises up
11. We’ll finally turn the corner on teacher pay in 2024
21. Employers will combat job applicants’ use of AI with…more AI
31. Universities will view the creator economy as a viable career path
Artificial intelligence is disrupting higher education — from itweb.co.za by Rennie Naidoo; via GSV
Traditional contact universities need to adapt faster and find creative ways of exploring and exploiting AI, or lose their dominant position.
Higher education professionals have a responsibility to shape AI as a force for good.
Introducing Canva’s biggest education launch — from canva.com
We’re thrilled to unveil our biggest education product launch ever. Today, we’re introducing a whole new suite of products that turn Canva into the all-in-one classroom tool educators have been waiting for.
Also see Canva for Education. Create and personalize lesson plans, infographics, posters, video, and more. 100% free for teachers and students at eligible schools.
ChatGPT and generative AI: 25 applications to support student engagement — from timeshighereducation.com by Seb Dianati and Suman Laudari
In the fourth part of their series looking at 100 ways to use ChatGPT in higher education, Seb Dianati and Suman Laudari share 25 prompts for the AI tool to boost student engagement.
There are two ways to use ChatGPT — from theneurondaily.com
Type to it.
Talk to it (new).
… Since then, we’ve looked to it for a variety of real-world business advice. For example, Prof Ethan Mollick posted a great guide to using ChatGPT-4 with voice as a negotiation instructor.
In a similar fashion, you can consult ChatGPT with voice for feedback on:
Job interviews.
Team meetings.
Business presentations.
With a prompt, GPT-4 with voice does a pretty good job of acting as a negotiation simulator/instructor. It is not all the way there, but as someone who builds educational simulations, I can tell you this is already impressively far along towards an effective teaching tool. … pic.twitter.com/IphPHF95cL
Via The Rundown: Google is using AI to analyze the company’s Maps data and suggest adjustments to traffic light timing — aiming to cut driver waits, stops, and emissions.
The camera never lies. Except, of course, it does – and seemingly more often with each passing day.
In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.
Now, a new breed of smartphone tools powered by artificial intelligence (AI) are adding to the debate about what it means to photograph reality.
Google’s latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people’s expressions in photographs.
Still using AI to help you mark a student’s work?
Mark a full class’s worth with one prompt.
Here’s the Whole Class Feedback Giant Prompt.
Comment & retweet & I’ll DM it to you.
Discover why thousands of teachers subscribe to the Sunday AI Educator: https://t.co/ivpXYyWNzN
— Dan Fitzpatrick – The AI Educator (@theaieducatorX) October 22, 2023
Dr. Chris Dede, of Harvard University and Co-PI of the National AI Institute for Adult Learning and Online Education, spoke about the differences between knowledge and wisdom in AI-human interactions in a keynote address at the 2022 Empowering Learners for the Age of AI conference. He drew a parallel between the Star Trek: The Next Generation characters Data and Picard during complex problem-solving: while Data offers the knowledge and information, Captain Picard offers the wisdom and context that come from a leadership mantle, and determines their relevance, timing, and application.
This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this:
“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”
We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.
How to Make an AI-Generated Film — from heatherbcooper.substack.com by Heather Cooper
Plus, Midjourney finally has a new upscale tool!
From DSC: I’m not excited about this, as I can’t help but wonder…how long before the militaries of the world introduce this into their warfare schemes and strategies?
The idea was simple: ask sixty community leaders to fan across the city’s public schools, follow in the footsteps of its youngest citizens, and report back on what they saw.
Fifty-nine said yes. What they found, Pickering says, “were kids with dead eyes. Kids not engaged. And kids who knew that school was a game – and the game was rigged.”
So the Billy Madison team used its findings to design a prospective high school that would actually produce what its participants said they wanted to see:
Let kids pursue their passions. Give them real work to do. And get them out of the school building, and in the community.
This thought-provoking discussion delves into the topic of system replacement in education. Is school transformation possible without replacing the existing education system? Joining [Michael] to discuss the question are Thomas Arnett of the Christensen Institute and Kelly Young of Education Reimagined.
In an educational landscape that constantly seeks marginal improvements, [Michael’s] guests speak to the importance of embracing new value networks that support innovative approaches to learning. They bring to light the issue of programs that remain niche solutions, rather than robust, learner-centered alternatives. In exploring the concept of value networks, [Michael’s] guests challenge the notion of transforming individual schools or districts alone. They argue for the creation of a new value network to truly revolutionize the education system. Of course, they admit that achieving this is no small feat, as it requires a paradigm shift in mindset and a careful balance between innovation and existing structures. In this conversation, we wrestle with the full implications of their findings and more.
From DSC: This reminds me of the importance of TrimTab Groups who invent or test out something new apart from the mothership.
The 2023 GEM Report on technology and education explores these debates, examining education challenges to which appropriate use of technology can offer solutions (access, equity and inclusion; quality; technology advancement; system management), while recognizing that many solutions proposed may also be detrimental.
The report also explores three system-wide conditions (access to technology, governance regulation, and teacher preparation) that need to be met for any technology in education to reach its full potential.
Bloom Academy is the first and only self-directed learning center in Las Vegas – microschooling as a true, nontraditional, and permissionless education alternative. 5 Questions with Microschool Founders Sarah & Yamila. https://t.co/RvxtwGXvkZ
Since last spring, journalists at The 74 have been crossing the U.S. as part of our 2023 High School Road Trip. It has embraced both emerging and established high school models, taking us to 13 schools from Rhode Island to California, Arizona to South Carolina, and in between.
It has brought us face-to-face with innovation, with programs that promote everything from nursing to aerospace to maritime-themed careers.
At each school, educators seem to be asking one key question: What if we could start over and try something totally new?
What we’ve found represents just a small sample of the incredible diversity that U.S. high schools now offer, but we’re noticing a few striking similarities that educators in these schools, free to experiment with new models, now share. Here are the top eight:
What does it take to empower parents and decentralize schooling? Why is a diversity of school models important to parents? Are we at a tipping point?
Several meta-analyses, which summarize the evidence from many studies, have found higher achievement when students take quizzes instead of, say, reviewing notes or rereading a book chapter. “There’s decades and decades of research showing that taking practice tests will actually improve your learning,” said David Shanks, a professor of psychology and deputy dean of the Faculty of Brain Sciences at University College London.
Still, many students get overwhelmed during tests. Shanks and a team of four researchers wanted to find out whether quizzes exacerbate test anxiety. The team collected 24 studies that measured students’ test anxiety and found that, on average, practice tests and quizzes not only improved academic achievement, but also ended up reducing test anxiety. Their meta-analysis was published in Educational Psychology Review in August 2023.
The End of Scantron Tests — from theatlantic.com by Matteo Wong
Machine-graded bubble sheets are the defining feature of American schools. Today’s kindergartners may never have to fill one out.
There are several possible reasons why pretesting worked in this study.
Students paid more attention to the pretested material during the lecture.
The pretest activated prior knowledge (some of them are clearly doing a lot of prework), and allowed them to encode the new information more deeply.
They were doing a lot of studying of the pretested information outside of class.
There are some great spaced retrieval effects going on. That is, students saw the material before lecture, they took a quiz on it during the pretest, then later they reviewed or quizzed themselves on that same material again during self-study.
Partnership with American Journalism Project to support local news — from openai.com; via The Rundown AI
A new $5+ million partnership aims to explore ways the development of artificial intelligence (AI) can support a thriving, innovative local news field, and ensure local news organizations shape the future of this emerging technology.
SEC’s Gensler Warns AI Risks Financial Stability — from bloomberg.com by Lydia Beyoud; via The Brainyacts
SEC on lookout for fraud, conflicts of interest, chair says | Gensler cautions companies touting AI in corporate docs
The recent petition from Kenyan workers who engage in content moderation for OpenAI’s ChatGPT, via the intermediary company Sama, has opened a new discussion in the global legal market. This dialogue surrounds the concept of “harmful and dangerous technology work” and its implications for laws and regulations within the expansive field of AI development and deployment.
The petition, asking for investigations into the working conditions and operations of big tech companies outsourcing services in Kenya, is notable not just for its immediate context but also for the broader legal issues it raises. Central among these is the notion of “harmful and dangerous technology work,” a term that encapsulates the uniquely modern form of labor involved in developing and ensuring the safety of AI systems.
The most junior data labelers, or agents, earned a basic salary of 21,000 Kenyan shillings ($170) per month, with monthly bonuses and commissions for meeting performance targets that could elevate their hourly rate to just $1.44 – a far cry from the $12.50 hourly rate that OpenAI paid Sama for their work. This discrepancy raises crucial questions about the fair distribution of economic benefits in the AI value chain.
It’s become difficult to keep track of AI tools as they’re released, so I’ve decided to create a running list of tools as I find out about them. The list is in alphabetical order, even though I’ve seen others use classification systems. Although it’s not good in blogging land to update posts, I’ll change the date every time I update this list. Please feel free to respond with your comments about any of these, as well as AI tools you use that aren’t on the list. I’ll post your comments next to a tool when appropriate. Thanks.
Claude has surprising capabilities, including a couple you won’t find in the free version of ChatGPT.
Since this new AI bot launched on July 11, I’ve found Claude useful for summarizing long transcripts, clarifying complex writings, and generating lists of ideas and questions. It also helps me put unstructured notes into orderly tables. For some things, I prefer Claude to ChatGPT. Read on for Claude’s strengths and limitations, and ideas for using it creatively.
Claude’s free version allows you to attach documents for analysis. ChatGPT’s doesn’t.
Large language models like GPT-4 have taken the world by storm thanks to their astonishing command of natural language. Yet the most significant long-term opportunity for LLMs will entail an entirely different type of language: the language of biology.
In the near term, the most compelling opportunity to apply large language models in the life sciences is to design novel proteins.